
High Performance Git

https://gitperf.com/
Git is the industry standard because, for what it gives you, it's a remarkably robust and simple program to use. We're all vaguely aware that the internals are complex, but the UX is clean and usable enough that the complexity usually doesn't leak out.

But the day this breaks down and I have to deal with Bloom filters, packfiles, tending the git garbage collector, or rerere cleanup is the day I switch our codebase to a centralized VCS.

This stuff is cool to learn about, but it's five layers removed from anything I want to be thinking about in my day-to-day work.
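For what it's worth, recent Git can largely keep this bookkeeping out of sight on its own. A sketch, assuming Git 2.31 or newer (the commands below are real, but check your version's man pages before relying on them):

```shell
# Delegate the gc/packfile/commit-graph bookkeeping to Git's own
# scheduler so it rarely surfaces in daily work (Git >= 2.31):
git maintenance start        # register this repo for background maintenance

# Or run the individual pieces on demand:
git gc                                                # repack objects, prune loose ones
git commit-graph write --reachable --changed-paths    # commit-graph with Bloom filters
git rerere gc                                         # expire stale rerere resolutions
```

With `git maintenance start` registered, the on-demand commands become largely unnecessary; Git schedules equivalent tasks itself.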

I think it's the other way around. Git is pretty simple internally, and its UI is just knobs and levers reaching into that simple, reliable internal structure. That's why it seems like a mess to some people: they want a "do what I want" button (and everyone's needs are different). To others it's clean: open the throttle and the engine revs.
I'm pretty sure git is the industry standard almost entirely because GitHub exists. And I very much disagree that the UX is clean. The CLI is more than a bit of a mess.
I never faced git performance issues when working with code. Guess my repos weren't big. But when I tried to use git as a versioned database of changes in my pet project, I learned a lot about indexes, compacting, etc. The article covers a lot and is very helpful!
Surprise, surprise, another piece of LLM-generated slop on the front page of HN.

From chapter 1:

> When Git slows down, engineers adapt in bad ways. They stop asking questions the history could answer. They batch work to avoid sync cost. They keep messy branches alive longer, postpone cleanup, and treat the repository like something slightly dangerous.

From https://gitperf.com/epilogue.html

> Once machines start producing code at machine cadence, the model from this book does not break. What changes is the pace: more branches, more commits, more automation, and more surrounding metadata. The traffic gets louder, and the features that keep Git legible under pressure move from "nice to have" to "essential."

> These stop looking like side optimizations. They are what keep machine-scale Git traffic usable.

On a similar note, though not performance-focused, I can wholeheartedly recommend Building Git[0], which walks you through building your own git clone in Ruby (although the language is immaterial).

[0]: https://shop.jcoglan.com/building-git/

I'm only on chapter two and it's already explained some plumbing details that I had somehow missed all these years. This is great.
> LFS adds its own operational overhead.

It seemingly adds seconds to every remote-touching command, even on a very small repo.

What's worse is that for about half a year now, I've had to authenticate my ed25519-sk key with my YubiKey three times (!) when using LFS. On every push.
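There are a few knobs that can blunt the overhead described above. A sketch, assuming git-lfs is installed; the repository URL and include pattern are placeholders, and the flags should be checked against your git-lfs version:

```shell
# Clone without downloading any LFS blobs (pointer files only):
GIT_LFS_SKIP_SMUDGE=1 git clone https://example.com/big-repo.git

# Later, fetch only the blobs you actually need (pattern is hypothetical):
git lfs pull --include="assets/needed/*"

# Parallelize LFS transfers on push/pull:
git config lfs.concurrenttransfers 8
```

None of this fixes repeated credential prompts, but skipping the smudge filter at clone time at least keeps LFS off the critical path until a blob is genuinely needed.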
That they didn't go with git-annex was such a fit of NIH, and such a mistake.
I've always wanted to see a book that describes git for the common man and gives them tons of examples for how to use it to do productive things.

Even for a small office, git can be immensely useful. Entire production-line workflows can be implemented with git .. if only folks would learn to use it productively.

It's not just for development. Writers can use it productively. Accountants too.

It always kind of irks me that Git hasn't been folded into the OS front-end UI by any of the OS vendors .. it'd be so revolutionary to give common folks an easy way to manage the timeline/history of their computer use.

The obvious reason is that most file formats used by writers, accountants, etc. are binary, and binary files don't benefit much from git.
I've been wanting to ask this:

Why isn't

    git clone --depth 1 ... 
the default?

I would guess that for at least 90% of the repos I clone, I just want to install something. Even for the rest, I might hack on the code but seldom look into the history. If I do, I can run `git fetch --unshallow` at that point, and save the bandwidth and disk space the rest of the time.
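The workflow in the question can be sketched like this (the repository URL is a placeholder):

```shell
# Shallow clone: fetch only the tip commit, no history.
git clone --depth 1 https://example.com/some/repo.git
cd repo

# Only the single grafted commit is visible:
git log --oneline

# Later, if history is actually needed, deepen on demand:
git fetch --unshallow
```

Note that a plain `git fetch` on a shallow clone does not pull the full history; `--unshallow` (or `--depth=<n>` to deepen incrementally) is what converts it into a complete clone.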

A question: why is git involved at all in this? You don't want a repository.
What if that's only you? Git isn't made only for those who "just want to install something"
It's not the default because that'd be counterproductive for developers who use git with larger repositories, which is how git started life in the first place. A depth-1 clone would be entirely useless for Linux kernel developers, for example, if it were the default ..
ted nyman: #1 most knowledgable college football fan in sf

and also git

which makes more sense i guess

Of most things, really, he was on Jeopardy for a reason! https://thejeopardyfan.com/tag/ted-nyman
The text reads like an LLM was involved in this.