I do agree with the feeling that Redis started to add more and more features as time went on. A lot of that is because the time and cost to stand up a dedicated service (like Kafka, RabbitMQ, etc.) was higher than just putting more data into Redis.
While I agree with the theme that Redis has become more and more complicated and has had more features added to it as part of a monetization push by Redis Inc., it's understandable.
Especially since there are plenty of other posts on HN titled "Just use Postgres" for everything. So, why does Postgres get a pass on being a message queue, distributed lock manager, JSON document store, and vector database, while Redis is not allowed to?
For that matter, Redis/Valkey is relatively easy to stand up... I almost don't give it a second thought to stand up Redis, and I'll generally reach for an MQ library that's using Redis over the DBMS more often than not... mostly because, by comparison, setting up Rabbit or Kafka gets a lot more complex.
What I don't always "get" is Redis as a persistent primary database, as with the "LamerNews" codebase (which EchoJS uses). That use case still feels a bit alien to me, and I'm surprised it works as well as it does.
I think that if you develop your application from scratch and directly use Redis as it was intended (data structures over the network) then it makes sense. I've done it for some applications and it's quite nice.
But you have to make the choice to skip using a relational database, and a lot of application frameworks make the relational database the easy default out of the box. Using Redis directly becomes a conscious choice, and sometimes for a CRUD app it's easier to just use the RDBMS.
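To make "data structures over the network" concrete, here is a minimal sketch using Python's redis-py client (all key names and values here are made up for illustration): a hash as a record, a sorted set as a leaderboard, and a list as a simple work queue, which is roughly what the Redis-backed MQ libraries mentioned above build on.

    import json
    import redis

    r = redis.Redis(host="localhost", port=6379)

    # A hash as a record: no ORM, no schema, just a keyed structure.
    r.hset("user:1", mapping={"name": "alice", "karma": "99"})
    print(r.hgetall("user:1"))

    # A sorted set as a leaderboard, ranked by score.
    r.zadd("leaderboard", {"user:1": 99, "user:2": 42})
    print(r.zrevrange("leaderboard", 0, 9, withscores=True))

    # A list as a work queue: LPUSH to enqueue, BRPOP to block for a job.
    # This push/pop pair is the core of most Redis-backed MQ libraries.
    r.lpush("jobs", json.dumps({"task": "send_email", "to": "user:1"}))
    job = r.brpop("jobs", timeout=5)  # returns (queue_name, payload) or None
    if job:
        print(json.loads(job[1]))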
I have also moved my git repositories to a self-hosted NUC. I have not yet bothered with a HTTP frontend to share it with the world, mostly because I don't want to provide AI scrapers with content and don't want to put the work in to block them.
It's a shame that all these companies that benefited from open source have poisoned the industry like this
I use Gitea on my NUC; the hardware was used and cost like 50 quid. It has been running for 3 years! If you lock it down so that it is only available on the LAN and not the internet, it is a solid, timeless experience.
I also have a self-hosted Forgejo on a Pi (but probably not for much longer) that acts as a mirror of my GitHub. The main issues I keep facing are:
- Repositories seem to mirror fine for a few weeks and then stop. Pretty useless. I have a PAT for it that does not expire, and yet Forgejo seems to claim otherwise, despite the token working elsewhere when I test it.
- Sometimes there is nothing in the logs, sometimes it's the database being locked for some reason. The only thing that uses the database is Forgejo.
- So far I haven't been able to tell if this is Forgejo itself, crappy SD IO on the Pi causing the database locks, or Forgejo just being bad at mirroring. (See the sketch below for how easily SQLite can throw "database is locked" even within a single app.)
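For what it's worth, if that Forgejo instance is on an SQLite backend, "database is locked" doesn't require a second process: two connections inside the same app contending for the write lock are enough, and slow SD-card writes just widen the window. A minimal, purely illustrative sketch of the mechanism (the path and timeouts are made up):

    import sqlite3

    # Two connections from the same process to the same file, with a
    # short busy timeout so the contention surfaces quickly.
    a = sqlite3.connect("/tmp/demo.db", timeout=0.1, isolation_level=None)
    b = sqlite3.connect("/tmp/demo.db", timeout=0.1, isolation_level=None)

    a.execute("CREATE TABLE IF NOT EXISTS t (x)")
    a.execute("BEGIN IMMEDIATE")           # connection A takes the write lock...
    a.execute("INSERT INTO t VALUES (1)")  # ...and holds it mid-transaction

    try:
        b.execute("BEGIN IMMEDIATE")       # connection B times out waiting
    except sqlite3.OperationalError as e:
        print(e)                           # -> "database is locked"

    a.execute("COMMIT")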
Could be, yes. I'm using an M.2 SSD with a USB adapter and that seems to work well with my RPi 4. It wasn't that expensive, plus it should last way longer than an SD card.
Why would someone gladly provide their work as open source but draw the line at AI reading it and using that knowledge to help more programmers later? It makes no sense to me. I actively want all of my code to be read by AI.
+ they don’t want to help train a model that might ultimately put them out of work.
I don't personally agree that AI is taking our jobs, but I do think it's still a reasonable concern others have, so I would sympathise if that were the rationale.
Doesn't seem inconsistent to me. I may want my code to be open source so that other humans can read it, understand it, build on it, and contribute to it.
I may also have a philosophical opposition to generative AI at the same time - there are plenty of environmental, societal, and intellectual-property costs that some may find unconscionable.
It's kind of breaking the social contract. Licences were drafted, conferences were held, and endless flamewars tried to codify what it means to collaboratively build, distribute, own and use open software.
Then came the model trainers, ignoring the entire discourse, reasoning: "if I can download it, it's mine to use". And then basically selling the resulting tech back to the community.
Not unlike big tech extracting money from open source, but at least big tech usually (somewhat maliciously) complied with the license.
1. Many teachers don't publish, and those that do publish often still reserve their best for their students.
2. Open-source development sometimes operates like esoteric societies: you publish enough that people with the desire and insight become interested and engaged - both a filter and an invitation. So you can tailor the community you like.
Both depend on people really valuing these mutually-constitutive relationships.
My observation is that the generations raised on social media and gaming are happy enough with those substitutes, and view publishing their best work as a kind of self-promotion and participation in a larger, diffuse community (without a real role in governance). And they're right: expecting more personal communities now is a severely limiting factor, and AI removes most of the incentives to participate in someone else's project.
Open source is not necessarily about helping any programmer, for any endeavor. Actually, my code targets the end users, not fellow programmers.
I don't want my code to be used to build proprietary software. I want code built on top of mine to respect its users. I choose the AGPL for this reason.
I also don't mind the attribution.
The LLMs don't care about any of that, and they do it while hogging resources, creating a lot of waste and pollution, and disrupting society for unclear benefits. No thanks.
> It's a shame that all these companies that benefited from open source have poisoned the industry like this
Open Source and the OSI are an industry plant. Look at who sponsors it.
The monopoly hyperscaler conglomerates get free labor and use it to build the world we despise: tracking panopticons, phones we can't install things on, device attestation, browser monoculture with no adblock, etc. etc.
Google made people fall in love with BSD/MIT, and look what it did.
Just a few of the classic plays:
"That Belongs to Us Now" - (1) vendors build stuff like Elasticsearch and Redis, (2) the hyperscalers yoink it into their proprietary offerings and take all the profits, (3) original authors and their companies starve.
"Embrace, Extend, Extinguish" - (1) vendors take an open source project like KTHML or Linux and build their version, (2) they flood the market with their offering, pushing out the competitors, (3) they use anti-competitive means to get their thing in front of all eyeballs, (4) once they have marketshare, they do evil things like add tracking and remove freedoms
Open Source needs to be replaced with "freedom for the people, companies must pay": source-available shareware with anti-hyperscaler teeth.
Even Richard Stallman's licenses are not strong enough. CC BY-NC-SA is better.
"Pure" Open Source is corporate welfare. It was a mistake. It enabled giants to hang us with our own rope.
> Open Source and the OSI are an industry plant. Look at who sponsors it.
This is ignorant of the history of open source software. Software was open long before it was subsidized by large corporations.
"Computer software was created in the early half of the 20th century.[2][3][4] In the 1950s and into the 1960s, almost all softwares were produced by academics and corporate researchers working in collaboration,[5] often shared as public-domain software." https://en.wikipedia.org/wiki/History_of_free_and_open-sourc...
You're talking about a different thing to OP. OP is talking about the OSI and the specific incarnation of 'open-source' that came with it, you are talking about the more general social pattern of open collaboration.
One problem with all of these licenses is that however the code is available, we can’t practically prevent the LLM companies from training on it (especially given that they don’t respect IP laws anyway). No idea what to do about this. Wonder if communities will have to move to some kind of fractured system where source is gated behind a login.
Rough times out there for transparent organizations.
Why can't others just be "others I disagree with"? Why does it have to be some grand conspiracy?
I'm all for open source, most of what I do is released as MIT, almost never "Free Software", still doing the same thing since LLMs appeared, regardless of everything else.
I'm a real person, have nothing to do with OSI but willing to explain my position, as long as you take it as real opinions held by a real person, instead of going into conspiracy theory land. Ask me anything, I'll give you my honest perspective.
But our 25-year lax regulatory environment has created a world where the largest players abuse consumers and the competitive ecosystem.
Open source is one of the many strategies these companies have abused to create grave harm to our society. It's let them get further with our support and with less expenditure. It's given them an ethical smoke screen.
- Social media algorithms are the tobacco products of our century. Kids are growing up with a distorted sense of self worth, people are getting angrier and more polarized, and all of it is highly addictive - all to fuel corporate profits.
- The most popular and important computer form factor is controlled by a duopoly and we can't even own / repair / install / have rights to our devices.
- All hardware is becoming locked to device attestation, meanwhile companies are lobbying for "age verification" (read: full-on identity tracking).
- Distribution is being locked to monopolies. 92% of "URL bars" are owned by one company, and typing something into a computer goes through a bidding war protection racket.
I can go on and on about it. I shouldn't even have to. You know this.
A lot of this is because of a lack of proper competition. Since the DOJ / FTC / EU / ASEAN are being toothless (the latter two are slowly waking up), the next best thing we can do is take away their ability to abuse open source. Stop letting them use our work against us and the rest of the population.
I share your worries, but I don't blame open source for them. They would have done the same (or worse) without it.
Also, open source is one more justification for increasing taxes on the very rich. At this point all of them have built their fortunes on it, just like they do on the rest of public infrastructure.
I find non-commercial licenses too extreme. People selling your free software, or using it commercially so long as they respect the license, is a good thing.
So, they implemented a git client in Zig that had some significant speedups for their use case. However:
> The git CLI test suite consists of 21,329 individual assertions for various git subcommands (that way we can be certain ziggit does suffice as a drop-in replacement for git).
<snip>
> While we only got through part of the overall test suite, that's still the equivalent of a month's worth of straight developer work (again, without sleep or eating factored in).
I have heard that you can speed up your favorite compression algorithm by 1000x, if you are not so concerned about what happens when you try to decompress it.
I ran the test suite specifically for git's CLI as that was the target I wanted to build towards (Anthropic's C compiler failed to make an operating system since that was never in their original prompts/goals)
The way it's organized, there are "scripts" that encompass different commands (status, diff, commit, etc.); each of these scripts contains several hundred distinct assertions covering flags and arguments.
The test suite was my way of validating that a feature was not only implemented but also "valid" by git's standards.
> The bun team has already tested using git's C library and found it to be consistently slower hence resorting to literally executing the git CLI when performing bun install.
I find that to be a much more remarkable claim. Git doesn't have a C library, and even if it did, in which world is literally shelling out faster than a C library call? I suppose libgit2 could be implemented poorly.
If we follow their link[1] we get some clarity. It's a markdown (AI prompt?) file which just states it. Apparently they've "microbenchmarked" creating new git repositories. I really wonder if creating new git repositories is really on their hot path, but whatever.
Where does the claim in that random markdown file come from, then? Well, apparently somebody just made it up 3 years ago while "restructuring docs"[2].
I guess there really is a class of "engineers" that the AI can replace.
libgit2 is not nearly as thoroughly tested as the git CLI is, and it is not actually hard to imagine that shelling out to the git CLI to create new repos is faster than calling into a C library.
Referencing the commit where they removed the ability to link with libgit2 because it was slower.
Having built a service on top of libgit2, I can say that there are plenty of tricky aspects to using the library and I'm not at all surprised that bun found that they had to shell out to the CLI - most people who start building on libgit2 end up doing so.
I don't know what the bun team actually did or have details - but it seems completely plausible to me that they found the CLI faster for creating repositories.
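For anyone who wants to sanity-check the direction of that claim themselves, here's a rough sketch using pygit2 (a Python binding to libgit2) as a stand-in. This is obviously not bun's Zig setup, and absolute numbers will vary wildly by platform and disk, but it compares the two approaches being argued about:

    import subprocess
    import tempfile
    import time

    import pygit2  # Python binding to libgit2

    N = 200  # number of repositories to create per approach

    def bench(make_repo):
        start = time.perf_counter()
        for _ in range(N):
            with tempfile.TemporaryDirectory() as d:
                make_repo(d)
        return time.perf_counter() - start

    # Approach 1: shell out to the git CLI, as bun does today.
    cli = bench(lambda d: subprocess.run(["git", "init", "-q", d], check=True))

    # Approach 2: call into libgit2 via the binding.
    lib = bench(lambda d: pygit2.init_repository(d))

    print(f"git CLI: {cli:.2f}s for {N} inits")
    print(f"libgit2: {lib:.2f}s for {N} inits")

At this granularity the subprocess spawn overhead would normally be expected to dominate, which is exactly what makes the "CLI is faster" claim surprising and worth actually measuring rather than citing.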
> Your comment does not seem to be in good faith, implying that they've made up the performance difference.
I believe I have accurately represented what the article says. Had the article provided the comment you have just linked, I would have commented on that as well. I did not intend to imply that they manufactured the performance difference, merely that they don't know what they are talking about. The thought I have in my head is that they are incompetent, not that they are malicious.
I wholeheartedly agree that libgit2 is full of footguns, that's why it matters that it's not actually "git's own C library" but a separate project. I also agree that you usually end up shelling out to git, exactly because of those problems libgit2 has. If those problems aren't speed though, and I don't think they are, the blog post would have to cover how this reimplementation of libgit2 avoids those problems.
I'm not here to litigate if bun would be faster with libgit2. I am however here to make the argument that the blogpost does not make a convincing argument for why libgit2 isn't good enough.
I'm actually reassured to hear the git CLI is better covered than libgit2, since the CLI test suite is what I used as my "validation" for progress on meeting git's functionality.
As for what happened with Bun and libgit2, my best guess honestly is something to do with the Zig-C interop, but I don't doubt there are optimizations to be made everywhere.
Bun attempted to integrate with libgit2 instead of spawning calls to the git CLI and found it to be consistently 3x slower, IIRC.
The micro-benchmarks are for the internal git operations that bun currently delegates to CLI calls. Overall, network time (i.e., the round trip to GitHub and back) is what evens out the performance when evaluating `bun install`, but there are still places where ziggit has visible wins, like on ARM-based Macs: https://github.com/hdresearch/ziggit/blob/master/BENCHMARKS....
I don't know what that "BENCHMARKS" document is supposed to show. When I try to replicate their results I'm getting wildly faster executions of standard git, and they don't provide enough details for me to theorize why.
I also noticed that their version of the "blame src/main.zig" command doesn't actually work (it shows all lines as not being committed). Sure, it's easy to optimize an algorithm if you just don't do the work. Git does indeed take longer, but at least it actually gives you a blame of the file.
Edge cases certainly apply: scripts depending on specific git CLI args or stdout strings may not work with ziggit.
_However_, for the use cases that most developers or agents are looking for, ziggit should have enough features covered. Happy to fix issues or bugs if that's not the case
> _However_, for the use cases that most developers or agents are looking for
What use cases are those? How did you determine that these are the use cases most developers/agents are looking for?
For me, git has a ton of features that I rarely use. But when I need them, I really need them. Any replacement that doesn't cover these edge cases is fundamentally incomplete and insufficient, even if it works fine 99% of the time.
I have been struggling with this, myself. I used to push everything to GitHub, but a couple months ago I switched over to using my small low-power home server as a Git host. I used to really enjoy the feeling of pushing commits up to GitHub, and that little dopamine rush hasn't really transferred to my home machine yet.
It's a shame. The people who control the money successfully committed enshittification against open source.
As someone who was impacted by GitHub's git outage in late February, which caused us to cancel a feature release, I am more sensitive to the availability of their git service than of their chatbot.
Jazzband maintained some incredible Django packages and tools that made it possible for me to build a system at my $JOB that would have been impossible to do on my own. It is a true tragedy-of-the-commons situation: I was expected to do more with less, and I didn't have the ability to contribute back/donate anywhere near the value that these projects provided to $JOB or myself. I did contribute personally, but it's very clear how all of this value has been extracted and used by large companies to build higher and higher walls for themselves, while none of the people who actually make any of this work get more than crumbs.
By this point, this take is old to the point of being tiresome.
People should get what the deal is with open-source maintainership by this point. They should have gotten it back when Jazzband started; nothing has changed since then. If you don't want big companies using your stuff without paying for it, don't publish OSS. If you have some expectation that Google is going to write you a fat check, put it in the license. Even if it's practically unenforceable, it's loads more than what 99% of OSS projects do right now.
If people go into OSS maintainer positions expecting anything other than what has time and time again happened…it’s like that little comic of the guy poking a stick into his bike wheel spokes and falling over.
The implication that OSS maintainers get nothing for their time is also laughable. If you were doing it for the money you wouldn't be doing it in the first place. If they actually cared about making the world a better place and wanted to volunteer their time toward it, they'd go donate down at the soup kitchen. It's true that not everyone is so financially focused, but that shouldn't be mistaken for altruism; some people just get their rocks off through other means. Often, though, OSS maintainers find that they're more financially focused than they thought they were (the novelty of their code running at Google wears off, the novelty of microcelebrity wears off, etc.) and they get tired of it.