Yeah, I talk to someone on google meet who will seamlessly transition between talking to me and talking to Claude while on the call, and it is extremely annoying.
I dunno, it seems like the fact that we arrived at a fairly standard structure for URL paths that works pretty well is not a bad outcome.
Seems a lot better than the other potential world we could have lived in, where paths were a black box and every web server/framework invented their own structure for them.
In my current project I use URIs to refer to absolutely any entity in a git(-ish) repo. Files, branches, revisions, diffs, anything. URI turns out to be a really good addressing scheme for everything. Surprise. But the most used and abused element is always the path. Query takes a lot of that mess away. Might have been unmanageable otherwise.
Grouping data by user is common and normal in computing: /home laid precedent decades ago.
Project directories are an extremely common grouping within a user’s work sets. Yeah, some of us just dump random files in $HOME, but this is still a sensible tier two path component.
The choice to make ‘view metadata-wrapped content as browser HTML output’ the default rather than ‘view raw file contents’ is legitimate for their usage. One could argue that using custom http headers would be preferable to a path element (though iirc that would keep JavaScript from being able to access them?), or that the blob path element should be moved into the domain component or should prefix rather than suffix the operands; all valid choices, but none implicitly better or worse here.
Object hash is obviously mandatory for git permalinks, and is perhaps the only mandatory component here. (But notably, that’s not the same as a commit hash.) However, such paths could arguably be interpreted as maximally user-hostile.
The file path, interestingly enough, is completely disposable if one refers to the specific object hash of the resulting file, but if the preceding hash is required to be a commit, then commit hash plus path is a valid unique identifier within that commit’s filesystem tree. You could use the file’s object hash instead of the commit hash plus full path, but that’s a pretty user-hostile way to go about this.
So, then, which part of the ordering and path selections do you consider indiscriminate, and why?
actually, instead of the object hash, you could also use the commit-hash. then the filename would be mandatory, but the url would be more readable and usable: give me the file VERBS.md as it is at commit <hash>
Which target audience of github needs extra verbosity in the commit hash, though? Once you know it you know it; if you don’t know git you aren’t the target audience; etc. Saying /user=foo is no better than ?user=foo if your audience can work it out without confusion from your unadorned paths. We have a great deal of history with filesystems showing that people are capable of keeping up with paths that lack key names if exposed to and familiar with them, and if the filesystem isn’t being constantly randomized.
Of course there's nothing to stop you using URIs like this (I think Angular does, or did at one point?) but I don't think the rules for relative matrix URIs were ever figured out and standardised, so browsers don't do anything useful with them.
what would be a better way of doing that? i am not disagreeing, but i just can't think of any way to improve on this. put everything into the query part? i prefer to use the query only for optional arguments. in this example the blob argument is the only thing that doesn't fit in my opinion.
Every object in git (commit, tree, revision of a single file) has a hash that is guaranteed unique within a repository (otherwise many more things than a web UI would break) and likely also globally. I can understand wanting to isolate repositories to prevent hash collisions from causing problems, but within a repo everything has a universally unique ID.
edit: for instance, that specific VERBS.md is represented by the blob 3b9a46854589abb305ea33360f6f6d8634649108.
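For anyone who wants to reproduce that locally, here's a rough sketch using git plumbing via Node (the helper name is mine; this is not how GitHub resolves these):

```typescript
// Sketch: resolve the blob (object) hash of a file as it exists at a given
// commit by shelling out to git plumbing. Illustrative only.
import { execFileSync } from "node:child_process";

function blobHash(commit: string, filePath: string): string {
  // `git rev-parse <commit>:<path>` prints the object hash of that file's
  // blob in the commit's tree.
  return execFileSync("git", ["rev-parse", `${commit}:${filePath}`], {
    encoding: "utf8",
  }).trim();
}

console.log(blobHash("a7e172", "VERBS.md"));
// prints 3b9a46854589abb305ea33360f6f6d8634649108 when run inside the beagle repo
```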
> this should be sufficient to represent the file.
Except it's not, because the oid can be a short hash (https://github.com/gritzko/beagle/blob/a7e172/VERBS.md) and that means you're at risk of colliding with every other top-level entry in the repository, so you're restricting the naming of those top-level entries, for no reason.
So namespacing git object lookups is perfectly sensible, and doing so with the type you're looking for (rather than e.g. `git` to indicate traversal of the git db) probably simplifies routing, and to the extent that it is any use makes the destination clearer for people reading the link.
turns out that "blob", "raw" and "commit" have nothing to do with the hash itself, but are functions to describe how the object in question is to be presented. so what i said above about blob being redundant is false, the problem is rather that it is in a weird place. it should be at the end, like a kind of extension because it signifies the format of the output. except i think putting it at the end makes handling relative paths more difficult as it would have to be appended to every link to other files.
the roxen webserver has an interesting solution for that. they call it prestates and it's placed at the beginning of a url: https://github.com/(commit)/gritzko/beagle/a7e172/VERBS.md . it sets the format value visually apart, and you could have multiple prestate values separated by a comma. i have used that feature extensively on my own sites. i even expanded on the concept in custom modules.
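to make the idea concrete, here's a tiny sketch of how a leading prestate segment could be split off the path (my own illustration, not Roxen's code):

```typescript
// Sketch of Roxen-style "prestates": a leading (a,b,c) path segment holding
// presentation flags, separated from the rest of the path.
function splitPrestates(path: string): { prestates: Set<string>; rest: string } {
  const m = path.match(/^\/\(([^)]*)\)(\/.*)$/);
  if (!m) return { prestates: new Set<string>(), rest: path };
  return { prestates: new Set(m[1].split(",").filter(Boolean)), rest: m[2] };
}

console.log(splitPrestates("/(commit)/gritzko/beagle/a7e172/VERBS.md"));
// { prestates: Set(1) { "commit" }, rest: "/gritzko/beagle/a7e172/VERBS.md" }
```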
They are following the /key/value/key/value pattern, but the first two pairs in a GitHub URL are fixed to user and project, which lets them omit the key names. I could see them not being willing to hardcode the third pair to blob.
Back when GitHub URLs were kind of cool, github.com/user/gritzko/project/beagle would have been much less cool than just github.com/gritzko/beagle.
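Concretely, the scheme they landed on parses positionally, with only the first two segments having implicit keys. A rough sketch (the field names are mine, not GitHub's):

```typescript
// Sketch of the implied /key/value routing with the first two keys
// (user, project) fixed by position. Illustrative only.
interface RepoPath {
  user: string;
  project: string;
  view?: string; // "blob", "raw", "commit", ...: how to present the object
  ref?: string;  // branch name or (short) hash
  file?: string; // path within the tree
}

function parseRepoPath(urlPath: string): RepoPath {
  const [user, project, view, ref, ...rest] = urlPath
    .replace(/^\/+|\/+$/g, "")
    .split("/");
  return { user, project, view, ref, file: rest.length ? rest.join("/") : undefined };
}

console.log(parseRepoPath("/gritzko/beagle/blob/a7e172/VERBS.md"));
// { user: "gritzko", project: "beagle", view: "blob", ref: "a7e172", file: "VERBS.md" }
```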
As far as this website reports, I'm indistinguishable from most other Mac users in Brooklyn, New York. Seems like it's not actually highlighting the frightening aspects of fingerprinting.
Yeah, your browser fingerprint might be a needle in a needlestack. You might not be able to distinguish one needle from another easily, but if you have enough needle samples you can start to identify what the needles are pointing at. Data aggregators collect enough pseudo-indistinguishable needles to be able to disambiguate them and associate them with a known identity or cohort. For example, your mobile browser might be indistinguishable from most other Mac users in Brooklyn, but it might be the only one running on a device from an IP address that regularly logs a meal in MyFitnessPal on that Starbucks wi-fi before making an Apple Pay/Google Wallet purchase, then hits the next 8 stops on the train before connecting to the same cell tower in the same narrow window as you enter your office (telling on myself a bit, though I am in Vancouver, not Brooklyn).
Span this across all of your movements and activities across multiple aggregators and it's a trail of movement through a fog of data that is fuzzy, but enough to identify you, or a small cohort of similar users.
> Maintainers review auto-closed issues daily and reopen worthwhile ones. Issues that do not meet the quality bar below will not be reopened or receive a reply.
Seems like not an unreasonable way to deal with the problem of large numbers of low quality issues being submitted.
But how is it any different from keeping them open?
Like if they are going to sort through all the issues eventually (like they claim), why not just close the ones that are not worthy when they get to them instead of closing all by default?
Is it just so that the project doesn't have open issues on its GitHub page? But they are open issues in reality, because the maintainer will eventually go through them?
Nothing is "unreasonable" in the sense that an open source project should have the right to do what it wants with its rules, but it's definitely a weird stance.
> But how is it any different from keeping them open?
If all open issues are actionable items, that makes expected workload a lot easier to handle.
If most open issues are actually in a "needs triage / needs review" state, you lose the signal in the noise.
The issue tracker for a project exists primarily as a tool for maintainers, not for outsiders. Yes, the maintainers could change their workflow to create a new view that only shows triaged tickets.
Or, they could ensure the default 'open' view serves their needs.
Somehow going through closed issues just to reopen them sounds like more effort than just using the built in label system which is made for this purpose, but maybe that's just me.
If that process actually happens, then there’s absolutely no reason not to have the reviewing maintainer close it after review instead. The only reasonable conclusion is that the documented process is aspirational at best and was itself vibe-coded at worst.
The established culture on a lot of projects is that you open an issue, and then you have to keep pinging it every week otherwise the stale bot closes it with "this issue is stale, closing, but your contribution is very important to us".
I quite like pi and learned about the contribution guidelines a while after using it. Hard to complain about people making software for free using a process that works for them.
I will say having a project with a slim issue tracker that only contains things the maintainers have blessed (and thus presumably are more likely to get worked on) is pretty nice.
If you’re googling for a bug you’re hitting and come across an auto-closed issue, you know you have to submit a higher-quality issue to get it looked at, rather than just +1ing the existing lacking issue.
> Sure, that profit does not cover the model training costs, but that’s a separate issue
It is? If another company comes out with a better model tomorrow and offers it at the same price Anthropic charges for Opus, they’re going to lose customers fast. They have to keep training to keep selling inference.
Most businesses factor in the cost of making their product into the product’s P&L.
also, like super mario kart, SOTA models from the rear will continually be released, because they're sunk costs and open weights will advertise for themselves. Also, it's clear FOMO is a DDoS attack on any perceived leader, because there's no way they don't oversell.
Lastly, they'll realize, like every good capitalist, that there's more profit in exclusivity and cutting out customers.
> if there's one clear example of "Product Model Fit", it's OpenClaw
You think so? OpenClaw certainly owned the hype cycle for a while. There was a thread on HN last week where someone asked who was actually using it, and the comments were overwhelmingly "tried it, it was janky and I didn't have a good use case for it, so I turned it off." With a handful of people who seemed to have committed to it and had compelling use cases. Obviously anecdotal, but that has been the trend I've seen on conversations around it lately.
Also, the fact that it became the most starred repo on GitHub in a matter of a few months raises a few questions for me about what is actually driving that hype cycle. Seems hard to believe that is strictly organic.
The early versions of this design arrived in 2008, though it had a sweet sweet Flash header complete with audio until 2021.
An even more irrelevant side note: it appears that archive.org has a javascript based flash emulator built in to run old flash websites, which is pretty amazing.
Agreed -- except that all of their docs and marketing pitch it for use cases like "per-user, per-tenant or per-entity databases" -- which would be SO great.
But in practice, it's basically impossible to use that way in conjunction with workers, since you have to bind every database you want to use to the worker, and binding a new database requires redeploying the worker.
If you want to dynamically create sqlite databases, then moving to durable objects which are each backed by an sqlite database seems to be the way to go currently.
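For what it's worth, a hedged sketch of that pattern: one SQLite-backed Durable Object per tenant, addressed by name, so no new bindings or redeploys are needed. The binding name, class, and schema below are made up, and the class needs a SQLite-backed migration in your wrangler config:

```typescript
// Sketch: per-tenant SQLite via Durable Objects. Names are illustrative;
// assumes the class is declared with a SQLite-backed DO migration.
import { DurableObject } from "cloudflare:workers";

export class TenantDb extends DurableObject {
  async logEvent(kind: string): Promise<number> {
    const sql = this.ctx.storage.sql;
    sql.exec("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, kind TEXT)");
    sql.exec("INSERT INTO events (kind) VALUES (?)", kind);
    // Each object (tenant) has its own private SQLite database.
    return Number(sql.exec("SELECT COUNT(*) AS n FROM events").one().n);
  }
}

export default {
  async fetch(req: Request, env: { TENANT_DB: DurableObjectNamespace<TenantDb> }) {
    // Any string maps to its own object (and database) on first use; no redeploy.
    const tenant = new URL(req.url).searchParams.get("tenant") ?? "default";
    const stub = env.TENANT_DB.get(env.TENANT_DB.idFromName(tenant));
    return new Response(String(await stub.logEvent("page_view")));
  },
};
```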
And now you've put everything on the equivalent of a single NodeJS process running on a tiny VM. Next step: spread out over multiple durable objects, but that means implementing your own sharding logic. Complexity escalates very fast once you leave toy-project territory.
Also, the fact that they are applying this to the GitHub Action they built, promoted, and directly integrated into Claude Code is pretty frustrating.