Adding to this: while certs are indeed well supported by OpenSSH, OpenSSH isn't always the SSH daemon in use on alternative or embedded platforms.
For example, OpenWrt uses Dropbear [1] instead, which does not support certs. And Java programs that implement SSH, like Jenkins, may do so via Apache MINA [2]; the underlying library does support certs, but that support is buggy [3], and the application still has to build its own UX on top of it.
You can just replace Dropbear with OpenSSH on OpenWrt. That was one of the first things I did, since Dropbear also doesn't support hardware-backed (sk) keys. Just move Dropbear to port 2222 and disable its service.
I re-enabled Dropbear on that alternate port when I did the recent major update, just in case, but it wasn't necessary: after the upgrade, OpenSSH was alive and ready.
I downvoted this comment for sounding like a summarizing LLM, not adding anything substantial beyond the title of the post, before realizing you were the poster and author.
> it's basically a cost optimization masquerading as a feature
Cost optimization in the user's favor.
Remember that every time you send a new message to the LLM, you are actually re-sending the entire conversation with that new message appended.
Remember that LLMs are fixed functions; the only variable is the context input (and the sampling temperature, sure).
Naively, this would lead to quadratic consumption of your token quota, which gets ridiculously expensive as conversations stretch into today's 100k-1M-token context windows.
To solve this, AI providers cache the processed context (the attention KV cache) on the GPU and only charge you for the delta in the conversation. But they're not going to keep that GPU cache warm for you forever, so it times out after some inactivity.
So microcompaction-on-idle exists to soften the token-consumption blow for the case where you've stepped away for lunch, your context cache has been flushed by the provider, and you would otherwise have to spend tokens re-processing the conversation from scratch.
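To make the difference concrete, here's a toy cost model in Python. The per-message token count, turn count, and free-when-cached billing are my simplifying assumptions; real providers bill cached tokens at a discount, not zero:

    # Toy billing model: each turn appends a fixed `delta` of tokens.
    delta = 500    # hypothetical tokens added per message
    turns = 100

    # No prefix cache: every request re-sends (and re-bills) the whole
    # history, so the total grows quadratically with the turn count.
    naive = sum(delta * t for t in range(1, turns + 1))

    # Warm prefix cache: you pay (roughly) only for each turn's delta.
    cached = delta * turns

    print(f"naive: {naive:,} tokens, cached: {cached:,} tokens")
    # naive: 2,525,000 tokens, cached: 50,000 tokens

That ~50x gap only holds while the cache stays warm, which is exactly why the idle timeout stings.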
Twisted pair is good, but it only gets your losses so low at these speeds. Keep in mind that USB cables have a very small budget for signal loss, and at 40 Gbps they're carrying frequencies roughly 25x higher than 10-gigabit Ethernet.
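The 25x figure falls out of a back-of-the-envelope Nyquist comparison; the specific line codings below (800 MBaud PAM-16 for 10GBASE-T, 20 Gbps NRZ per USB4 lane) are my assumption about what's being counted, not something stated above:

    # Rough Nyquist-frequency comparison of the two links.
    ethernet_hz = 800e6 / 2   # 10GBASE-T: 800 MBaud PAM-16 -> ~400 MHz
    usb4_hz = 20e9 / 2        # USB4 40G: 2 lanes of 20 Gbps NRZ -> ~10 GHz
    print(usb4_hz / ethernet_hz)  # -> 25.0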
But apps shouldn't be able to hammer WindowServer in the first place. If your app is misbehaving, your app should hang, not the OS window compositor!
FWIU there's really no backpressure mechanism for apps delegating compositing (via CoreAnimation / CALayers) to WindowServer, which is the real problem IMO.
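To illustrate what backpressure would even mean here, a generic sketch with plain Python queues (emphatically not any actual macOS/CoreAnimation API): a bounded submission queue makes the chatty producer stall instead of burying the consumer.

    import queue
    import threading
    import time

    # Generic backpressure sketch, not macOS code: a bounded queue caps
    # how far a misbehaving client can run ahead of a slow compositor.
    frames = queue.Queue(maxsize=3)

    def misbehaving_app():
        while True:
            # put() blocks once the queue is full: the app hangs,
            # the compositor keeps its own pace.
            frames.put("layer update")

    def compositor():
        while True:
            frames.get()
            time.sleep(0.016)  # ~60 Hz composite pass

    threading.Thread(target=misbehaving_app, daemon=True).start()
    threading.Thread(target=compositor, daemon=True).start()
    time.sleep(0.2)
    print("pending updates capped at:", frames.qsize())  # <= 3

With an unbounded queue.Queue() instead, the producer never blocks and the backlog grows without limit, which is roughly the failure mode being described.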
https://calmatters.org/justice/2026/04/immigration-mask-ban-...
> An 1890 Supreme Court case provides that a state cannot prosecute federal law enforcement officers acting in the course of their duties.
> The law also ran headlong into the Supremacy Clause of the Constitution, which holds that states may not regulate the operations of the federal government.