Hi, embedded firmware engineer here. I give it a B-
There's a weird area between the workloads that fit on a microcontroller, and the stuff that demands a full-blown CPU. Think softcore processors on FPGAs, super tiny MIPS and RISC-V cores on an ASIC, etc. Typically you run something like Yocto on a core like that. Maybe MontaVista or QNX if you've got the right nerd running the show.
So you have serious compute needs, and security concerns that justify virtual memory. But you don't have infinite space to work with, so hardware acceleration is important. Having a standard API built into the kernel seems like a decent idea I guess.
And yet, I've never heard of AF_ALG. I've never seen it used. The thing is, if you have some bizarro softcore, there's a good chance you also have a bizarro crypto engine with no upstream kernel driver. If you're going to the trouble of rolling your own kernel with drivers for special crypto engines, why would you bother hooking it into this thing? Roll your own API that fits your needs and doesn't have a gigantic attack surface.
Maybe I'm naïve, but it seems like the people who keep their eye on the ball and really try to make a great product are the ones who win out in the long run.
If you optimize for performance reviews, you'll make a lot of money, yeah. But you'll eventually find yourself overemployed and incapable of keeping up with that gambit anymore. Or, you'll find yourself doing something you never wanted to do. In extreme cases, it's like those people at Palantir in that post last week, realizing they're the bad guys. Usually it's just looking at your calendar on Monday evening, seeing a wall of meetings from 4PM to 9PM, and telling your kid you can't go to the park today.
Meanwhile, the "product people" I know well are all doing really cool stuff during the day, then going home to enjoy their lives. They don't make as much money, but they're happy.
Quote that one Wu-Tang song today, and you'll be quoting that one Talking Heads song in a couple years. I guess.
I think this can be true at the IC level and in situations where the organization's success depends on the product being good, but that's not always the case. Big companies with market control can make bad product decisions for years, perhaps even indefinitely, and still print money. Product development comes to revolve less around merit and more around appearances.
I've worked in big tech and had the sort of conversations with my managers where they say: "The work you're doing in X is great. I use it and it really needs work. But it's not a priority, or even 'impactful'. Your work on X is effectively equivalent to doing no work".
Sometimes it isn't even about getting a promotion; sometimes the implication is you should be worried about keeping your job. You can still do X, which everyone knows is great and someone should do, but "in your spare time, as an extra," because Y is what your performance review will really revolve around.
The sad part is I can tell they mean it, and they do agree someone needs to work on X, but it isn't their decision to make: they have to save face and explain to their manager why an engineer earning XXX,XXX didn't meaningfully work on Y. Ultimately, someone up the chain whom you've never talked to decided X is unimportant; they don't want to kill it, they just don't use it, or have a strategic reason not to care about it.
In the politics of upper management, perhaps X was something a rival championed, and now you have to prove the org can do without it. Or perhaps it's the ace in your pocket, and you want it to be lackluster so that when the big boss above you starts talking about retirement, you can show amazing wins in the area and be first in line for succession. For better or for worse, big companies are not democracies; they are fiefdoms. So if the kingdom isn't in danger, its future comes to depend less on what the best decision is than on how a decision fits the game of thrones.
> Good thing GitHub has plenty of built-up goodwill to spend down
Do they, though? I don't know a single person who uses GitHub who actually likes it. It's far more often something like "it's fine, but I miss (GitLab|Gerrit)" or "I stopped using it for personal stuff and moved to (Codeberg|GitLab)."
The brand recognition among non-technical folks is really the strongest selling point in my eyes. And that's irrelevant to ~95% of software development.
GitLab is getting ensh*ttified as well. Rarely a day passes when they’re not trying to somehow push their AI features on me, even though I never asked for them. Thinking about moving to managed Forgejo.
Same reason I dislike SMS based 2FA, or worse SMS/email based 1FA codes.
You don't truly own your cell number or domain. Meanwhile, my passkeys live on hardware I certainly own, and likewise my TOTP codes are stored and calculated locally.
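"Calculated locally" really is the whole story: TOTP is a few lines of stdlib code with no network dependency. A sketch of RFC 6238 (the test key below is the RFC's own published test secret, not anything real):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, t: int = None, digits: int = 6, step: int = 30) -> str:
    # RFC 6238: HMAC-SHA1 over the big-endian count of 30-second
    # time steps since the Unix epoch.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = (int(time.time()) if t is None else t) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): the low nibble of the last byte
    # picks a 4-byte window, masked to 31 bits.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)
```

Everything the server will accept is derivable from a shared secret and a clock, which is exactly why losing your phone number or domain can't lock you out of it.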
It's unreal what the Quest headsets can do. Go look up "questnav." Robots on holonomic drivetrains moving at 20 ft/s while strafing and spinning, maintaining perfect pose tracking using nothing but a Quest 3S strapped into a 3D printed bracket. And with basically zero latency. Oh, programmed by high schoolers btw. It's astonishing.
> Wait, so.. how are we supposed to test Intel builds of our macOS apps from now on?
You don't. You could stay on an old macOS. Apple would prefer that you tell your customers to stop being poor and buy a new computer. They will make your situation increasingly unbearable until you do.
The overwhelming majority of people haven't needed a new computer since 2016. The current economic situation makes a new computer a worse value proposition than it's been in 35 years. Vendors are responding to this situation by manufacturing obsolescence. Microsoft pulled the same stunt with Windows 11's TPM 2.0 requirement.
I think it's a stretch to call Apple's ARM transition "planned obsolescence". The M-series chips are very clear improvements on what came before and there is a clear rationale for that transition.
We're talking here about an OS that hasn't even come out yet, that will get years of security support, for computers that Apple hasn't been selling for several years now. Seems pretty reasonable.
I said "manufactured," not "planned." I don't think Apple intended to do this at the outset. Tim Cook wasn't leaned back in an office chair, twirling a moustache saying "yes, let's make every mac made before 2019 SUCK!"
If it was planned, Rosetta 2 would have never existed in the first place. It would have been a qemu fork haphazardly crammed into Xcode.
There was no "planning" here. Here's how I imagine it went: a developer whined about tech debt, management seized an opportunity to generate revenue, neither party considered, yknow, humans, and now we're here.
The average user cares about their fans going full-blast when running some garbage electron app and their battery life being shit. You're just being dense.
Back when these machines were released they were hailed as the ultimate in mobile computing. What happened here that did not happen to other Intel-based machines from that era?
For developers, the difference is like night and day.
My 2019 MacBook Pro used to sound like a jet plane taking off whenever I did any sort of build. On a bad day, I could've baked some cookies on it. Admittedly, the corporate spyware that was constantly scanning every single file didn't help matters.
Eerily similar story here. My wife was using her 2017 MBP (the one they got sued over) and she adored it until Tahoe suddenly caused Chrome to run like hot garbage. I bought her an open-box M3 Air. She likes the color. It doesn't provide any more value to her life than her 2017 MBP did, and yet we're out $1000 because Apple said so.
So on the one hand you're well aware of the obsolescence issue, and on the other hand you decided that upgrading a 2017 MBP to Tahoe was a good idea? I am on an M4 Pro Mac mini and it is still running Sequoia.
> Apple would prefer that you tell your customers to stop being poor and buy a new computer.
This is certainly an interesting way to characterize dropping support for old hardware. What is a reasonable way to go about hardware deprecation in your view?
When a company obsoletes a hardware platform, as Apple will do with this update, it is their moral duty to open the platform. Release the necessary source code, blobs, and docs to allow independent actors to maintain their equipment, or to maintain that equipment on behalf of its owner.
Apple won't. We all know that. The Broadcom wi-fi chips and the subtle differences in their embedded architecture vs. conventional PCs means x86 Macs will never be adequately supported without Apple's stewardship.
I think one thing that rubs people the wrong way is that Apple has basically infinite money at this point. They're not dropping support for old hardware because they don't have the resources to maintain the support. They're just doing it because they want to, and that's kinda lame.
Especially when I can keep getting both feature and security updates for Windows on hardware that's the same age (or older) as the EOL Apple hardware.
This isn't even just an Apple attitude. The whole macOS and iOS software ecosystem has this "nothing before the prior two OS releases exists anymore" attitude, and it is absolutely infuriating. It is absolutely possible and not a huge lift to support prior operating systems, but Mac developers just don't tend to care or do it.
The reasonable way to go about hardware deprecation is to not do it until that hardware is Truly Gone™, by some actual definition of Gone that isn't an arbitrary number of years or versions.
That's overly dramatic. I don't think a new Macbook Air today is a worse value proposition than some Mac from 35 years ago. I just checked Apple prices from 1991:
- Mac Classic II, the slowest of the bunch, was $1,900, or about $4,661 today
- Quadra 900, the fastest model in 1991, was $7,200 ($17,663 today)
- PowerBook 170 was $4,600 ($11,285 today)
"Value" and "price" aren't the same thing. A new computer in 1991 cost more, but it also covered a vastly increased set of use cases versus a machine from 5 years prior (assuming the hypothetical 1991 computer buyer had even owned a computer before). Today, you can buy a used MBP with an M1 and it will do everything a new MBP can do, and the differences compared to a new machine will be imperceptible to most users.
Plenty of people would even be perfectly happy on an x86 Mac, too. Sure, there would be a perceptible difference compared to a new machine, but not enough to justify the price. That's what obsoleting Rosetta is about: artificially making x86 Macs so unbearable that would-be happy users have no choice but to buy something else.
> FUZIX is a fusion of various elements from the assorted UZI forks and branches beaten together into some kind of semi-coherent platform and then extended from V7 to somewhere in the SYS3 to SYS5.x world with bits of POSIX thrown in for good measure. Various learnings and tricks from ELKS and from OMU also got blended in
This README reads like a blog post.
Is this intended for some kind of professional purpose? Because I could see this being amusing for hobby purposes but I have no idea what I'd do with it at work.
Sounds like just another Monday for a firmware dev, honestly. Can't repro your bug because your board is subtly different than mine, but I think I see what's wrong?