If you don't care about ink quality, then aftermarket ink is fine.
However, if you want your pictures to last 10+ years under the sun, or to still be able to read what you printed after some time, genuine ink is the way to go.
People think ink is simple. It is not.
For anybody thinking otherwise, some points to ponder:
- Why do Xerox and HP run their own toner/ink labs to formulate their own ink down to the molecular level?
- Look at your standard disposable pens. Gel, liquid, dye, pigment, alcohol/water/oil based, UV resistant or not... It's a hard chemical problem.
- Similarly, even something as mundane as fountain pen ink has hundreds of different formulations. Not colors, formulations. Washable to cellulose-reactive and everything in between...
It's not dyed drinking water.
Lastly, I'm not against people using 3rd party ink at any level. I just want to point out that not every ink cartridge is created equal.
Then why don't they allow it, perhaps with warnings?
They don't block aftermarket ink because of quality concerns, though they might claim so; they block it because they want to make more money from you themselves through ink sales. The common response here is “but they make a loss on selling the hardware!”, to which my response is “their bad pricing decision is not my problem”.
My Roomba's guide had a warning that third-party accessories may not work properly with the device. I was like, pshaw, you're just nickel-and-diming me!
But indeed, the third-party brush caused the robot to throw all kinds of errors. Some third-party parts did work, just not the brushes. I guess there's some strict size tolerance and the third-party ones were a bit too big or too small.
I agree that "making a loss on the hardware and using ink to offset that" is a very bad business decision. I have a 10+ year old HP Deskjet 4515 Ink Advantage which had a high initial price but cheap refills (the black ink is pigment, the color cartridge is dye, but even the dye is UV resistant if printed on good photo paper), and that thing never created any problems for me, hardware- or software-wise.
I can still use any print I got from it even after a decade. Ink's that stable on these.
From my perspective, 3rd party ink or toner is a support nightmare, especially if it's bottom of the barrel. Again, from my perspective you should be able to take responsibility and use these if you really want, but any ink- or toner-related damage might then be out of warranty (HP's genuine cartridges come with their own guarantees).
So, I can speculate that makers both offset the price and don't want to handle support tickets related to 3rd party ink damage for lower end devices, and buyers of higher end models are either using 1st party ink, or fine with paying the repair costs if their 3rd party installations go haywire.
Also, it's possible that kits for higher-end inkjet systems (large format/plotter systems) tend to be higher quality, since these models cater to professional shops which need high-quality supplies.
Lastly, I talked with someone who said that they buy the cheapest paper and cheapest ink because the printouts are disposable for them, and I find that point entirely fair, too.
My main point was underlining the fact that ink is not something simple in formulation. I don't defend banning 3rd party ink, but just pointing out some facts. I believe everybody can carry out their own fafo procedure.
That does not mean I cannot use the ink I want in a tool that I own.
Yes, your ink might be better. Market it that way and make it known. No problem with that. But preventing me from using my tool via DRM and firmware updates? That is customer hostile.
Printing a family picture on 4"x6" photo paper, framing it, and putting it in a living room exposes it to copious amounts of UV light over a decade.
It's one of the exact reasons inkjet printers and blank, inkjet-compatible photo paper exist. HP was bundling them with their printers when I last opened mine.
> We have documented incidents of service outages caused precisely by spikes in unauthorized traffic - overwhelming the servers, causing service disruptions affecting everyone. The cost was instability felt by all users.
So it's a problem that their printers are popular, and they can't be bothered to scale their infra, so let's gate everything on the USER AGENT STRING! This is such a crazy excuse that I don't believe it.
"We forced every user of every printer, worldwide, to interact with their printer through our centralized servers. This caused service disruptions affecting everyone. The cost was instability felt by all users."
There, I fixed it for you Bambu. You may use it under Creative Commons.
Seems like making the slicer only able to talk to the printer via the cloud was a bad way to do things, where any issue results in “instability felt by all users.”
This is false. After the authorization-related firmware changes last year, LAN mode doesn't allow 3rd party slicers to connect.
LAN mode is also abandonware with numerous issues and missing features that they've had no interest in fixing. Orca slicer has had to rely on hacky workarounds in Bambu's buggy networking plugin just to be able to connect to printers in a different subnet.
https://github.com/bambulab/BambuStudio/issues/4512
> I can connect to my P2S in LAN mode with OrcaSlicer just fine (currently using the latest 2.4.0 nightly).
You either haven't updated the firmware or you also enabled "Developer mode" which has its own issues.
> This is a separate issue, I think even Bambu Studio can't connect to printers in LAN mode on a different subnet.
It's not a separate issue, it's a long-standing bug in their proprietary networking plugin that they refuse to fix. Orca slicer has implemented a hacky workaround so it actually works there.
> This is a separate issue, I think even Bambu Studio can't connect to printers in LAN mode on a different subnet.
Yes, that's the point. The networking is broken. The issue isn't unique to a specific slicer; their software sucks. Orca ran into the issue because they wanted to make a basic feature that works on every other printer on the market work on a Bambu.
A conspiracy-theory steelmanning interpretation of that statement is that Bambu thinks that some unscrupulous Chinese manufacturer is performing DDoS attacks against them, but can't fully and publicly admit to that for legal reasons.
Or a VM per container, if you insist on containers. I've had a couple of relaxed weeks recently due to running everything on VMs rather than some random Kubernetes service.
ASML is one of the bigger bottlenecks I hear. They're fully booked out years in advance so even if Intel wants to build many more fabs, they can't.
There was a recent interview with Dylan Patel and he explained it pretty well.
Basically, there are tiers of risk and how "AGI pilled" each tier is. The bottlenecks and supply constraints get worse and worse as you go down the tiers.
Tier 1: OpenAI/Anthropic - extremely AGI pilled and think it's a sure thing. They want all layers underneath to prepare to make as many chips as possible and go all in.
Tier 2: Nvidia/AMD/Broadcom - very bullish, but don't think AGI is a sure thing
Tier 3: TSMC, Samsung, SK Hynix, Intel, Sandisk, Micron - bullish, but if they're wrong and overbuild, they can actually go bankrupt. Each fab can cost tens of billions; an N2 fab is estimated at around $30B.
Tier 4: Every supplier to T3, such as ASML, Applied Materials, and other fab-equipment makers and suppliers - less bullish; may even see this as just a super cycle rather than a permanent increase in demand, so they're less inclined to take too many risks to scale up
"Next decade" seems possibly false - if Intel starts getting deals and commitments now, it takes them about half a decade to build a fab. I agree it seems unlikely, though.
Intel doesn't even have enough capacity right now to make enough Xeon chips. CPU demand is absolutely booming, but their Intel 18A and Intel 3 nodes don't have great yields.
Ok, real question. What products are people actually building with agent frameworks? I get the utility of AI coding tools and generic chat apps, but that is the extent of utility that I've been able to get from AI. I'm looking for examples that are real businesses, not toys.
I use a custom framework for creating basic but useful tools that work with sensitive data. There are cases in my organization where I like the idea of people using Claude or similar to assist with a process, but Claude Desktop or Claude Code doesn't offer the safety or security we need (in part because the people using it are unconstrained, in part because the harnesses aren't perfect and the LLMs can make bad choices).
This provides a harness that's a state machine with very explicit directives, and it uses Deno as the runtime to constrain network, filesystem, environment, and other types of access at runtime as needed.
Kind of like using skills in Claude Code to teach it how to do something, but with extremely tight guard rails. Like, you can only write a specific file when in a specific state, otherwise that tool isn't even callable.
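A stripped-down sketch of the idea, in case it helps (this is illustrative, not the actual framework; the state names and the tool are made up):

```ts
// Minimal sketch of a state-gated tool harness (illustrative names,
// not the real framework). Run under Deno with explicit permissions, e.g.:
//   deno run --allow-write=./out harness.ts
// Deno's --allow-* flags sandbox the process; the state machine gates
// which tools the model can even see at each step.

type State = "ingest" | "transform" | "write" | "done";

interface Tool {
  name: string;
  allowedStates: State[];
  run: (args: Record<string, unknown>) => Promise<string>;
}

const tools: Tool[] = [
  {
    name: "write_report",
    allowedStates: ["write"], // only callable in the "write" state
    run: async (args) => {
      // Throws at runtime unless --allow-write covers this exact path.
      await Deno.writeTextFile("./out/report.txt", String(args.text ?? ""));
      return "wrote ./out/report.txt";
    },
  },
];

let state: State = "ingest";

// The harness, not the model, decides which tools exist right now;
// only this filtered list is ever advertised to the LLM.
const callableTools = () => tools.filter((t) => t.allowedStates.includes(state));

async function callTool(name: string, args: Record<string, unknown>) {
  const tool = callableTools().find((t) => t.name === name);
  if (!tool) throw new Error(`"${name}" is not callable in state "${state}"`);
  return await tool.run(args);
}
```

The point is that a disallowed tool isn't rejected, it's simply never offered, so a bad model choice can't reach it.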
It requires understanding the problem that's being solved quite well. This often leads to realizing it can be automated without a harness. Finding cases where an LLM is genuinely crucial to enabling the automation is difficult.
A good example of one recently was getting a local LLM to define schemas for an internal tool based on existing research data. It looks at the data, figures out the semantics of the data, relationships, and how that maps to the target schema. This is impossible to automate without this semantic inference. It then uses duckdb to perform transformations from raw data to the appropriate schema, and finally, tests the schema in the validator with the data. It makes a very complex, often unappealing and confusing process very easy. Once it's done, the data is in better shape than we ever got it to by hand. This is partially because of a validator I created, but also because the LLM can identify patterns really well and retain a massive spec while it works.
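For a rough idea of the duckdb step, here's a hedged sketch; the column names and SQL are made up for illustration (the real mapping is emitted by the model after inspecting the data), and it assumes the duckdb CLI is installed:

```ts
// Sketch of the "model writes the transform, duckdb executes it" step.
// The SQL below stands in for what the local LLM would actually emit;
// column names are hypothetical. Assumes the duckdb CLI is on PATH.
const llmGeneratedSql = `
  COPY (
    SELECT site_code                AS station_id,  -- semantic mapping inferred by the model
           CAST(sample_dt AS DATE)  AS sampled_on,
           reading_ppm * 1000       AS reading_ppb  -- unit normalization to the target schema
    FROM read_csv_auto('raw.csv')
  ) TO 'clean.parquet' (FORMAT PARQUET);
`;

const { code, stderr } = await new Deno.Command("duckdb", {
  args: [":memory:", "-c", llmGeneratedSql],
}).output();

if (code !== 0) {
  // On failure, the stderr text goes back to the model so it can revise the SQL.
  throw new Error(new TextDecoder().decode(stderr));
}
// Final step (not shown): run clean.parquet through the validator before accepting it.
```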
You could do it with all kinds of existing harnesses but this one lets us comfortably define processes we trust and lets us operate on data our partners would never allow into the cloud or on OpenAI/Anthropic's servers in particular.
> I'm looking for examples that are real businesses, not toys.
These tools are used within a real business (specifically a coastal science NGO) and they aren't toys, so hopefully that's useful information. Based on my experience so far, and it could be my lack of imagination, I have no idea how you'd use these as the foundation for a business. I find more cases that can be automated without an LLM than I do with one, and they tend to be so niche and strange that no one else would ever need them and they can't be generalized.
We're building https://brooked.io/. In the same way that Cursor provides a lot of features on top of the base agents, we want to do the same for spreadsheets. There are many workflows that benefit from having an agent available - resolving cell values from a prompt, writing functions, sheet insights, alerting, debugging.
John Deere has lost so much good will among farmers due to their lock-in efforts, it's wild. Unfortunately, many farmers are stuck with them because the only tractor dealership within a reasonable distance is John Deere.
I think there is a definite possibility that they aren't compute constrained, but rather trying to improve a sorry cash flow situation before IPO.
Of course, I don't have real insight into available compute, but the vibe slope seems to have dropped a bit, at the same time as new GPUs are being shoved into datacenters as fast as possible.
Their enterprise API customers are literally competing to see who can throw the most money at Anthropic. Anthropic has very little reason to focus on a $20/month user, and with their current momentum (especially since enterprise deals are long-lived) they could remove Claude Code from the Pro plan without any revenue hit. In fact, it may be a huge revenue boost given the strength of the Anthropic brand.
Never buying a cartridge based inkjet printer again.