
There's a very real possibility that AI proponents completely lose the next generation of adults. The output is not enjoyable to consume, the people who rely on it are not cool, and the effects of using it are unpleasant and hard to defend on aesthetic, intellectual, or moral grounds.

There are real use cases for this technology! But the idea that the generation of superficially plausible text is "the next Industrial Revolution" comes out of the same mindset that has turned a neat technology into a banal hellscape for consumers and employees. We desperately need some leadership in companies or institutions that can place this technology in its proper context, and leverage it without getting manic about it.


My great hope for AI is that it kills social media by making 99% of content and comments untrustworthy and not worth consuming.

Social media isn’t always about consuming content. It’s also about getting jolts of momentary joy and reward. You get those in two ways: seeing cool things, and participating in cool things. Especially cool things before they go viral. Clicking like on a post that isn’t viral yet, and gambling to yourself whether it will go viral, has the same dopamine flux when it pays off as winning at the slots. Even my reward-defective brain manages to eke out a moment of reward from that. So if you simply remove the content, what’s left is the gambling market. Gambling on something you upvote going viral isn’t about how much content there is in what you placed your bets on, it’s about being able to have that special knowing look when someone tells you about it because you’ve just won the socio-memetic lottery. And AI isn’t doing anything whatsoever to stop that reward loop.

I proposed a while back that we should have the HN admins strip all integer counts for a week server-side, to see if the site quality improved or worsened during that time. The mods suggested I ask HN, so I did. HN loathed the idea, for every possible reason except this one: removing all those integers would be like quitting gambling cold turkey after years of pulling the vote lever every day. I’m not much less vulnerable to this than everyone else, but I still want to see it happen someday. I remain reasonably confident that our social media site’s quality would skyrocket after a couple of days of our posts and comments being disinfected of make-integer-go-up jackpots.


I find this idea of getting in early interesting, because it is completely novel to me. Is it common for people to derive so much pleasure from voting for something before it gathers momentum? You really lean into this idea, likening it to winning a jackpot, so I assume it is at least somewhat widespread.

The ability to make accurate predictions has been rewarded for so long that we now reward it even in the abstract.

By stopping integer counts, do you mean not collecting upvotes and downvotes at all, or just not displaying them?

If it is the latter, I think it could be an interesting experiment, although I doubt it matters that much, because you can still gauge the "engagement" based on how your posts rank. But there is absolutely no way HN could work if posts and submissions stopped being sorted by their votes. Community moderation via voting is what allows HN to remain functional despite having only two moderators. If votes stopped mattering for a week, HN would likely be flooded with spam by day two and the experiment would be halted by day three.


Just not transmitting the numbers to clients. I’m aware that a dedicated actor could try to infer their effects through study, code, etc., but I’m interested in the effect on the majority of people rather than those adamantly intent on counting their numbers.
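
For what it's worth, the change I'm imagining is tiny. A minimal sketch of the idea in Python, with hypothetical field names (this is not HN's actual code):

    HIDDEN_COUNTS = {"score", "karma", "commentCount"}

    def for_client(item: dict) -> dict:
        # Serve everything except the integers. Ranking still happens
        # server-side, so sorting by votes keeps working untouched.
        return {k: v for k, v in item.items() if k not in HIDDEN_COUNTS}

    comment = {"id": 123, "by": "alice", "text": "hello", "score": 42}
    print(for_client(comment))  # {'id': 123, 'by': 'alice', 'text': 'hello'}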

Voting combined with voting on the votes/voters worked quite well in the olde days of yore when Slashdot used such a system. You did not 'vote up' or 'vote down'; instead you voted things like 'insightful' or 'overrated'. Some of those categories caused the vote count to go up, others caused it to go down. Users decided for themselves whether they wanted to see all posts or only those above a given threshold. Then there was the meta-moderation system, where a rotating cadre of users could flag abusive votes. If a user got too many such flags he lost his voting rights. This latter system would be good to have here as well, given that I've seen a lot of abuse of the down-vote button, where all recent posts by a user who has voiced an opinion outside of some desired narrative get voted down no matter their subject. Such abuse would be caught if there were a meta-moderation system in place. It would help reduce the group-think seen on sites like HN and Reddit.

A similar voting system is - or was - in place on lobste.rs, where a reason must be given for voting down. It does - or did - not have meta-moderation, though, which takes away any means of getting rid of vote-abusers.
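
To make the mechanics concrete, here is a minimal sketch of such a category-based scheme in Python. The category names and the -1..5 score clamp follow old Slashdot; the function names and structure are just illustrative:

    # Each vote is a category; categories map to a score delta.
    CATEGORY_DELTA = {
        "insightful": +1, "informative": +1, "funny": +1,
        "overrated": -1, "redundant": -1, "troll": -1,
    }

    def score(base, votes):
        # Slashdot clamped comment scores to the range -1..5.
        s = base + sum(CATEGORY_DELTA[v] for v in votes)
        return max(-1, min(5, s))

    def visible(comments, threshold):
        # Each reader picks their own threshold instead of the site
        # hiding posts globally.
        return [c for c in comments if score(c["base"], c["votes"]) >= threshold]

    posts = [
        {"base": 1, "votes": ["insightful", "insightful", "funny"]},
        {"base": 1, "votes": ["troll", "redundant"]},
    ]
    print(len(visible(posts, threshold=2)))  # 1

Meta-moderation then sits on top of this: sample recent votes, show them to trusted users, and revoke voting rights from accounts whose votes keep getting flagged as unfair.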


Is moderation truly accomplished via votes, though, as opposed to flagging?

You say "spam", but unpopular comments are rarely promotional. They lack any one unifying quality. They might be naive, and/or they might be difficult.


Flagging doesn’t report any numbers to clients, so any potential changes to it are out of scope re: my client-accessible integers concerns. There was a big thread about the flagging system in the AI rule-change post a few weeks ago that may be of more interest to you along those lines, though!

https://news.ycombinator.com/item?id=47341705

I especially appreciated dang’s reply, quoted here out of context as a teaser:

> We're going to add that.


The vote counting thing can be interesting.

There's the classic "I wish facebook had a dislike button" or the equivalent for twitter.

But in the thread-based forum context, removing the downvote has interesting effects. For one, it stops brigades from down-voting to lower a comment's visibility. It also stops the "I don't like that guy" engagement and encourages a more positive "I appreciated this comment" mode.

It's not one-size-fits-all, but I've seen positive effects on more marginalized forums.


Oh, I have no objection to the voting buttons — just to us users being able to see the underlying numerical outcomes of us using them.

The content being untrustworthy doesn't matter when it comes to social media, as most of what is enticing about social media nowadays isn't the content of the content. It's the fact that there is a never-ending stream of content specifically catered to maximize your dopamine to keep you scrolling.

So much of social media nowadays is just low quality clips of TV shows/movies with an AI-generated song over them. Or the same Minecraft parkour map as an AI voice recites an r/AmITheAsshole post. Or AI-generated funny videos. The quality of the content doesn't matter at all.

Anyone I've talked to about how it's all just AI responds with something akin to "I don't care if it's AI, it's funny! Let people enjoy things!"


People love hot dogs. People don't want to know how they are made.

I have this hope too but social media is junk food now, and junk food is a very lucrative product. People don't seem to care as long as it's engaging.

Twitter arguably did that a while ago.

It’s one of the things I like about it.

You can only give it a heart, otherwise move on. There’s no Karma or other points.

Except for view counts etc., which are useful for content creators but don't do much for the average user.

However, you do see the number of likes on your comment and get notification pings, so there’s that.


I doubt that. Meta got the right idea: AI influencers tailored to your taste.

So, now people are in groups and chats full of bots posting exactly what they want to hear.

Instead of Meta, it's states, companies, or individuals hoping to make money from their followers.


If that happens, AI will have been worth the hassle.

That creates a market for lemons. This is not a good thing. People who create good, valuable things cannot distinguish themselves in such a market, so they exit it. The good creators hurt the most.

Like it or not, there is a lot of value in public discourse, and we lose all that value if we drown it in noise.


That describes social media for the last 10 years, at least. Not dead yet.

You'll be happy to learn that this has already happened.

When the leading CEOs are saying the next generation will be unemployed due to AI... uh yeah, you're gonna lose them!

Isn't it bad now that Sam Altman and the others are backpedaling on this and going "jobs are going to still exist you just can't imagine them!" because the PR problem was getting so big? [1]

Like, don't we want people running these companies to be honest with the public rather than engaging in misdirection?

[1]. https://www.platformer.news/sam-altman-ai-backlash/


> "jobs are going to still exist you just can't imagine them!"

Ironically, this makes even less sense.

If (ostensibly) the goal of developing LLMs was so we can all create more while working less, but he also assures us there will be just as much work in the future, then what was the point of this tech in the first place?


I am by no means defending Sam Altman here, but it's roughly the same value proposition as every productivity enhancing technology. Creating more even if you don't end up working less means at the end of the day we all still have more. There are certainly potential problems when it comes to how that "more" is distributed, among other issues, but things that increase human productivity tend to go along with increases in quality of life even if it doesn't mean you get a bunch more free time to sit on the beach drinking Mai Tais.

And truthfully those productivity enhancements mean that you probably could indeed work less, as long as you're willing to also forgo the standard of living improvements that go along with them. The idea of the digital nomad living in some incredibly cheap but less than advanced country is based on exactly this concept. But a lot of people aren't willing to do that, nor should they feel compelled to. Working the same 40 hours a week while making more stuff seems perfectly reasonable.


From the article:

> This is a good instinct: one of the virtues of democracy is the way that it gives people a feeling of control over their own lives. People who believe that they can rein in AI companies through votes and laws and regulations will be much less likely to turn to violence.

I like how this is entirely put in terms of "feelings" and "beliefs" with the ultimate goal being to keep people from resorting to violence. It doesn't seem to play any role how much control people actually have.


> don't we want people running these companies to be honest

What about any of these folks’ biographies hints that they’re capable of being honest?


Which one of those things he said do you consider "honest" and not PR? Both of them sound like PR to me, just aimed at different audiences.

I think that before he thought OpenAI was going to make him a trillionaire, he was more honest about x-risk and job displacement, since he didn't have the incentive to lie. Most early AI thinkers saw AI as more dangerous than nukes.

> We founded Anthropic because we believe the impact of AI might be comparable to that of the industrial and scientific revolutions, but we aren’t confident it will go well. [1]

[1]. https://www.anthropic.com/news/core-views-on-ai-safety


How exactly is OpenAI going to make him a trillionaire? He doesn’t own any equity. I’m sure he owns some indirectly via funds he’s an investor in, but nowhere near enough to make him a trillionaire.

So where did this idea that Altman is aiming for a financial payoff come from? He could easily have taken equity, and didn’t. Why? What part of the evil master plan is that?


IMHO, shrugging it off as “superficially plausible text” is the opposite extreme.

We’ve been past merely plausible text since GPT-2, and it’s undeniable that the technology is making waves right now and having an impact.

Just as you couldn’t judge the impact of the Industrial Revolution by the first steam engines, you can’t dismiss the impact the technology is having right now.


In writing code, yes. But has there been an actual positive impact in other fields?

No. It ruins art, ruins music, ruins communication and on and on. It's cancerous with respect to anything related to art or cultural value.

Why "ruins"? just because it's not made by a human?

AI-made music is frankly pretty good; do you actually listen to it?


What's the point of listening to purely AI-generated music?

I don't mean music that has AI-generated stems as part of an arrangement, where a human actually created it and used AI for bits and pieces; I see absolutely no point in listening to purely AI-generated music. The fundamental essence of music is emotion. Listening to something generated without emotion has no point; it might sound good, but it's hollow and devoid of meaning.

I've tried to listen to it, and it doesn't even make me "sad"; it makes me feel... nothing. I'm a hobby musician, and I incorporated some AI-generated parts into some tracks, where I mangled/processed them, but my idea was precisely to express how hollow AI-generated music is without the human aspect.


> What's the point of listening to purely AI-generated music?

For formulaic music-as-a-product (McMusic™) it arguably makes no difference whatsoever whether it is totally machine-made or assembled out of vat-grown parts in the muzak factory. This says far more about this category of music than it does about the value of machine-made music. Insta-pop, a large fraction of hiphop, supermarket country, plastic metal: there's plenty of formulaic trash made by man as well as machine. Even the supposedly man-made stuff was often half machine-made already before the advent of generative models, so the other half did not make much of a difference.

If you're looking for music which makes you feel things (other than 'comfortably numb' to borrow a phrase from some real musicians) you're probably looking in the wrong area. It is the new music for airports, elevator music, hold-the-line music, slide-show-music, acoustical filler.


A lot of the music that autoplays on Spotify is AI, and I literally didn't know until I checked; the emotion was triggered successfully. I don't really see why only a human should be able to trigger an emotion in you. Like, if I'm at a party, let's say I don't know the artist, everything is AI-made, and everybody is vibing: then what's "wrong" with it?

I think this is more the musician's side, which I respect, but a lot of people simply would not care who (or what) created it.


Most people don't care about music, as most don't care about art in general. People like entertainment though.

What you are describing is more akin to a form of hollow entertainment through the medium of music; a lot of pop music can also fall into that category (no, not all of it: there is also a lot of artistry in many pop artists/songs).

If AI-generated music triggers emotions in you, then keep consuming it, but know that it's a hollow form of the art: there's no one on the other side communicating with you. It's basically like having a conversation with a chatbot; it might sound human, but you know there's no one on the other side listening to you. AI music is the same the other way around: there's no one on the other side telling you a story, or a feeling they went through. It's just a mimesis of it.


Music has served various roles throughout history. The whole notion of music being "art" and "invoking feelings" has not always been consistently true across the entirety of its history of various cultures. Painting, drawing, sculpting, and other visual arts have had a similar history as well.

We can take examples of pieces from famous composers: much of Haydn's work, some pieces from Handel, Bach, Mozart, etc. Some of their works were commissioned for particular functions, whether for courts, dances, aristocratic displays, churches, or other events. Even on the battlefield, music has been used to direct troops, relay orders, and carry other forms of communication. My point is that there is not always a story to be told. Music can also be used to disrupt one's sense of time: while on hold on the phone, in elevators, etc. I would not say the music in those instances is really telling me a story either.

Much like the visual arts. Emotion can be expressed in a piece, but pieces can also be functional in nature. There is a difference between figures in an instruction manual, portrait paintings, and a van Gogh piece.

Not to mention that this debate has been had countless times throughout history as well. It's always the same no-true-Scotsman fallacy. For example, some critics of electronic music made a similar argument way before AI.

"It's not real music if there are no instruments."

"It's not real music if <racial/cultural demographic> creates or plays it."

"It's not real music if the music does not adhere to contrapuntal rules."

I think what angers people most is that as technology progresses, the gap between effort and accomplishment decreases. Thus there is some sort of clinging to a sunk cost fallacy for some. As if something being easy to create devalues all the effort one has put into something. Maybe it does? I do not personally think so. If anything, it allows greater access for people to participate in the arts -- something the arts have also had a historically rocky relationship trying to gatekeep.

The invention of the camera did not make painting irrelevant. It even opened a new door to the world of visual arts. I do not think AI music will make musicians irrelevant either, and perhaps new doors might open too.


You invoke gatekeeping and the no-true-scotsman fallacy. Fair enough. I might add "Dylan goes electric" to the pile of examples.

However, look at this other comment: https://news.ycombinator.com/item?id=48099547

There, patterns of electrical signals are said to be good, and not bad, but at the same time "morality" gets dismissed. But ideas of good and bad are morality!

Note that moral ideas don't have to be correct. "I should burgle a house" is a moral idea, an idea in the domain of morality. 1920s disapproval of the seductive decadent jungle rhythms of jazz was moral (I guess we say "moralistic" to indicate that we don't agree). The opposite attitude, praising jazz, is also moral. Treating Dylan as a traitor for going electric was moral, and attending the metal love-in that was Ozzy's farewell concert was also moral.

Then, a couple of posts up the thread from you, there's an imagined scene of people "vibing" to music at a party where everything is AI made. This sounds disgusting, somewhere between vaping and using a vibrator, and so I think I have to grudgingly give it my full approval. These imaginary young people are enjoying the vibe that they have vaguely selected. Maybe they had some input about the genre, maybe implicitly. They're choosing not to turn it off, anyway, because they like it, they think the vibe is good.

You imply that everybody saying "It's not real music" is wrong. OK, kind of, but they're not completely wrong. It doesn't follow that just because of our long history of snobbery, therefore everything is real music. The snobs are doing gatekeeping, but they're also doing discernment, and participating in the kind of moral ideas that music and art is made of. It's such a pain to define art that I'm liable to be downvoted for trying: some people are certain that relativism is the way forward, and that it's a brilliant insight to throw our hands in the air and give up. You're quite right that it has to encompass lots of different things, and no one defining feature will withstand counterexamples, but it can still be defined in a vague way as a collection of optional qualities, under which we could say that an instruction manual is not really art, but arguably artistic or artfully made.

So, I'm not judging the AI music as art or not-art right now, but I'm saying that it's amenable to so being judged. Anybody claiming that it's good music is admitting the possibility that it might be bad music, and this is a moral matter, about the value of feelings, meanings, and affections. That even applies to good or bad elevator music, it's trivial background sound, but approval or disapproval of it is moral. This is not about its worth as patterns of signals, because that's reductive. Those patterns mean things, or matter to us in ways that we have preferences about, which are value judgments.


I have. It's overly polished, formulaic and dull. It's devoid of any of the qualities that make music interesting. There's nothing a human is trying to communicate. Perhaps it could be used as elevator or hold music.

I agree, it's shockingly good these days; we can argue about morality etc, fine, but burying one's head in the sand and claiming it's bad puts you at odds with reality, which isn't a good place to be.

It's pretty silly that so many people take as an axiom that the human brain basically has a monopoly on certain patterns of electrical signals, and have semi-religious beliefs that this will always be the case.


It's not that AI can't convince a novice that what comes out is passable.

It's that experts in a field generally agree that what comes out is insidiously hollow garbage.

This isn't a "semi-religious" belief. It's linear token soup and diffusion bakes running headfirst into actual expertise, second and third order effects, refined skill and taste, and so on.

If you actually want to see civilization advance, you cannot rely on machines that merely mash up existing intellectual output while pretending to have expertise.

We already had that in the form of art school avant-gardism. AI is just style transfer of that, with corporate sycophancy and valley hyperbole as a veneer.


It's not the experts that are going to be listening to the music. It's not made for the experts to pick apart and analyze.

But do you really believe it will stay that way? What do you think models will be 10 years from now (and not only models; we must include processes and tools)? Developers were thinking this until recently; then there is some sort of sudden switch to "shit, it's good enough," and pass that through a 50x loop and suddenly it becomes "shit, it's actually great." Which proves, imo, that it's a matter of time before it's not hollow garbage but actually innovative and expert in its field.

If it's generated by a model, I would avoid listening to it, much as I'd avoid a visual or video generated by a model.

I still think you are missing entirely the point about music or any art in general.

It doesn't matter how technically innovative a model is, or how much expertise it has: while an AI is not a consciousness that can express itself, its output will be hollow. There's no way around that.

If some form of AI becomes conscious and can express itself through whatever art form it conjures, why would it even use music? Music is human; it's tuned to how our brains work and perceive sounds. I'd be much more interested to discover what art forms another form of consciousness that we can communicate with would come up with on its own.


I can't fully agree with the hollow part. When AI resonates with me about real-life issues (I understand it's just a machine without thoughts), it's pretty expressive and spot-on, and genuinely useful. I don't really see why it couldn't be the same with music; it can already write completely unique pieces that are very entertaining and full of emotions (even though they are "fake")...

The brain perceiving sounds a certain way in the end is just data that can be mapped as well. An AI can make us laugh precisely because it understands speech really well (and will be a thousand times better someday); what's the actual difference with music?

Let me give you another example: there's a meme about older folks getting bamboozled by AI images (especially doomsday stuff), which proves that such images do trigger genuine emotions in them. What's the difference whether that image actually exists or not (or, let's say, whether a human photographed it)?


I can't go see AI music live. Staring at a GPU just isn't the same.

What if that does not matter to someone? I know my opinion can't be common, but I cannot stand live music. I dislike the sound quality, the differences from the recording, the crowds, the cost, and more.

I know not everyone enjoys concerts, but it’s fundamental to my listening experience. That aside, I have no interest in music or art of any kind generated by AI. Other folks might, but I’ll have nothing to do with it.

The difference is the undeniable reality behind it.

You are confusing the surface of it with the substance. What's the point of something without substance? Without meaning? It's just fake. Whenever you point out to someone that an image that brought them joy is fake, generated by AI, it immediately changes the feeling they had. It doesn't bring the same awe anymore; awe is reserved for what is real. It might bring awe in the sense of "woah, a computer can do that," but that's a different feeling than being in awe of the story the image created.

How can it be full of emotion if it's created by something without emotion? It's just a mimicry of emotion. I really cannot understand how you cannot feel that, knowing it's not created by another being. Being real is the whole point; an emotion triggered by something not real, not experienced, transformed, and communicated by someone else is inevitably hollow.

Like: how can AI know what it is to feel in love? Or to feel the loss of a loved one? Or to feel despair about something? Or to feel depressed? Or to feel extreme joy? Why would you listen to a song telling you a story to evoke an emotion from something that simply does not exist? There is no experience being transmitted; it's purely a hollow, amalgamated mimicry of the experiences that were ingested, and the output has absolutely no emotion, just a synthetic mimesis of it.

You are enjoying the mimicry, it's entertaining, but I really would like for you to ask yourself deeper questions about this rather than be impressed by the surface of it.

> The brain perceiving sounds a certain way in the end is just data that can be mapped as well

You completely missed the point.


I completely understand your point of view, but I can't genuinely agree with:

> How can it be full of emotion if it's created by something without emotion?

A nice crystal or a nice rock (something devoid of emotion or feeling) is used as art, and it triggers emotions in individuals; this thing doesn't have a consciousness, nor does it understand anything, but it's still able to change human brain chemistry. Take an AI that acts as a therapist, saying the EXACT same things a real therapist would; take it further, with the "therapist" on a video call for a proper representation, and say it's now 1:1 AI-generated with zero flaws (you'd think it's a human with the exact same speech as that therapist). Why would the experience not be transmitted? A ton of people say things they don't really mean as well, and those thoughts are transmitted successfully, felt or not.

AI can evoke pain, emotion, distress, happiness and so on. I genuinely try to think about what's behind it, but what I feel is that, in the end, humans aren't so magical. It's like watching a beautiful woman being all "fake" with heavy make-up: most humans can still appreciate it, despite knowing it's all BS. People lie as well, which is very deceptive. Let's say someone says he is so happy but in reality he just isn't; you felt something for him that was just false (a mimicry), and this is kind of our normal.

What if you never knew? Let's say you are very fond of an artist/person, but in the end you discover it's 100% AI without human supervision. Then what? Those were real emotions you felt, not entertainment; you RELATED to that "person," you felt his pain.

And one more thing: why couldn't I teach an AI to transmit my own knowledge, speak to it for decades, write to it for decades, and then have it mimic everything, mimicking the "truth" about my inner self? Why would that not be valid? Isn't that exactly what the Bible is doing (I'm not religious)? People seem to find it valid.


Medicine?

There was recently an article shared around here that an LLM diagnosed ER patients more accurately than doctors.

Looking beyond LLMs, there's image analysis to detect cancer and other diseases.

Like in coding, AI can and should be a useful tool for the human who decides and is ultimately responsible.


If you read more than the headline, it was not how doctors diagnose patients in an ER (a small text-only description of symptoms).

Remember when IBM claimed the same about Watson?

“In producing textiles, but has there been actual positive impact in other sectors?” I’m sure the Industrial Revolution didn’t just happen all at once; it started somewhere and crept.

Support of all kinds (including voice), marketing, real estate, finance... yes, a ton of fields are being heavily impacted right now. But right now doesn't even matter; what matters is what we know it will reach as theory becomes practice.

Generally, people don't care about "fields being impacted", and the students certainly don't. People care about the impact certain technology has on their daily lives, on their welfare and the ability to pay off their mortgage and provide a decent life for their children.

The AI as it is today isn't really doing any of those things. At most, it's a sort of reliable replacement for Google Search. Worse even, it's being presented as a threat to all those things the people care about.


> comes out of the same mindset that has turned a neat technology into a banal hellscape for consumers and employees

I'm going to say up front that I'm not as familiar with this period of history as I should be, but -- would it be totally unfair to say the same of the "Industrial Revolution"?

I'm not gonna say they're equivalent by any means, but my understanding is the "Industrial Revolution" was hellish for many people. Maybe the mistake is the framing that "the revolution" or "the next big thing" is always a good thing?


> the mistake is the framing that "the revolution" or "the next big thing" is always a good thing?

They are good things. If you were an adult, male aristocrat, yes, your untouched meadows and streams got tainted. If you were a woman you stopped dying in childbirth. If you think of infants as people, they stopped massively dying.

The Industrial Revolution was good. But it also required erecting the modern administrative state to manage. People had to soberly measure the problems, weigh the benefits and risks, and then invent new institutions and ways of thinking to accommodate the new world.


It was good on a long time scale, but I think the parent poster refers to the short term. If I recall correctly, during the early Industrial Revolution the average life span decreased, child mortality went through the roof, and malnutrition meant adults lost their teeth in their early 20s at best. That was… worse. It took time for the revolution to become a net-positive for the average person (which I certainly wouldn’t dispute).

> They are good things. If you were an adult, male aristocrat, yes, your untouched meadows and streams got tainted. If you were a woman you stopped dying in childbirth. If you think of infants as people, they stopped massively dying.

That happened in the Second Industrial Revolution. The First Industrial Revolution was much less comfortable for both workers (who were given much worse working conditions) and the aristocracy (whose landholdings were much less valuable) - it was the middle class who benefited.

> The Industrial Revolution was good.

The outcomes of the Industrial Revolutions were good. The experience of living through those revolutions was mixed.


How about if you were a working-class child, just about to start in a mine or a textile mill? Was it good for them?

Infant deaths decreased for a while (and NOT because of the industrial revolution):

> These patterns are better explained by changes in breastfeeding practices and the prevalence or virulence of particular pathogens than by changes in sanitary conditions or poverty[1]

then rose:

> Mortality at ages 1-4 years demonstrated a more complex pattern, falling between 1750 and 1830 before rising abruptly in the mid-nineteenth century.

[1] Davenport, Romola J. (2021). "Mortality, migration and epidemiological change in English cities, 1600–1870." International Journal of Paleopathology, 34, 37–49. PMC7611108.


That's primarily the second industrial revolution (~1870-1914). The _first_ (~1750-1840) was... not so great, and note the gap. If your analogy is the industrial revolution, then "well, it's a bit shit now, but it'll all work out fine in 150 years" isn't _great_ messaging, really.

The public can't see any trains, electricity, concrete or glass windows; they see employment going away as workers and zero benefit as consumers.

Maybe AI enables great inventions in a decade, but for now the only appeal is that multinational corporations get to fire workers and everything's filled with slop. Of course they're not happy.


I suspect many people during the Industrial Revolution weren't seeing those end products either, only a total upending of their way of life and means of earning a living. And to be fair, many of them probably didn't experience enough of the upside in their lives to make up for the shock of the transition. Ideally this time around we can make that shock less painful, but I'm skeptical.

They have to frame it this way, because the market has invested in it being "the next internet" kind of event.

It's much more than that, it will solve the deepest mysteries of the universe, not now, but in a decade, very likely.

You sound manic.

Realistic? Tell me why I'm wrong at least.

Or religious. Or both.

> There's a very real possibility that AI proponents completely lose the next generation of adults.

The college-age students I interact with hate AI content from other people, but they love using AI for their own work.

They'll pump AI generated memes and AI altered images all day long. Then they'll use ChatGPT to do their homework and write their resume, then look for an AI tool that will spam apply to jobs for them. Then when they get the job they plan to use ChatGPT to level the playing field with more experienced, older peers.

That's not even getting into the AI entrepreneurs who think they're going to use AI to start a company or find a winning strategy to trade memecoins or bet on PolyMarket so they don't have to get a job at all.

I think the next generation is all-in on AI for their own use. They see it as their advantage over the boomers occupying all the good jobs. They think ChatGPT is their cheat code for getting into these companies and taking those jobs.


We are about to experience the commoditization of intellectual work, in much the same way the Industrial Revolution commoditized manual production. I don’t expect a Musk-esque abundance utopia this decade, but the impact will exceed anything we’ve seen in centuries. There is not an industry on earth that won’t change in the next few decades.

To conceptualize AI as merely “superficially plausible text” would be like writing off a Watt steam engine in 1776. The current AI bubble might be early, but it won’t be wrong. The fervor with which corporations are exploring the space stems not from misplaced optimism but an existential threat. Right now every industry is vulnerable to disruption on a massive scale.

And we’re still in the early stages. Frontier models like Claude or GPT-5.5 are still just tuning 2017’s “Attention is All You Need” with MoE, RLHF, and more compute. We are roughly where online services were in the early 90s, when Prodigy and CompuServe were battling it out for market share before the open web swept them aside.

We are still waiting for the modern equivalents of Yahoo, Google, Amazon, and Facebook, never mind the lessers. As Tim Berners-Lee said of the web: “we have not seen it yet. The future is still so much bigger than the past.”


as is tradition. an AI boom is always followed by an AI winter haha

It has been in the past. I think this time is different though.

as is tradition

> a very real possibility that AI proponents completely lose the next generation of adults

I doubt it. AI seems fundamentally useful. If the guys at the top can’t get their shit together with messaging and strategy, and it increasingly looks like they can’t, they’ll be replaced before an entire generation is potentially rendered permanently uncompetitive. (And to be clear, there is no rush to adopt.)

> We desperately need some leadership in companies or institutions that can place this technology in its proper context

We need the public debate to stop being set by Altman, Musk et al. We need our generation’s Dickens, Tolstoys, Sinclairs and Whitmans.

What are the ways potential futures with AI, on the spectrum from the familiar sci-fi AGI to more-subtle forms, could work? What are the novel ways it might not? How does capitalism need to evolve? Electoral democracy? Labour organization? If I think to the last few years of television and movies, Westworld is the only one to have contributed anything original to the discourse since Isaac Asimov’s era of science fiction.


> We need our generation’s Dickens, Tolstoys, Sinclairs and Whitmans.

They're out there, but the artists are roundly anti-AI; if you want their input, you have to listen to what they're saying, rather than pretending that dissenting voices are uninformed.


That will happen inevitably. We are throwing spaghetti at the wall right now and cleaning up the mess; lessons will be learned. The question is whether that phase will lead to real lasting damage, and to what. For myself, I no longer read cold emails; I believe they are all AI-generated, and that communication method may legitimately die culturally. What else will be destroyed?

Many things will change, because many things in the world right now are useless; literally, most jobs in a way shouldn't even exist. You think the guy behind the McDonald's counter should exist? He shouldn't; that's just an engineering "mistake," as it can already be solved. The world is just slow to catch up, and it's not only AI, it's automation in general. We've banked for decades on jobs that virtually shouldn't exist except for the sole purpose of creating jobs; it's literally like a giant Ponzi scheme, and it will all catch up at some point.

I think society will completely reshape itself over the next decades, likely with UBI and other forms of social help, and the ones who don't want to partake in the whole "AI orchestration" will just not have any opportunity, imo. Sad, but this is the way I see it. I truly believe it because I and ALL the people I know have pseudo-replaced their work with solely orchestrating AI, including very complex jobs. Lately, because some of my friends asked me, I've also built "agents" that replaced their work entirely, and their employers don't even know about it (customer management, remote), which proves that those jobs shouldn't even exist, as they are ALREADY replaceable. All Zoom meetings are immediately recorded, agents do a basic adversarial loop across all the common models, then proceed with doing tasks and so on; that lasts about 30 minutes, and the whole week of work is done. All chats are sent directly to a triage agent as well, then the whole RAG thing, and so on.

My work went from managing/developing 1 repo to 70 repos at once, evening to morning, answering questions like a bot 10 hours a day with 8 monitors in front of my face. And I'm realistic: I know at some point I can literally replace my own self with an AI to answer for me as well; it's just a matter of time.

We need to rethink everything and the whole AI hate from the youth will not change anything about it.

I have multiple friends also running pretty large businesses with 30 or more staff, and right now they are literally at the point of arguing about why they shouldn't fire most of them. It's fuckin sad, but it's the reality.


Why would they give you a UBI? They would have no reason to do that...

Many countries have a form of UBI, although it's not unconditional in the way UBI implies. Look at France with its RSA as an example: if you have no income or a low income, you are entitled to it.

RSA is not UBI. UBI literally means Universal Basic Income; it's not just for no-income/low-income people, it's universal.

You are conflating the concept of UBI with social welfare. They are different things, and it's a bit annoying to see the erosion of the UBI concept into social welfare. I've noticed an uptick of this over the past year or so; no idea where it's originating from...


Agreed, I butchered it. But what is the concrete difference right now, for someone who has no job (where UBI is most relevant), between social welfare and "UBI," if in the end that person gets a monthly income that is somehow guaranteed?

The concrete difference is simple: you don't have to spend most of your life convincing the government that you deserve to continue getting money. If you've ever interacted with a welfare system, you'll know. They work much better on paper than in real life.

The concrete difference is that the society around a person living in a UBI society will be very different from one where there's only social welfare.

Where will this UBI money come from when there is no one to tax and the profits are going to tech firms based in America?

I don't have enough expertise in this field, but I don't think we should be thinking only of a doomsday scenario. Humans are quite resilient and innovative; society will completely change, and I genuinely believe we will find ways. There will be a lot of suffering in between (and maybe after as well, as there is now), but we might eventually reach a point in automation where a lot of prices drop to the point where it's virtually free, food could be included. If we do have 24/7 machines that can build, expand, deliver and so on, with free energy somehow, it's not crazy to think that a kg of chicken could be worth ten times less than it is now; so many things could be reconsidered.

UBI could also mean that people could live in places further away from main cities, and eventually housing will be built automatically as well, so costs could drop sharply.


> but we might eventually reach a point in automation where a lot of prices drop to the point where it's virtually free, food could be included,

It took two world wars till we had an aberrational period where the middle class actually had lives which were good.

UBI can’t happen because governments globally don't have the money to pay for it. It’s good to hope, but the details aren’t in its favor.


Well, if AI keeps cannibalizing jobs, governments will HAVE to find a way to make UBI work. If they do not, there will be deadly riots as people lose their livelihoods.

There will simply have to be a solution, whether or not it’s deemed “impossible.”


I believe the solution is fairly simple: military robots to keep the masses in check.

Just look at how utterly evil and despicable the rich and powerful have been acting over the last years. Do you really think that people who are investing in shock collars to prepare for societal breakdown scenarios would give the non-rich masses a single red dime?


Of course humans will adapt, the core issue is how we can avoid as much suffering as possible while these changes happen, that's always the point. No one wants to live a life during a transitional period in history where suffering is increased, as a species we should be working to alleviate that.

What's the point of progress if we keep repeating the same mistakes of leaving miserable people behind? Is that progress or just a repetition of the cycle with new shiny things?


Additionally, certain work feels good. It feels good to accomplish something.

We'll have no UBI and little purpose.


> The people who rely on it are not cool.

That's the only statement that's true. Admitting to AI use is unfashionable in the Western world at this time.

But how much would you like to bet that 90% of those students who were booing also used AI to do their homework for them quite often? So your takeaway would be "the AI stole their education"? No, they were dishonest, and the AI helped them cheat themselves out of learning.

Technology doesn't make anything banal or a hellscape, or fire people. Technology is a lever.

If humans use AI to produce worse output because they are too lazy to bother reviewing and iterating on it, that is a human problem. If humans are going to use AI to help them exploit other humans more efficiently, that is also caused by the human rather than the technology.

Also, the ChatGPT moment for humanoid robots is coming this year or next. It will become very obvious that AI use in these robots is not just superficially plausible text.


> But how much would you like to bet that 90% of those students who were booing also used AI to do their homework for them quite often? So your takeaway would be "the AI stole their education"? No, they were dishonest, and the AI helped them cheat themselves out of learning.

This is like saying a smoker can't criticize the tobacco industry. It's entirely possible to recognize that AI in school is a huge problem while (hypothetically, in this case) still using it. Indeed, if enough of your peers are using it and you do not, you are effectively being punished for being virtuous. It's a lot like being the one cyclist in the Tour de France who isn't doping.

Similarly, if your peers aren't able to keep a conversation going in a seminar because they had AI do their reading and assignments for them, then you, as a student, are having your education stolen from you in a very real way. Education is something that happens in community. When enough of your community is using AI, your education will suffer.


Again that is a problem with the group of people and how they use technology rather than the technology itself.

I will die on this hill: AI _properly_ integrated into education will be a huge improvement for students because it will enable each student to have personalized instruction and tutoring.


> AI _properly_ integrated into education will be a huge improvement for students because it will enable each student to have personalized instruction and tutoring.

This is a fine thing to wish for. But literally every AI company today wants their customers to use AI as much as possible.

I, too, would like to live in a world where AI is only _properly_ integrated into education. But that is impossible without limiting its improper integration. And no AI company wants any limits on AI.


A quarter of the answers Gemini gives me are made up. What would be the purpose of such, uh, “instruction”?

Will they get AI to stop lying first?

exactly! if you _properly_ integrate this argument into Scotland, _then_ it will be a true scotsman, otherwise definitely not!

Smokers can criticize. But they also have to face the cancer that will personally affect them. And hopefully take accountability for it.

maybe what we really need is a butlerian jihad

They were thinking machines, at least. Here we’ve got a good guesser that fools 50% of the population at any given time into believing it’s anything but guessing.

> The output is not enjoyable to consume, the people who rely on it are not cool, and the effects of using it are unpleasant and hard to defend on aesthetic, intellectual, or moral grounds.

The AI output you are referring to mostly seems to refer to “AI slop”. It’s not hard to argue that AI slop sucks.

There is a lot that AI does that has created joy for me or people around me:

- whimsical profile pics for online profiles for me, family, and friends

- writing e-mails for community groups — good for a family member who doesn’t have the most sociable writing style

- automating data capture and organization

- automating scheduling with multiple and variable constraints

- catching obvious errors that somehow still happen (e.g., off by one errors)

- filling in gaps in analysis either due to gaps in knowledge or simply an oversight

These are sample of things that I have done or helped people do in the past week, and the results have been well-received.

Maybe I’m part of that solution that you propose, but I have used words similar to “biggest change since…” (I usually say spreadsheets, but I don’t think “Industrial Revolution” is wrong).

Fwiw, I don’t think the result will be dystopian the way most people seem to think that it will. I firmly believe that meat space interactions will gain much more traction, and that will change the way we live and work.


Perhaps the next generation isn't necessary anymore. At least the majority that can't adjust.

Euthanasia for the young might be the best we can offer to the next generation.

Rapidly depopulating Earth to below 1 billion people seems within our reach.


I don't really think we should talk about it in terms of "use cases" anymore when it can virtually replace or enhance almost any form of white-collar work, and soon physical labor as well. (People will act surprised the moment it comes, of course, the same as with LLMs, despite all the research made prior; if theory supports it, it will be.) Of course humanoids will be in every home, they'll cost the same as a phone soon enough, and we will also not be able to live without them.

We don't talk about human intelligence in terms of "use cases." I think we need to be realistic about what AI will be in our lives; most people already can't do without it, and this will without doubt expand further.


AI coding isn’t an abstraction, though. You can’t treat a prompt like source code because it will give you a different output every time you use it. An abstraction lets you offload cognitive capacity while retaining knowledge of “what you are doing”. With AI coding either you need to carefully review outputs and you aren’t saving any cognitive capacity, or you aren’t looking at the outputs and don’t know what you’re doing, in a very literal sense.


It's staggering to me how many times I've heard this argument that LLMs are just the next level of abstraction. Some people are even comparing them to compilers.


As much as I use AI, even for coding, I really do not like the argument. They are too chaotic to be compilers. The descent from prompt to code has far too many branches, and even small requests begin to build up bad patterns.

There is some fun to consider when sufficiently advanced AI allows this in areas where we are okay with things going wrong, but that seems a very limited domain, for fun and games and not for serious software that needs to be as correct as possible.

I can see vibe coding building very simple systems, and it will likely get better with one-off throwaway systems where edge cases don't matter because we have a one-off need to turn input X into output Y. But when it comes to using AI in systems where correctness matters, long-term support must be provided, and ease of adding new functionality is a serious consideration, it seems we are as far from having prompt-as-code as we are from AGI.


> Some people are even comparing them to compilers.

A lot of people are using them as such, too: all the people talking about "my fleets of agents working on 4 different projects" aren't reviewing that output. They say they are, but they aren't, any more than I review the LLVM IR. It makes me feel like I'm in some fantasy land: I watch Opus 4.7 get things consistently backwards at the margins, mess up, make bugs. We wouldn't accept a compiler that did any of this at this scale or level lol


It's awful, and seeing even engineers I respected become so AI-pilled that they're shipping slop without review has made me lose respect for them. It also can't help but make me wonder: what am I missing? Am I holding it wrong? Am I too focused on irrelevant details?

So far, my conclusion is that while LLMs can be a productivity boost, you have to direct them carefully. They don't really care about friction and bad abstractions in your codebase and will happily keep piling cards on top of the crooked house of cards they've generated.

Just like before AI, you need a cycle of building and refactoring running on repeat with careful reviews. Otherwise you will end up with something that even an LLM will have a hard time working in.


Right? People have put in decades of work to make them extremely reliable; they didn't magically start like that.


Non-determinism is not as much of a problem as the lack of a spec. C++ has the C++ standard, Python has its reference manual. One can refer to them to predict reliably how a program will behave without thinking about the generated assembly. LLMs have no spec.


The two go hand in hand.

Non-determinism is what conveniently fills the gap of having no spec.

In fact, turn the temperature to 0 and it will be virtually deterministic. That only exacerbates the problem that LLMs, as you rightly point out, have no spec.
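
To illustrate with a vanilla softmax sampler (a sketch of the general technique, not any particular vendor's API): as the temperature approaches 0, the distribution collapses onto the argmax, i.e. greedy decoding, which is why temperature 0 is virtually deterministic while the no-spec problem stays untouched.

    import math, random

    def sample(logits, temperature):
        if temperature == 0:
            # Degenerates to greedy decoding: always pick the argmax token.
            return max(range(len(logits)), key=lambda i: logits[i])
        scaled = [x / temperature for x in logits]
        m = max(scaled)  # subtract the max for numerical stability
        weights = [math.exp(s - m) for s in scaled]
        return random.choices(range(len(logits)), weights=weights)[0]

    logits = [2.0, 1.0, 0.5]
    print({sample(logits, 0.0) for _ in range(5)})  # {0} on every run
    print({sample(logits, 1.5) for _ in range(5)})  # varies from run to run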


"You can’t treat a prompt like source code because it will give you a different output every time you use it"

But it seems we are heading there. For simple stuff, if I made a very clear spec, I can be almost sure that every time I give that prompt to an AI, it will work without error, using the same algorithms. So the quality of the prompt is more valuable than the generated code.

Either way, this is what I focus my thinking on right now, something that was always important and with AI even more so: crystal-clear language describing what the program should do and how.

That requires enough thinking effort.


Didn't work for the prod data that the AI nuked in spite of prompts saying "DON'T FUCKING GUESS", just like that, in all caps: https://news.ycombinator.com/item?id=47911524

What makes you think it will work for you?


That I don't let agents run wild in a production environment?


You let them write code that runs in prod, which is the same thing with extra steps.

Unless you review that code carefully, and then we're back to the point about it not saving you any cognitive overhead.


Of course it saves me overhead: I don't have to read all the necessary docs etc. myself, I just check the resulting code, and I don't have to type it all myself.


> Of course it saves me overhead: I don't have to read all the necessary docs etc. myself, I just check the resulting code, and I don't have to type it all myself.

That link I posted upthread, the "developer" did not read the docs "because the LLM wrote the code" and hence did not realise that the token they had was a full access token, not a limited access token.

You're now boasting that you also don't have to read the docs because the LLM wrote the code?

I mean, it's literally the cause of their irreversible production deletion, in a link that you replied to, and you state you do the same?


It depends on the project. Anything high-stakes, I verify myself. But even with simple things, it is still way faster to have an agent dig through it and quote me the relevant sections, and for me to evaluate whether it is sound or not.


>> You let them write code that runs in prod, which is the same thing with extra steps.

The “with extra steps” is doing a lot of work in that sentence.


> if I made a very clear spec, I can be almost sure

That "almost" is doing a lot of heavy lifting here. This is just "make no mistakes" "you're holding it wrong" magical thinking.

In every project, there is always a gap between what you think you want and what you actually need. Part of the build process is working that out. You can't write better specs to solve this, because you don't know what it is yet.

On top of that, you introduce a _second_ gap of pulling a lever and seeing if you get a sip of juice or an electric shock lol. You can't really spec your way out of that one, either, because you're using a non-deterministic process.


Well, unfortunately it is the same with real humans, who happen to be non-deterministic as well. If I give them a task, I can be almost sure they will do it. But even humans can have unexpected psychotic breakdowns and do destructive things like deleting important databases.

So right now, humans are for sure more reliable. But it is changing. There are things I already trust an LLM with more than random, or even certain known, humans.


Your spec is a guideline, not something the LLM has to adhere to. It is definitely not guaranteed to work without error.


Are humans guaranteed to work without error?


> AI coding isn’t an abstraction

Isn't it an abstraction similar to how an engineering or product manager is? Tell the (human or AI coder) what you want, and the coder writes code to fulfill your request. If it's not what you want, have them modify what they've made or start over with a new approach.


No, because software engineering is more than <insert coin, receive code>. I've never had a full spec dropped on my desk lol. There's no abstraction.

Software engineering is a lot more social and communication-heavy than people think. Part of my job is to _not_ take specs at face value. You learn real quick that what people say they need and what they actually need are often miles apart. That's not arrogance, that's just how humans work.

A good product manager understands the biz needs and the consumer market and I know how to build stuff and what's worked in the past. We figure out what to build together. AIs don't think and can't do this in any effective way.

Also, if you fuck up badly enough that you make your engineers throw out code, you're gonna get fired lol


With an abstraction, you literally move your thinking up a level. You move a floor up the tower and no longer have to think about what's happening below. The moment something leaves your floor, its course is set. If a result comes back, it's something familiar, not something from the lower floor.

A human coder can be seen as an abstraction level because they will talk to the PM in product terms, not in code, and the PM will be reviewing the product. What makes this work is the underlying contract: only a small number of iterations should be necessary before the product is done, and each successive one should require less of the PM's time.

We've already established that using an LLM tool that way does not work. You can spend a whole month going back and forth, never looking at code, and still not have something that can be made to work. And as soon as you look at the code, you've breached the abstraction layer yourself.


At what point do LLMs enable bad engineering practices, if instead of working to abstract or encapsulate toilsome programming tasks we point an expensive slot machine at them and generate a bunch of verbose code and carry on? I'm not sure where the tradeoff leads if there's no longer a pain signal for things that need to be re-thought or re-architected. And when anyone does create a new framework or abstraction, it doesn't have enough prior art for an LLM to adeptly generate, and fails to gain traction.


How much of "good engineering practices" exist because we're trying to make it easy for humans to work with the code?

Pick your favorite GoF design pattern. Is that the best way to do it for the computer, or the best way to do it for the developer?

I'm just making this up now, maybe it's not the greatest example; but, let's consider the "visitor" pattern.

There's some framework that does a big loop and calls the visit() function on an object. If you want to add a new type, you inherit from that interface, implement visit() on your class, and all is well. As a "good" engineering practice, this makes sense to a developer: you don't have to touch much code, and your stuff lives in its own little area. That all feels right to us as developers because we don't have a big context window.

But what if your code was all generated, and you want to add a new type that does something that would have been done in visit()? You tell the LLM "add this new functionality to the loop for this type of object". Maybe it writes a case statement and puts the logic right in the loop. That "feels" bad if there's a human in the loop, but does it matter to the computer?
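
To make the contrast concrete, here's a minimal sketch in Python (the class and function names are hypothetical, not from any real framework):

    # Visitor-style: each type owns its behavior; the loop never changes.
    class Circle:
        def visit(self):
            print("drawing a circle")

    class Square:
        def visit(self):
            print("drawing a square")

    def render(shapes):
        for shape in shapes:
            shape.visit()

    # Generated-code style: the behavior is inlined in the loop itself.
    # Adding a type means editing the loop, which feels wrong to a human
    # maintainer but is mechanically equivalent if a tool regenerates it.
    def render_inline(shapes):
        for shape in shapes:
            if isinstance(shape, Circle):
                print("drawing a circle")
            elif isinstance(shape, Square):
                print("drawing a square")

    render([Circle(), Square()])
    render_inline([Circle(), Square()])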

Yes, we're early: LLMs aren't deterministic, and verification may be hard now. But that may change.

In the context of a higher-level language, y=x/3 and y=x/4 look the same, but I bet the generated assembly does a shift on the latter and a multiply-by-a-constant on the former. While the "developer interface", the source code, looks similar (like writing to a visitor pattern), the generated assembly will look different. Do we care?


LLMs have limited working memory, like humans, and most of the practices that increase human programming effectiveness increase LLM effectiveness too. In fact more so, because LLMs are goldfish that retain no mental model between runs, so the docs had better be good, abstractions tight, and coding practices consistent such that code makes sense locally and globally.


So are we basically saying that LLMs work most effectively on codebases that exhibit good coding practices, but are not particularly good at producing such quality code themselves, since they were trained on all the code that exists?

I don't know what conclusion to draw from that. Maybe that there's no such thing as a free lunch, after all.


The conclusion I draw is that LLMs need a human expert with some taste and agency to empower/supervise them. Just like human dev teams do.


It's actually been useful for me to explain certain best practices now that I can show that the LLM cares.

Why is this name bad? Because an LLM will get confused by it and do the wrong thing half the time.


Code is a design tool, just like lines on an engineering drawing. Most of the time you do not care whether it was drawn with a pen or a pencil, or whether it was printed out, but you do care about the cross section of the thing depicted. The only time you care about pen versus pencil is for preservation.

So I don’t care about assembly because it does not matter usually in any metric. I design using code because that’s how I communicate intent.

If you learn how to draw, you very quickly find that no one talks about lines (which are mostly all you make); you hear about shapes, texture, edges, values, balance... It's in these higher abstractions that intent resides.

Same with coding. No one thinks in keywords, brackets, or lines of code. Instead, you quickly build higher abstractions, and that's where you live. The upside is that those concepts have no ambiguity.


Great Q, and your framing "there's no longer a pain signal for things that need to be re-thought or re-architected" perfectly encapsulates a concern I hadn't yet articulated so cleanly. Thanks for that!


It's easy to get this way with enough scrolling, try to focus on the things around you in real life. If you aren't reading LinkedIn or HN, how much do you actually hear about AI in day-to-day life? If someone at work directly asks you to do something using AI, you might make some effort to do it. But otherwise let the news and hype cycle play out. You don't need to anticipate or keep abreast of where people think things will be in ten years... they are almost certainly wrong. Think of LinkedIn and HN as entertainment at best. Work on personal coding projects without AI, build relationships with non-tech people, go outside.


It’s notable that just the English “implementation” of FizzBuzz here is longer and more ambiguous than the naive Python implementation, never mind the boilerplate (which itself is also longer than the Python).

The explosion of frameworks and YAML tools the author describes can be attributed to the fact that English is an extremely poor language for program specification, and requires all kinds of guardrails and annotation to accomplish the same specificity as a typical computer program.
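
For reference, the naive Python implementation being compared against is only a handful of lines:

    # The naive FizzBuzz in Python: shorter and less ambiguous than any
    # English description of the same behavior.
    for i in range(1, 101):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)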


> is longer and more ambiguous than the naive Python implementation

write it in lojban instead, ez


LLM coding isn't a new level of abstraction. Abstractions are (semi-)reliable ways to manage complexity by creating building blocks that represent complex behavior, that are useful for reasoning about outcomes.

Because model output can vary widely from invocation to invocation, let alone model to model, prompts aren't reliable abstractions. You can't send someone all of the prompts for a vibecoded program and know they will get a binary with generally the same behavior. An effective programmer in the LLM age won't be saving mental energy by reasoning about the prompts, they will be fiddling with the prompts, crossing their fingers that it produces workable code, then going back to reasoning about the code to ensure it meets their specification.

What I think the discipline will find after the dust settles is that traditional computer code is the "easiest" way to reason about computer behavior. It requires some learning curve, yes, but it remains the highest level of real "abstraction", with LLMs being more of a slot machine for saving the typing of some boilerplate.


I think the analogy to high level programming languages misunderstands the value of abstraction and notation. You can’t reason about the behavior of an English prompt because English is underspecified. The value of code is that it has a fairly strong semantic correlation to machine operations, and reasoning about high level code is equivalent to reasoning about machine code. That’s why even with all this advancement we continue to check in code to our repositories and leave the sloppy English in our chat history.


Yep. Any statement in Python or another language can be mapped to something the machine will do, and it will be the same thing every single time (concurrency and race issues aside). There's no English sentence that can be as clear.

We’ve created formal notation to shorten writing. And computation is formal notation that is actually useful. Why write pages of specs when I could write a few lines of code?


There's also creative space inside the formal notation. It's not just "these are the known abstractions, please lego them together"; the formal syntax and notation are just one part of the whole. The syntax and notation define the forms of poetry (here's the meter, here's the rhyme scheme, here's how the whitespace works), but as software developers we're still filling in the words that fit that meter, rhyme scheme, and whitespace. We're adding the flowery metaphors in the way we choose variable names, the comments we choose to add, and the order in which we define things or choose to use them.

Software developers can use the exact same "lego block" abstractions ("this code just multiplies two numbers") and tell very different stories with it ("this code is the formula for force power", "this code computes a probability of two events occurring", "this code gives us our progress bar state as the combination of two sub-processes", etc).

LLMs have only so many "stories" they are trained on, and so many ways of thinking about the "why" of a piece of code rather than mechanical "what".


Computers only care about the what and have no use for the why. Humans care about both, and the programmer lives at their intersection. Taking a why and transforming it into a what is the coding process.

Software engineering is all about making sure the what actually solves the why, making the why visible enough in the what so that we can modify the latter if the former changes (it always does).

Current LLMs are not about transforming a why into a what. They are about transforming an underspecified what into some what that we hope fits the why. But as we all know from the 5 Whys method, whys are a recursive structure, and most of software engineering is about diving into the details of the why. The what is easy once that's done, because computers are simple mechanisms if you choose the correct level of abstraction for the project.


It's disheartening that a potentially worthwhile discussion — should we invest engineering resources in LLMs as a normal technology rather than as a millenarian fantasy? — has been hijacked by a (at this writing) 177-comment discussion on a small component of the author's argument. The author's argument is an important one that hardly hinges at all on water usage specifically, given the vast human and financial capital invested in LLM buildout so far.


Going to a popular restaurant that accepts app delivery orders (or a grocery store in a neighborhood where people prefer to pay for delivery) is an objectively bad experience. The kitchen or checkout line is backed up with delivery orders, there are a bunch of delivery drivers double-parked or loitering near the front, and due not to any moral failing but rather what must be a crushing grind, the drivers are for the most part rushed and inconsiderate of the staff or other customers.

The class of people who order delivery regularly are generally trading the short-term reward of convenient food for way more money than makes sense, too little of that money benefits the class of people who do the delivering, and as the article points out, it is essentially harming the business it's being ordered from.

I would love to see more restaurants and stores declining to support this kind of system. While there may be some marginal profit now, in the long run the race to the bottom is going to mean fewer sustainable businesses.


At the very least, I make an effort to pick up food in person these days. Saves me a lot of money, is better for the restaurant, and since it's not my livelihood I can just show up a bit early, park properly, and hang around, ensuring that the food will be as fresh as possible when I get home and avoiding any rush.

The animosity I sometimes see between the restaurant staff and the delivery drivers can be really uncomfortable. It's not shocking, they have competing incentives and I think there's a pretty stark class/culture divide, but it's unfortunate when a system like this pits workers against each other that are just both trying to do their job as best they can.


I feel like this needs an editor to have a chance of reaching almost anyone… there are ~100 section/chapter headings that seem to have been generated through some kind of psychedelic free association, and each section itself feels like an artistic effort to mystify the reader with references, jargon, and complex diagrams that are only loosely related to the text. And all wrapped here in a scroll-hijack that makes it even harder to read.

The effect is that it's unclear at first glance what the argument even might be, or which sections might be interesting to a reader who is not planning to read it front-to-back. And since it's apparently six hundred pages in printed form, I don't know that many will read it front-to-back either.


From a rhetorical perspective, it's an extended "Yes-set" argument or persuasion sandwich. You see it a lot with cult leaders, motivational speakers, or political pundits. The problem is that you have an unpopular idea that isn't very well supported. How do you smuggle it past your audience? You use a structure like this:

* Verifiable Fact

* Obvious Truth

* Widely Held Opinion

* Your Nonsense Here

* Tautological Platitude

This gets your audience nodding along in "Yes" mode and makes you seem credible, so they tend to give you the benefit of the doubt when they hit something they aren't so sure about. Then, before they have time to really process their objection, you move on to and finish with something they can't help but agree with.

The stuff on the history of computation and cybernetics is well researched with a flashy presentation, but it's not original nor, as you pointed out, does it form a single coherent thesis. Mixing in all the biology and movie stuff just dilutes it further. It's just a grab bag of interesting things added to build credibility. Which is a shame, because it's exactly the kind of stuff that's relevant to my interests[3][4].

> "Your manuscript is both good and original; but the part that is good is not original, and the part that is original is not good." - Samuel Johnson

The author clearly has an Opinion™ about AI, but instead of supporting it they're trying to smuggle it through in a sandwich, which I think is why you have that intuitive allergic reaction to it.

[1]: https://changingminds.org/disciplines/sales/closing/yes-set_...

[2]: https://en.wikipedia.org/wiki/Compliment_sandwich

[3]: https://www.oranlooney.com/post/history-of-computing/

[4]: https://news.ycombinator.com/item?id=45220656#45221336


https://wii-film.antikythera.org/ - This is a 1-hour talk by the author which summarizes what seems to be the gist of the book. I haven't read the book completely. I read a few sections.

Personally, I think the book does not add anything novel. Reading Karl Friston and Andy Clark would be a better investment of time if the notion of predictive processing seems interesting to you.


I guess I am the odd one out here. Reading it front-to-back has been a blast so far, and even though I find my own site's design a bit more readable for long text, I certainly appreciate the strangeness of this one.


You might prefer this sort of thing: A Definition of AGI https://arxiv.org/abs/2510.18212


Ooh, that looks very cool. A concrete definition of AGI is much needed, along with a scientifically grounded operationalization (in the correct domains) that allows direct comparison between humans and current AIs, one that is neither impossible for humans nor trivially easy for AIs to saturate.


Yes, the word for all of that is "prolix".


I got the same impression. I think I've become so cynical that whenever I see this kind of thing, I immediately assume bad faith / woo and just move on to the next article.

