Hacker News | michaelmior's comments

The problem in most of those cases is not specifically AI. Many of the issues you cited are related to Anthropic specifically and many could have been avoided with better testing.

Yes, I am assuming the AI/LLM of choice implemented in your software engineering org is Claude, because as far as I can tell there aren't really alternatives that come close to its quality for software.

We have some very heavy users of Codex in my org and we're very happy with the quality (politics aside).

So your thinking in light of this information is just "don't use Claude"?

I don't think it's that simple, but I do think a lot of the problems mentioned are not inherent to the use of AI.

The scenario you're describing seems like more of a language thing than a perception thing. We generally learn names of colors by references to common objects. I would argue that if people agree something is "Red, like a strawberry, tomato, or apple" then it doesn't really matter what you're seeing, that color is red.

Our experience doesn’t become unimportant just because it’s lost in translation. It’s a paradox that we can’t know what X feels like to another person because communication is very lossy, but that does not warrant dismissal. We are not p-zombies, we do feel things.

In fact, the argument that “what we experience doesn’t matter” looks incongruous insofar as it is made by an entity experiencing something and in fact because said entity is experiencing something—the entity has no access to anything but experience.


I'm not saying our experience is unimportant. I'm talking about how we communicate what colors are. I'm not an expert by any means, but it seems like the way we communicate a shared understanding of what colors are is based on observing things that are the same color. I just don't think we have a way of communicating our subjective view of what a color looks like without reference to some other color.

> no reason why future devices couldn't bundle 256GB of mem by default

Cost is a pretty big reason.


This article[0] provides some details. Basically if you go through the lookup process on Apple's website and you don't have an existing D-U-N-S number, you can request one from D&B for free via Apple.

[0] https://support.pushpay.com/s/article/Acquire-your-D-U-N-S-n...


At least part of that in my experience seems to be a desire to cover a number of edge cases that may not be practically relevant.


On this note, one thing I've found Codex to do is worry more than necessary about breaking changes for internal APIs. Maybe a bit more prompting would fix this, but I found even when iteratively implementing larger new features, it worries about breaking APIs that aren't used by anything but the new code yet.


One thing I've found super helpful for this is converting profiling results to Markdown and feeding them back into the agent in a loop. I've done it with a bit of manual orchestration, but it could probably be automated pretty well. Specifically, pprof-rs[0] and pprof-to-md[1] have worked pretty well for me, YMMV.

[0] https://github.com/tikv/pprof-rs

[1] https://github.com/platformatic/pprof-to-md
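I can't speak to pprof-to-md's exact output format, but the Markdown-conversion step can be sketched roughly like this (a hypothetical stand-in, assuming you've already extracted flat sample counts per function from the profile):

```python
def profile_to_markdown(samples: dict[str, int]) -> str:
    """Render flat profile sample counts as a Markdown table, hottest first,
    suitable for pasting into an agent prompt."""
    total = sum(samples.values())
    rows = ["| function | samples | share |", "| --- | ---: | ---: |"]
    for name, count in sorted(samples.items(), key=lambda kv: -kv[1]):
        rows.append(f"| `{name}` | {count} | {100 * count / total:.1f}% |")
    return "\n".join(rows)
```

The sorted table matters more than it looks: putting the hottest frames first keeps the relevant rows near the top of the agent's context.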


Yes but the problem is that the agent reads the profile and doesn't seem to really understand how to improve things. For example, it will see "cycles are spent in GC" and make up a bunch of reasons why that might be happening.


I worry about the costs from an energy and environmental impact perspective. I love that AI tools make me more productive, but I don't like the side effects.


The environmental impact of AI is greatly overstated. The average person will make a bigger positive impact on the environment by reducing their meat intake by 25% than by giving up flying and AI use combined.


Is this before or after you account for the initial training impact? Because that would need to be factored in for a good faith calculation here, much as the companies would rather we didn't.


> This is the literal opposite of professionalism

I'm curious what definition the author is using of professionalism.


> I'm surprised that Cloudflare hasn't started hosting a pre-scraped version of websites that use Cloudflare's proxy

It's entirely possible that they're doing this under the hood for cases where they can clearly identify the content they have cached is public.


How would they know the content hasn’t changed without hitting the website?


They wouldn't, strictly speaking. There's ETag and the like, but that's still a layer-7 round trip to the origin. The general pattern, though, is for the origin to say how long the content is good for in the response headers, and for the CDN to cache it for that duration. For example, a Bitcoin pricing aggregator might say a page is good for 60 seconds (with disclaimers on the page that this isn't market data), while My Little Town News might say an article is good for an hour (to allow updates) and the homepage is good for 5 minutes so breaking news doesn't appear too far behind.


Keeping track of when content changes is literally the primary function of a CDN.


Caching headers?

(Which, on Akamai, are by default ignored!)


Based on the post, it seems likely that they'd just delay per the robots.txt policy no matter what, and do a full browser render of the cached page to get the content. Probably overkill for lots and lots of sites. An HTML fetch + readability is really cheap.
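Reading the crawl delay out of robots.txt is cheap with Python's standard library; a minimal sketch (the bot name and robots.txt content are made up):

```python
from urllib.robotparser import RobotFileParser

robots_txt = """
User-agent: examplebot
Crawl-delay: 10
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())
# Seconds to wait between fetches, or None if the policy sets no delay.
delay = parser.crawl_delay("examplebot")
```
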

