pfortuny's comments | Hacker News

Tesla has been sued for a similar reason: "full self-driving".

AI companies are selling their products as "perfect" ("better than humans...").

I agree with you in part, but I also agree that they are selling a hammer which can blow up without notice.


I do agree that the companies could do a better job of explaining the dangers, but let's be real here: it's hardly a secret that LLMs can be erratic. It's not news.

Other companies also tell me their product is the best thing since sliced bread. I still try to find the flaws. That's part of my job. But suddenly with LLMs we just blindly trust the companies? I don't think so.

I don't blindly give up my brain and my agency, and no one else should either. It's fun and educational to play around with LLMs. Find out what they are good at. But always remember that you can't predict what they will do, so maybe don't blindly trust them.


We are still in the "ether" era of dark matter: we have not yet had a Michelson-Morley experiment. That's it.

Not that I am saying it does not exist. Only that we do not have the means of falsifying it if it is false.


That's a piruleta (a flat lollipop). The spherical ones are Chupa Chups.

The complexity of binary search in terms of "search" (comparison) operations is exactly floor(log_2(n)) + 1 in the worst case, not O(n). This algorithm just uses modern processor architecture artifacts to "improve" it on arrays of up to 4096 elements.

So not exactly "n" as in O(n).

Also: only for 16-bit integers.
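
For anyone who wants to check that count, a quick sketch (my code, not the article's; it counts one comparison per array probe):

    import math

    def binary_search(a, target):
        """Standard binary search, returning (index or -1, number of probes)."""
        lo, hi, probes = 0, len(a) - 1, 0
        while lo <= hi:
            mid = (lo + hi) // 2
            probes += 1                  # one "search" comparison per probe
            if a[mid] == target:
                return mid, probes
            elif a[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1, probes

    for n in (15, 16, 4096):
        a = list(range(n))
        # worst case over all present keys plus two missing ones
        worst = max(binary_search(a, x)[1] for x in a + [-1, n])
        print(n, worst, math.floor(math.log2(n)) + 1)   # last two columns match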


> The complexity of binary search in terms of "search" (comparison) operations is exactly floor(log_2(n)) + 1 in the worst case, not O(n)

> So not exactly "n" as in O(n).

For large enough inputs, the algorithm with better Big O complexity will eventually win (at least in the worst cases). Yes, sometimes that never happens in practice because the constants are too large for realistic input sizes. But say 100 * n will eventually beat 5 * n * log(n) for large enough n. Some advanced algorithms do switch to algorithms with worse Big O complexity but smaller constants for small enough sub-problems to improve performance, but that's more of an optimization detail than a completely different algorithm.
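
A quick sketch of that crossover with those illustrative constants (log base 2 assumed; these are cost formulas, not measurements):

    import math

    def crossover(better=lambda n: 100 * n, worse=lambda n: 5 * n * math.log2(n)):
        """Double n until the asymptotically better cost is strictly cheaper."""
        n = 2
        while better(n) >= worse(n):
            n *= 2
        return n

    print(crossover())   # 2097152: large, but the better Big O does win eventually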

> This algorithm just uses modern processor architecture artifacts to "improve" it on arrays of up to 4096

Yes, that's my point. It's basically "I made binary search for integers X times faster on some specific CPUs". "Beating binary search" is somewhat misleading; it's more like "micro-optimizing binary search".
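
As a rough, hypothetical stand-in for that idea (numpy's vectorized comparison playing the role of the article's SIMD code; timings depend entirely on your machine and say nothing about the article's actual implementation):

    from bisect import bisect_left
    import timeit
    import numpy as np

    # 4096 sorted 16-bit integers, the size range the article targets
    a = np.sort(np.random.randint(0, 2**15, size=4096).astype(np.int16))
    keys = np.random.choice(a, size=1000)

    def vectorized_lower_bound(arr, key):
        # number of elements strictly less than key == lower-bound index
        return int(np.count_nonzero(arr < key))

    t_scan = timeit.timeit(lambda: [vectorized_lower_bound(a, k) for k in keys], number=100)
    t_bis = timeit.timeit(lambda: [bisect_left(a, k) for k in keys], number=100)
    print(f"vectorized O(n) scan: {t_scan:.3f}s   bisect O(log n): {t_bis:.3f}s")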


Not to corporations, no. You do not need to be charitable to a corporation.

That shows the power of the Spanish FA and Telefonica together.

Always by default I assume.

Unlikely. That would be extremely expensive in bandwidth, storage and compute. Deciding to build the product like that would be an engineering decision I would fire someone for.

Well, say a frame per second. Also: how many of these are there today?
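
Back-of-envelope only, with made-up numbers (frame size and fleet size are guesses, not figures from this thread):

    frame_kb = 100           # assumed size of one compressed frame
    fps = 1                  # "say a frame per second"
    devices = 1_000_000      # hypothetical fleet size

    per_device_gb_day = frame_kb * fps * 86_400 / 1e6
    fleet_pb_day = per_device_gb_day * devices / 1e6
    print(f"{per_device_gb_day:.1f} GB per device per day, "
          f"{fleet_pb_day:.1f} PB per day for the fleet")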

You can discard them after tagging them and using them for training.


Tagging, tagging, tagging. That is what "improving" means here: training its LLMs and diffusion models.

It is because processors do not do what one might naively think they do.

The set of non-invertible answers is of measure 0 (that is the claim). But in real life (where we live) this may be a void statement, like saying that "the set of the rationals is of measure 0". Right, that is true. It is also useless.
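
Assuming the non-invertible objects in question are matrices (my reading, not stated above), a standard illustration of why "measure zero" is cold comfort in floating point:

    import numpy as np

    n = 12
    # Hilbert matrix: invertible in exact arithmetic, numerically almost singular
    H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
    print("condition number: %.2e" % np.linalg.cond(H))

    x = np.ones(n)
    b = H @ x
    x_hat = np.linalg.solve(H, b)   # typically loses most of its accuracy
    print("relative error solving Hx = b: %.2e"
          % (np.linalg.norm(x_hat - x) / np.linalg.norm(x)))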
