Journal articles are sometimes years behind. There are still papers coming out that use GPT-3.5 (!) for their main result. These days I'm basically only reading arXiv preprints (and whatever is trending on GitHub).
A slight oversimplification: LLMs can also generate the most statistically plausible textual sequence, which may be a sequence not found in the training data but rather a synthesized combination of the likely continuations of multiple preceding sets of tokens. But yes, that is in fact what it is doing. Computer software does what it is programmed to do, and LLMs are not programmed to do logical inference in any capacity; they operate entirely on probabilities learned from a mind-bogglingly large corpus of text (influenced by things like RLHF, which is still just massaging probabilities).
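To make the point concrete, here is a minimal sketch of the sampling step being described: the model maps a context to a probability distribution over next tokens and draws from it by weighted choice. The context, tokens, and probabilities below are entirely made up for illustration; a real LLM computes the distribution with billions of parameters, but the final step is still just sampling.

```python
import random

# Hypothetical hand-made distribution: given a context, probabilities
# for each candidate next token. A real model computes this; it does
# not look it up.
NEXT_TOKEN_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "quantum": 0.1},
}

def sample_next(context, rng=random.random):
    """Draw one next token by inverse-CDF sampling over the distribution."""
    dist = NEXT_TOKEN_PROBS[context]
    r = rng()
    cumulative = 0.0
    for token, p in dist.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fall through for floating-point edge cases
```

There is no step anywhere in this loop that checks whether the output is logically entailed by the context, which is the crux of the argument above.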
The claims about solving Erdős problems have been wildly overstated, and notably pushed by people who have a very large financial stake in hyping up LLMs. Nonetheless, I did not say that LLMs are useless. If they are trained on sufficient data, it should not be surprising that correct answers are probabilistically likely to occur. Like any computer software, that makes them a useful tool. It does not make them in any way intelligent, any more than a calculator would be considered intelligent despite being completely superior to human intelligence at its given task.
Honestly, big noob question: isn't math just very, very nested pattern matching based on a few foundational operators?
I've always felt that I'm bad at math because I forget all the rules, but seeing solutions (and knowing the pattern used) always made "sense".
I always thought the hard math problems are so deeply nested, or require remembering trick xyz, that people just didn't think of them yet.
The number of mathematical structures and transformations you can apply (the possible rules) is effectively infinite. Simply remembering the rules might work at first, but you'll soon run into the combinatorial explosion: https://en.wikipedia.org/wiki/Combinatorial_explosion
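A back-of-the-envelope sketch of why rule memorization stops scaling, under the simplifying assumption that each derivation step can apply any one of a fixed set of rules (real proof search is messier, but the growth is the point):

```python
# If each step of a derivation can apply one of `rules` transformations,
# a brute-force search of depth `depth` must consider rules**depth
# candidate rule sequences. Growth is exponential in depth.
def search_space(rules: int, depth: int) -> int:
    return rules ** depth

# Even a modest 10 applicable rules over a 15-step derivation yields
# 10**15 = 1,000,000,000,000,000 candidate paths.
```

This is why both humans and machines rely on heuristics to prune the search rather than remembering and enumerating every rule combination.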
You could go a step further, and simply say "well, ok, then the LLMs are merely doing some form of incremental/heuristic search!". Yes, but at that point you'd also be hard-pressed to claim that humans themselves are doing anything beyond that. You run out of naturalistic explanations.
If by "not up for debate" you mean that it is delusional, and literally evidence of psychosis, to suggest that computer software is doing something it is not programmed to do, you would be correct. Probabilistic analysis can carry you very, very far toward something that looks like logical inference at the surface level, but it is nonetheless not logical inference. LLMs have been getting increasingly good at factoring in longer contexts while still generating plausibly correct answers, becoming more and more useful all the while, but they are still not capable of logical inference. This is why your genius-mathematician AGI consciousness stumbles on trivial logic puzzles it has not seen before, like the car wash meme.
> Probabilistic analysis can carry you very, very far in doing something that looks like logical inference at the surface level, but it is nonetheless not logical inference.
A statistical approximation of logical inference (however vaguely I state it) could (and will) very well pass for logical inference, at least for most people, whose own logic skills are far from perfect.
Also, humans are certainly not capable of the perfect logical inference you speak of (and I get the irony of saying so with such certitude). Logic is still framed in axioms that are framed in languages; we'll never truly get there. Ah, but absoluteness gets in the way of practicality.
Yet here we are with a tool, maybe not yet at its prime, that equals or beats many human beings at logical inference on problems that are pragmatically relevant. Should I call those symptoms of logical inference at that point?
As to why LLMs' capacity for (apparent) logical inference is limited to specific use cases, I don't have a clue. But I'd argue that humans are like that too.
Well, I'm not clairvoyant, but this is a very easy prediction to make. And we're not talking about decades in the future, this is simply a matter of letting the near-future unfold.
>wanting the freedom to copy artists’ works when using it
Learning from copyrighted content is legal - for both humans and AI. If Meta is in hot water for anything, it's piracy and/or storage of copyrighted material.
Huh? Then that's even more straightforward, and your comment from late 2022 doesn't hold up at all. So, unless you're specifically going out of your way to break copyright law, inference is totally fine.
Maybe it is time for an internet divorce. Permanently cut it in half between those who are ok with AI and those who are not. If it were up to me, I'd never want to hear from the latter group again.
A few short paragraphs in, and this author is already mumbling something about Muslims and trans people. Again showing that 99% of anti-AI activism is nothing more than a new issue for the far left.
All else being equal, this raises my confidence in both Dawkins in general and whatever the hell he said about AI consciousness.
What I don't understand is how half the comments are calling out how bad the content is, yet it's somehow 4th on the frontpage?
It looks like generic AI slop. The site doesn't even render the headings of its SEO-spam "Curated AI Tool Collections by Use Case" section properly; they're half cut off. The images all have the very distinct generic AI hue, without any attempt at bringing them into a specific style or brand.
Who is upvoting this stuff? Do people not care? Is it just bots gaming the system? Am I an old man shouting at a cloud?