
A "causal model" is needed to fix bugs, i.e., to "root-cause" a bug.

LLMs don't yet have a built-in causal model of how something works. What they do have is pattern matching over a large index and generation of plausible answers from that index. (Aside: the plausible snippets are of questionable licensing lineage, since the index can contain public code under restrictive licenses.)

Causal models require symbolic machinery that can generate hypotheses and test and prove statements about a world. LLMs are not yet capable of this, and the fundamental architecture of the LLM machine is not built for it.

Hence, while they are a great productivity boost as a semantic search engine and a plausible-snippet generator, they are not capable of building (or fixing bugs in) a machine that requires causal modeling.
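To make the "root-causing" loop concrete, here is a toy sketch of hypothesis-driven debugging: propose candidate causes, test each against the observed failure, and keep only the ones that survive. Everything here (the buggy function, the hypotheses) is a hypothetical illustration, not a real framework:

```python
def buggy_divide(a, b):
    # The bug under investigation: crashes when b == 0.
    return a / b

def reproduces_crash(a, b):
    # An "experiment": does this input reproduce the observed failure?
    try:
        buggy_divide(a, b)
        return False
    except ZeroDivisionError:
        return True

# Candidate causal hypotheses, each paired with a test of its prediction.
hypotheses = [
    ("negative numerator causes the crash", lambda: reproduces_crash(-1, 2)),
    ("zero denominator causes the crash",   lambda: reproduces_crash(1, 0)),
    ("large inputs cause the crash",        lambda: reproduces_crash(10**9, 3)),
]

# Eliminate every hypothesis whose prediction fails.
surviving = [desc for desc, test in hypotheses if test()]
print(surviving)  # only the zero-denominator hypothesis survives
```

The point of the sketch is the loop itself (hypothesize, predict, experiment, eliminate), which is exactly the kind of closed-loop interaction with a world that plain next-token generation does not perform.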



>Causal models require symbolic machinery that can generate hypotheses and test and prove statements about a world. LLMs are not yet capable of this, and the fundamental architecture of the LLM machine is not built for it.

Prove that the human brain does symbolic computation.


We don't know what the human brain does, but we know it can produce symbolic theories or models of abstract worlds (in the case of math) or real worlds (in the case of science). It can also produce the "symbolic" Turing machine, which serves as an abstraction for all the computation we use (CPU/GPU/etc.).





