
I don't think ranking new information according to held beliefs is necessarily that big a step towards AGI (admittedly, strong NLP may be, though).

There's no fundamentally creative step in deciding "how well does this new piece of information match the pieces I previously had?"
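To make the point concrete: the kind of matching described here can be sketched mechanically, with no creative step involved. Below is a minimal illustration (not from the comment itself; the function names and the bag-of-words similarity measure are my own assumptions) that scores each new statement by its best overlap with a set of held beliefs.

```python
from collections import Counter
import math

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def rank_against_beliefs(new_items: list[str], held_beliefs: list[str]):
    """Score each new statement by its best match among held beliefs,
    highest-scoring (most consistent with prior beliefs) first."""
    belief_vecs = [Counter(b.lower().split()) for b in held_beliefs]
    scored = []
    for item in new_items:
        vec = Counter(item.lower().split())
        score = max((cosine_similarity(vec, bv) for bv in belief_vecs),
                    default=0.0)
        scored.append((score, item))
    return sorted(scored, reverse=True)

beliefs = ["the sky is blue", "water boils at 100 degrees"]
new_info = ["the sky is often blue", "stocks rose sharply today"]
for score, item in rank_against_beliefs(new_info, beliefs):
    print(f"{score:.2f}  {item}")
```

This is purely a lookup-and-compare procedure; the hard part (and arguably the AGI-adjacent part) is the NLP needed to represent statements well enough for the comparison to be meaningful.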


