2025-07-02

Science Machine

LLMs aren't "strict" rule-based systems, which is why they can't reliably do formal logic. When they do manage it, the rule was implemented via a hugely inefficient learning process, when it's much simpler to just code the logical rule in NAND gates lol.

Hence that thing where, if you want an LLM to behave more reasonably, you have to put it in series with a proof assistant / automated theorem prover (PA/ATP) and loop it. Basically what meat people do: the random generation of ideas and the culling of dumb ones are separate parts of an iterative process.

- LLM : make informed guesses
- PA/ATP : check logical coherence
- Loop

In other words :

- 'generative' hypothesis proposal 
- hypothesis falsification attempt
- loop it
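Both framings describe the same generate-and-test loop. A minimal sketch in Python, where `llm_propose` and `checker_accepts` are hypothetical stand-ins for the LLM's informed guessing and the PA/ATP's strict checking:

```python
import random

def llm_propose(rng):
    # Stand-in for the LLM: generate a hypothesis.
    # (Here just a random guess; a real LLM would be better informed.)
    return rng.randint(0, 100)

def checker_accepts(candidate):
    # Stand-in for the PA/ATP: a strict, rule-based falsification attempt.
    # Toy "theorem" to satisfy: candidate is a perfect square greater than 1.
    root = int(candidate ** 0.5)
    return candidate > 1 and root * root == candidate

def science_machine(max_iters=10_000, seed=0):
    # The loop: propose, try to falsify, repeat until a guess survives.
    rng = random.Random(seed)
    for _ in range(max_iters):
        guess = llm_propose(rng)
        if checker_accepts(guess):
            return guess
    return None  # no surviving hypothesis within the budget
```

The division of labor is the point: the proposer can be sloppy and creative, because the checker is the only component that decides what survives.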

Tada : science machine

( This note is from a chat, which didn't cover the phenomenology concern, i.e. sensory data structures. Of course you need that too, for science, and for anthropomorphy. )
