2024-10-24

Learning and Supervision

// A comment on current gripes about LLMs not executing proper reasoning. //

This is pretty obvious from the architecture anyway: it is just about the most inefficient way to implement logic. Humans are a very inefficient implementation of logic too (unless we one day discover that neurons for logic are highly specialised in a uniform way across individuals, which seems rather unlikely based on science to date). This has been my whole gripe about wasting energy : why try to imitate the worst part of the old, and call it a great achievement? Lmao.

Previously I would refer to this as an overuse of unsupervised learning, and an underuse of supervised learning. My understanding is that most humans acquire their reasoning capabilities through highly supervised training methods. We simply have to remember which is left and which is right; there is no universal left and right. The same arbitrary rigour is needed when it comes to the meaning of the letter A, the assignment of rights to humans, and the selection of gender and national identity. It is only after supervised learning anchors basic cultural assumptions in elementary education that we teach students to unlearn and experimentally mess with fundamental definitions, for the sake of research and epistemology about truth, culture, and economic values.

Segue to that thread about Cook being able to change his mind, and the question of whether that is a rare or common trait among plebs, vs among leaders. In the micro-history of OpenAI, success was achieved by marketing to the popular notion that "intelligence is the ability to infer something out of nothing", that being the popular notion of what a "clever person or genius" is. Unsupervised models were banked-on whole-hog, despite their inefficiencies. That this particular concept of "cleverness or genius" is inherently incomplete is an often-discussed trope. The standard dialectical counterpart is, "all stand on the shoulders of giants". In both humans and machines, the efficient genius does not reinvent every element of his software ... one does not unnecessarily reinvent the wheel, a lingua franca, an industry, or any paradigm whatsoever. Rather, one begins by reviewing the commonly accepted facts as a "canon", after which one engages in "research about the canon".

This also illuminates the commonality between trolls and the early popular models from OpenAI : they basically ignore cultural anchors, and present paradigms which are locally-logical, but which may from time to time fly in the face of common assumptions. That is why both are funny. Hands up, if you know what I mean. :)

For the uninitiated : it should be noted that efficient software for reasoning has existed for decades in the less popular field of automated theorem provers - this stuff is pretty much "first nature" for Turing machines, whereas it's second nature to us meaty beings.
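To make the point concrete, here is a toy illustration of the kind of mechanical reasoning such provers do - a minimal propositional resolution procedure, sketched in Python. This is my own illustrative sketch, not the API of any real prover; the clause encoding (frozensets of string literals, with "~" for negation) is an assumption chosen for brevity.

```python
from itertools import combinations

# Toy resolution prover (illustrative sketch only). A clause is a frozenset
# of literals; a literal is a string like "p" or "~p". To prove a goal
# literal, add its negation and search for the empty clause.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Yield every resolvent of two clauses."""
    for lit in c1:
        if negate(lit) in c2:
            yield frozenset((c1 - {lit}) | (c2 - {negate(lit)}))

def entails(clauses, goal_lit):
    """True if the clause set entails the single literal goal_lit."""
    clauses = set(clauses) | {frozenset([negate(goal_lit)])}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:          # empty clause: contradiction derived
                    return True
                new.add(r)
        if new <= clauses:         # fixpoint: nothing new to derive
            return False
        clauses |= new

# modus ponens: from p and (p -> q), i.e. the clause ~p OR q, derive q
kb = [frozenset(["p"]), frozenset(["~p", "q"])]
print(entails(kb, "q"))   # True
print(entails(kb, "r"))   # False
```

The point of the sketch: every step is a blind, mechanical set operation with zero "inference out of nothing", yet the machine derives q with perfect rigour - exactly the sense in which logic is "first nature" to a Turing machine.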

xref : 2022,  
