2026-02-23 at

Reminders on the legal personhood and liability of AI

Governance long game:

  • "who has legal liability?"
  • "the only difference between AI and a meat person becomes a legal distinction - literally a social class status"

And before AI is granted legal personhood:

  • "if you press the self-driving button" ->
    • either "you the driver",
    • or "the cloud operator + edge process maintainer"
  • -> "accept direct legal responsibility for the actions of the car"

AI formation of scientific hypotheses

(These notes were in response to a comment that we should test AGI by seeing if it can reinvent general relativity.)

That's not how it works, however. You need to give the system access to empirical inputs, not just library sources.

Scientific theory development is based on both

1. an existing theory

2. empirical observations that appear to falsify 1, leading to calibrated revision attempts that produce new theories, with or without further experimentation

"GR was hypothesised from thought experiments, not empirical experiments"

2.1. Thought experiments are done with spatial processing, not with stochastic parrots. The processing algorithm has to actually do physics, including spatial simulations in thought. LLMs don't do that; LLMs only process data structures which are semantic, not spatial. You need spatial data structures in thought to do spatial simulations in thought. Industry will do more of this soon.

2.2. All of the foregoing is based on reflection on empirical observations.

2026-02-22 at

mutate and cull

I suppose the best way to introduce the LLM-ATP (automated theorem prover) relationship in neuroanatomical terms is that it is analogous to the DMN-CEN (default mode network - central executive network) relationship: one component generates candidates broadly, the other evaluates and filters them.
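One way to read the "mutate and cull" pairing is as a generate-and-verify loop: a generator role (the LLM / DMN side) proposes candidate variants, and a verifier role (the ATP / CEN side) ranks and discards them. The sketch below is purely illustrative, with a toy numeric predicate standing in for formal verification; nothing in the note specifies an actual implementation, and all function names here are assumptions.

```python
def mutate(candidates):
    """Generator role: propose neighbouring variants of each candidate."""
    return [c + d for c in candidates for d in (-1, 1)]

def cull(candidates, score):
    """Verifier role: rank candidates and keep only the best half."""
    ranked = sorted(candidates, key=score)
    return ranked[: max(1, len(ranked) // 2)]

def search(target, start=0, steps=50):
    """Alternate mutate and cull until a candidate squares to target."""
    population = [start]
    score = lambda c: abs(c * c - target)  # toy stand-in for a proof check
    for _ in range(steps):
        population = cull(population + mutate(population), score)
        if score(population[0]) == 0:
            break
    return population[0]
```

For example, `search(49)` descends from 0 to a root of x^2 = 49 by repeatedly mutating and culling. In the real LLM-ATP setting, `mutate` would be a sampling step and `score` a theorem-prover call, both far more expensive than this toy loop.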