AI governance will eventually take the notion of "deterministic outcomes" more seriously.
Current technical approaches are built on non-deterministic fundamentals, which is why there is so much confused tolerance for governing AI "conversationally", the way we govern humans. Eventually there will probably be classification standards that establish the degree to which a synthetic mind's output is "deterministic" or "non-deterministic".
Legal governance will depend on this. "Deterministic systems" will probably be regulated as tools, with responsibility falling on the tool-user, who is a legal person. "Non-deterministic systems", by contrast, are ultimately black boxes just like meatheads, and it may make more sense to establish for them "gradients of legal personhood, based on standards of maturity", analogous to how we traditionally govern human children.