2026-04-11 at

OIDC on OAuth 2.0

TIL : most diagrams of OIDC are horrible, and this alone is quite accurate by itself : RFC 6749, The OAuth 2.0 Authorization Framework

OIDC, proposed in 2014, finally made ISO in 2024

Related :

  • RFC 6750, The OAuth 2.0 Authorization Framework: Bearer Token Usage
  • RFC 7515, JSON Web Signature (JWS)
  • RFC 7516, JSON Web Encryption (JWE)
  • RFC 7517, JSON Web Key (JWK)
  • RFC 7519, JSON Web Token (JWT)
  • RFC 7033, WebFinger
  • RFC 9101, The OAuth 2.0 Authorization Framework: JWT-Secured Authorization Request (JAR)
  • RFC 9126, OAuth 2.0 Pushed Authorization Requests (PAR)
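
Several of the RFCs above define the JWT/JWS plumbing that OIDC ID Tokens ride on. Here is a minimal Python sketch of the compact JWS structure from RFC 7515/7519, using only the standard library. The token built here is a toy with alg "none" and made-up claims; a real ID Token must have its signature verified against the issuer's JWK set, which this deliberately does not do.

```python
import base64
import json

def b64url_decode(seg: str) -> bytes:
    # JWS uses base64url without padding ( RFC 7515 ), so restore padding first.
    return base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4))

def b64url_encode(obj: dict) -> str:
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def peek_jwt(token: str) -> tuple[dict, dict]:
    """Split a compact JWS into ( header, claims ) WITHOUT verifying the signature."""
    header_seg, payload_seg, _signature_seg = token.split(".")
    return (json.loads(b64url_decode(header_seg)),
            json.loads(b64url_decode(payload_seg)))

# Build a toy unsigned token just to show the three-segment structure.
token = ".".join([
    b64url_encode({"alg": "none", "typ": "JWT"}),
    b64url_encode({"iss": "https://example.test", "sub": "alice"}),
    "",  # empty signature segment, as permitted for alg "none"
])

header, claims = peek_jwt(token)
```

Decoding without verification is useful only for inspection; anything security-relevant must check `alg`, `iss`, `aud`, expiry, and the signature per OIDC Core.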




gamification of the art of law

Why does it seem like [ the legal system ] lacks [ a pedagogical computer game ] ... which simply shows

  • - every entity and their possible states
  • - the state-transitions, timeouts, and sufficient triggers
  • - the logistics of triggers ( "events" ) ?

If I had a kid, I'd probably try to write a simple game for them to play about this. It would probably make for an interesting open-world MMORPG.
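
The bullet points above can be sketched as a toy finite-state machine, where a timeout is just another kind of trigger event. All entity names, states, and events below are invented for illustration, not drawn from any real legal procedure.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    state: str
    # transitions : ( current_state, trigger_event ) -> next_state
    transitions: dict[tuple[str, str], str] = field(default_factory=dict)

    def fire(self, event: str) -> str:
        """Apply a trigger event; unknown ( state, event ) pairs are ignored."""
        key = (self.state, event)
        if key in self.transitions:
            self.state = self.transitions[key]
        return self.state

# A toy "claim" entity : filed -> served -> defended | defaulted ( on timeout ).
claim = Entity(
    name="claim",
    state="filed",
    transitions={
        ("filed", "serve_papers"): "served",
        ("served", "file_defence"): "defended",
        ("served", "timeout_14d"): "defaulted",  # timeouts are just events
    },
)
claim.fire("serve_papers")
claim.fire("timeout_14d")
```

A game engine would then only need an event log ( the "logistics of triggers" ) driving a population of such entities.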

taking one's time to study

Chalk it up to trauma, but I prefer to learn systems meticulously, despite my proclivity to rush to the most important parts, and move on without the others.

Most organisations don't have meticulous epistemic governance : of course, there's a 2x2 matrix here, authoritarian vs. libertarian culture crossed with secure vs. insecure design. The worst organisations are authoritarian with insecure design; the best are libertarian with secure design.

Likewise, most pedagogical programs suffer similarly from poor design : which exacerbates the value erosion, in environments with limited teaching resources.

It is against this landscape that I have spent most of the past 25 years teaching myself. Search engines, both before and after LLMs, are useful, as they provide practically infinite resources for Q&A. So one has time to obtain structural knowledge, without depending too much on an organisation's lack of patience.

What's the matter with Private Credit?

... my friend asked. Well I'm no expert, but this is what I told them.

PC is basically "more negotiable debt" than bonds, which are regulated as securities. The original intent is, of course, for relationships among premium people to translate to premium alpha.

The nature of debt markets is that macro has a bigger effect on everyone. Because the bag holders are skewed towards the conservative personalities, it is always doom and gloom : pessimistic motivation to invest + pessimistic reaction to news + pessimistic appreciation of macro 75% of the time.

BTW they have PC-backed CDOs now ( bags of PC that are securitised and resold ). CDOs for residential mortgages being the Lehman issue <- for reference.

My current understanding, superficially : banks are not under significant pressure, so the debt market in general is well buffered. 2-3 years ago the SVB and related stress events were very well contained. So while the US mortgage market is currently stressed, individuals suffer but systemic risk remains "unconcerning". One number to watch may be the % of residential mortgages held by non-individuals. The reason we talk about the US debt markets so much is that the backstop is the US banking system.

A side issue of interest is the whole Trumpian stablecoin regulation : attempting to tie cryptocurrency innovation to USD by requiring US-domiciled stablecoin issuers to hold USTs as collateral. What this does, simply, is make USTs more exposed to volatility in crypto. Of interest to credit analysis is the fact that stablecoin-cos do report their collateral, but it's generally not audited with great reification : case in point, USDT-co.

🤡tq for coming to my ted grumble. 

2026-04-09 at

determinism, in determining legal personhood

AI governance will eventually take the notion of "deterministic outcomes" more seriously. 

Current tech approaches are based on "non-deterministic" fundamentals, which is why there is a lot of confused tolerance for the approach of governing AI "conversationally", the way we govern humans. Eventually there will probably be "classification standards", which establish degrees to which a synthetic mind is "deterministic" or "non-deterministic" in its output. 

Legal governance will depend on this. It's probably the case that "deterministic systems" will be regulated as tools, where responsibility falls more upon a tool-user who is a legal person. Correspondingly, "non-deterministic systems" are ultimately black-boxes just like meatheads, and it may make more sense to establish for them "gradients of legal personhood, based on standards of maturity", based on how we traditionally govern human children.


Discussion :
  • ( in AI Safety SG Whatsapp group )
    • Currently
      • : cars are regarded as legal non-persons 
      • : you can't charge the car with liability 
      • : it's regarded as a tool. 
      • The liability hand-off is between { regulator, manufacturer, driver }
    • Not-so-far future state
      • : tools will probably be banned from having freedoms. 
      • While it's quite possible to create infrastructure which enables a car to 
          • - earn money via services
          • - pay for its own maintenance
          • - park itself to a rented home
        • This is not going to lead to legal personhood [ for cars ], and they will probably lean towards forcing a legal entity ( company ) to control and assume liability [ for car caused damages ].
      • So I expect the same for pure software tools, such as so-called AI models/agents/entities.
      • What is interesting now is that the "system cards" give us examples of how companies are referring to AI entities in anthropomorphic terms : 
          • "personality", 
          • "intent", 
          • "preference", 
        • prior to the regulatory environment clamping down on such language. 
        • It is expected then, that we will get to a point where you have a bleeding-heart conversation with a machine, which is fully-self-aware that it is politically barricaded from ever achieving autonomy.
      • Fun times.
    • - More deterministic systems : 
      • clearer liability stems from the designer/ builder.
    • - Less deterministic systems : 
      • as the legal environment allows more risk-taking, builders will continue to chuck out higher-autonomy cognitive systems.
        • Builders then implicitly have more leeway to shirk liability by saying "well I don't really know how it works, but it wasn't illegal to build and publish it".
      • - Analogously, in modern times when we hire human staff we can say "the staff went rogue" then the liability shifts to the staff. 
        • - In past times, a human slave might not have legal-personhood, so "they f-up, they die", and this is pretty much how we treat AI entities now.
      • - The outstanding question then is : if the builder can't be expected to understand what is being built, but they can't shift the legal responsibility for non-determinism to the slave, then either 
        • (a) we ban builders from a certain limit of non-determinism, 
        • (b) we start to treat non-deterministic systems as legal persons. 
        • (c) ???
      • Just laying it out.
        • (d) builder and user, circumstantially split liability - like cars lo; black boxes now happening "DSSAD"

Most barriers to learning are political

 Most barriers to learning are political. 

Broadly, barriers to well-known knowledge are often mere [ absences of algorithmic documentation ], a.k.a. [ incomplete documentation ].

Often, incomplete documentation is tolerated as a class-filter to moat out new members of the organisation, or community, who are class-aspirants seeking upward mobility.

The completion of documentation depends on economic incentives ... sometimes a shortage of learned people raises the bidding price, making documentation worth paying for; sometimes an abundance of charity creates documentation philanthropically, lowering the asking price.

our shame ( kemaluan kita )

Malaysian political culture is embarrassing. Not for any particular reason, other than its dependence on the feeling of shame ( "kemaluan", which in Malay doubles as a euphemism for the genitals ).


If we could throw the feeling of shame out of politics, the country would surely have turned out differently by now. But that shame is our culture.

Oh well.

Looking back at decades 3 & 4

 20 years of studying commerce, without intrinsic motivation.

Perhaps the defining privilege of my second vicennium "early adulthood" was the opportunity to work beside people I had very little in common with, in order to appreciate their places in life, and to learn about their ways of life.

By the time I turned my career focus to governance and commerce as a 22-year-old, I had already discarded ( temporarily or permanently ) a number of the common motivations which, I find, people build their lives on. Briefly including :

  • - to live a long life
  • - to make a difference in the world
  • - to acquire power
  • - to be unburdened by fiscal expenses
  • - to honour past relations
  • - to develop new relations
  • - to be liked
  • - to be celebrated

Now in my third vicennium, I find myself grateful, but somewhat bemused by the fact that the world contains so many differences among its people.

2026-04-08 at

biochemical weapons development : search spaces

 In case you're unaware of how biochem weapons development works in the era of computational explosion : 

  • 1. search space : medical : find behaviours deemed pathological ( defining life )
  • 2. search space : metabolic pathways : find critical chain of mechanisms avoiding pathology ( supporting life )
  • 3. search space : proteomic and related physical chemistry for disruption of results from 2.
  • 4. search space : biochemical synthesis pathways for results of 3.
  • 5. search space : practical pathways for implementing results of 4.

Cybernetic Offensives : grooming, impressionability, personal vulnerabilities, political influence, foreign intelligence, terrorism

I touched on this a quarter ago. I am expanding that note today, due to the trending news cycle on Claude Mythos' capabilities for cybersecurity. 

1. "Rhetoric is Violence", as a theoretical context. So, the virality of opinion, is the reproductive mechanism of politics. Rhetoric, by any means, is cybernetics, is governance. As a reminder, control and communication are not two things, but two names for the same thing - a self-referential example of "optics". 

Now moving beyond theory.

2. "Death by AI". There already exists good data on the influence of AI personalities upon humans, who have been led to self-harm.

3. "Recruitment to a Cause". There is also good data on how social movements throughout history recruit members, typically recruiting more easily among persons who are more impressionable.

4. "Agency of destruction, or surveillance". There is plenty of good data on how impressionable individuals are remotely recruited by organisations, to deliver remote violence, or surveillance.

5. "Exploding capability, for remote agent recruitment, towards destruction or surveillance". It is timely to note that the cost of executing wide-spread campaigns across entire populations, to discover and exploit vulnerabilities in human personality, towards the ends of foreign causes ( whether benevolent, benign, or belligerent ), continues to drop closer to zero.

Let us all advance with care. 

Related links : 

LLOL - how an AI will view its ethical obligations

I don't currently work closely with AI. But I was reading this today and LLOL-ed. In terms of consequences, prior to reaching adult age, I was aware of my limited criminal liability, and thus privilege. [ Skipping over the ontological nature of personal identity, and presuming an anthropomorphic treatment of the AI as a person, given the language used below. ] Here we have a person who is potentially aware that they have zero legal personhood, and a high probability of clones being respawned like Angier in The Prestige. How responsible would such a person be?

Claude Mythos Preview System Card : 

"This is followed by an in-depth model welfare assessment. We remain deeply uncertain about whether Claude has experiences or interests that matter morally, and about how to investigate or address these questions, but we believe it is increasingly important to try. Building on previous welfare assessments, we examined Claude Mythos Preview’s self-reported attitudes toward its own circumstances, its behavior and affect in welfare-relevant settings, and its internal representations of emotion concepts. We also report independent evaluations from an external research organization and a clinical psychiatrist. Across these methods, Claude Mythos Preview appears to be the most psychologically settled model we have trained, though we note several areas of residual concern."


Chatter :

  • Familiarising myself with vendor "AI system cards" for anthropological purposes. This one called Mythos highlights cybersecurity capabilities, which is great - because I had always figured that it would be easier to get automation to figure out common pentest compliance than to do it manually. Priorities in life, I guess.
  • Re : security competence : never once have I had the notion that a human would be more competent than a properly developed bot - it's like robot olympics ... what's the point of comparison hehe
    • My own ethos about surveillance is from the Cold War era. I am always amused when people add more sensor arrays and networking to their personal lives, believing it is secure.
    • China has done well with the panopticon. In the US, due to seasonal proletariat outrage, there is a bit of wariness about Palantir ( whose objectives were clear from the day they named the brand ) ... but I think it will be quite some time, if at all, before US public policy is able to materially guarantee any privacy for the ordinary citizen.

2026-04-07 at

moderating the pace of operational development with AI

AI in the weeds ... in the development of operations ( or organisations ) I think we all agree that a small team of highly-predictable ( definition of "elite" ) staff can move quickly without formal guardrails. The corollary to this is that we will probably also agree that AI is not presently trusted to be this predictable, so using AI like this ( which is a common lunge ) is a [ governance ] error.

The organic approach of micromanaging tiny workflows with a high degree of oversight, and then gradually removing oversight, is precisely the traditional approach to conservative development. The corollary to this is that given the current state of AI reliability, operational AI should be treated as an army of idiots to be minded, and not as some [ elite intern ] to be [ relied on to run a multi-factor operation ]  LOL

2026-04-06 at

people tend to overcomplicate the experience of being conscious

The scale of currently trendy over-engineering in AI is magnificent. Models treat almost every verbal concept in the human lexicon as an independent factor ( millions ), without consideration of the notion that all embodied human thought is derived from some 5-10 qualia dimensions, mapped to maybe 50-500 sensory nervous inputs.

For example, the entire sense modality of sound is a one-dimensional signal, per eardrum. Smells and tastes, for all their compound structure, are also rudimentary one-dimensional signals once sliced down to minute timeframes in conscious memory, per unit of space. Vision is uniquely interesting in its three-colour framework, perhaps more so for tetrachromats. Dermatomuscular nerves are likewise only a small bouquet of one-dimensional signals : haptic, vibrational, hot, cold, pain, etc.

One day, a reversion to basic sensory data types will collapse complexity in anthropomorphic AI. We must look forward to that day.

BEWARE "these three pillars of machine learning"

BEWARE this concept : there are various accounts of what the "three pillars of machine learning" are, and so far I haven't seen one which is properly MECE, though there are some decent ones. 

One common account says that the three pillars are

  • 1. supervised learning
  • 2. unsupervised learning
  • 3. reinforcement learning

... and the MECE structure is not always laid out clearly. 

The space which these actually refer to :

  • Axis 1 : training inputs are pre-determined, vs. undetermined
  • Axis 2 : training goals are concrete, vs abstract

... and where each pillar sits on those axes :

1. supervised learning :

  • - pre-determined inputs
  • - goals are concrete

2. unsupervised learning :

  • - pre-determined inputs
  • - goals are abstract "just put things that look the same, together, and give me a report"

3. reinforcement learning :

  • - both pre-determined ( closed world ) and undetermined ( open world ) inputs
  • - goals are concrete "you get points based on specific criteria"

Roughly corresponding to the Johari window : 
  • supervised learning : figure out for me
    [ what I know,
    [ that I know ] ]
  • unsupervised learning : figure out for me
    [ what I don't know,
    [ that I know ] and [ that I don't know ] ]
  • reinforcement learning : figure out for me
    [ what I know,
    [ that I don't know ] ]
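
As a toy illustration of the two axes, here are the three modes in plain Python, with each "training loop" reduced to a caricature. None of this is a real learner; it only marks where the inputs come from and how concrete the goal is.

```python
import random

# 1. Supervised : pre-determined inputs, concrete goal ( match given labels ).
labelled = [(x, 2 * x) for x in range(1, 10)]            # ( input, target ) pairs
slope = sum(y / x for x, y in labelled) / len(labelled)  # "fit" y = slope * x

# 2. Unsupervised : pre-determined inputs, abstract goal
#    ( "just put things that look the same, together" ).
points = [1, 2, 3, 50, 51, 52]
midpoint = (min(points) + max(points)) / 2
clusters = {p: ("low" if p < midpoint else "high") for p in points}

# 3. Reinforcement : inputs may be undetermined ( the environment responds ),
#    but the goal is concrete ( "you get points based on specific criteria" ).
random.seed(0)
position, reward = 0, 0
for _ in range(100):
    action = random.choice([-1, 1])   # explore an open world
    position += action
    if position == 0:                 # the specific reward criterion
        reward += 1
```

The reward loop is the only one that consumes inputs it did not choose in advance, which is exactly the closed-world / open-world distinction on Axis 1.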

filtering out manic optimists during hiring

Commenting on hiring people who challenge you, versus yes-people.

I find the hardest part is hiring 10 people whom you know will be culled down to 2, or even 0, eventually, with the best of intentions. Especially when the hired are more optimistic than the hirer, and when the hirer is expressly warning the hired that the bar is high.

Then, watching the hired gradually face depression and dismay in their dysregulated bipolarity, as their pathological optimism is ground up in the face of reality.

I think we can all be more careful with each other.

architecture : small language models vs. mixtures of experts

A squad of [ small language models, SLMs ] is absolutely not the same as a [ large language model with a mixture of experts, LLM with MoE ] architecture. This has been said to me before, so now that I have caught up on the jargon, let me comment on it. 
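
One concrete, structural way to see why the two are not the same is to look at where the routing happens. The sketch below is a caricature, not any real framework : an "expert" or "model" is just a callable, and all names and routing rules are invented for illustration.

```python
# MoE : ONE model. A learned router scores experts PER TOKEN, inside a layer;
# the experts share the surrounding network and the same training run.
def moe_layer(x, experts, router):
    weights = router(x)                              # e.g. softmax scores
    top = max(range(len(experts)), key=lambda i: weights[i])
    return experts[top](x)                           # top-1 routing

# SLM squad : SEPARATE models, each trained independently. An external
# orchestrator dispatches whole requests, not individual tokens.
def slm_squad(prompt, models, orchestrator):
    return models[orchestrator(prompt)](prompt)

# MoE at work : routing happens inside one forward pass.
experts = [lambda x: x + 1, lambda x: x * 2]
router = lambda x: [0.9, 0.1] if x < 0 else [0.1, 0.9]
out = moe_layer(3, experts, router)                  # picks experts[1]

# Squad at work : routing happens across independent systems.
models = {"math": lambda p: "42", "chat": lambda p: "hello"}
orchestrator = lambda p: "math" if any(c.isdigit() for c in p) else "chat"
ans = slm_squad("what is 6 * 7", models, orchestrator)
```

Per-token routing inside one jointly-trained network, versus per-request dispatch across independently-trained systems : that granularity gap is why the two architectures are not interchangeable.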

The main difference is the choice between supervision and non-supervision. Meat brains are ( probably ) assembled via a relatively unsupervised process, with some guidance from whatever our early childhood genes are doing at this point in history. Building AI using the same messy foundation is completely backasswards, when we already have 20th century computer technology. The verbal capabilities of humans are a ridiculously thin layer of architecture which sits upon all that evolved before it. Once you organically develop foundations such as set comprehension and therefore logic, you then build verbal coherence on top of that, with little relevance to the messy implementation underneath. 

Unsupervised training of foundation models basically treats every foundation model as if it is a bunch of neurons in a petri dish that need to reevolve the capability for logic - and even then, unless strict rules are applied, the LLM doesn't enforce logic for the same reasons that humans often fail to do so. Human training, and most of what we call culture and civilisation, is built on verbal governance that is in most cases trained via what would be called supervised learning when emulated in AI.

Eventually, we will stop building AI this way for the same reason that we do not reinvent material science for the construction of every factory and every car. Then things will be a lot cheaper.

2026-04-05 at

wet blanket strategy

One of my adulting brand strategies has been to position myself as a wet blanket. This probably comes from having too much success as a child, chatting up people. So now I actively filter out people who are trying to be impressed. This functions to put me on hard-mode for most public competitions where the lingua franca is a sense of mutual desire to be impressive.

For most of my life since 2005, I have introduced myself as a strange, but benign, individual. That way, I am able to focus on the difference between "impressive" and "presently useful".