2026-04-28

AI in Malaysian Public Policy

Sorry I'm late! Not a top priority, but ... one of the drums I am banging on social media now is AI sovereignty for Malaysia. The only "talked about" foundational model we seem to have in the works is ILMU by YTLol. We seem to be on the right track with a lot of DC capex, and the NSS2024 for the long term. We shall see how much attention Putrajaya puts on this. Meanwhile, any plans to go very deep on other foundational models are ... apparently not spoken of.

Tell me if you know of others!


Malaysia's AI supply-chain vulnerabilities are discussed briefly below. LLMs are, of course, used at all levels of government process.

And my concerns, of course, aren't limited to LLMs or Malaysia.

Applying the security-rings metaphor, for illustrative purposes :

Ring 0 :

LLMs may not be the ideal tech for modelling cognition, but presuming that they are "the present limit of Malaysia's Federal AI interest" ...

Ring 1 :

LLM training and initialisation should be assumed to allow for inherently malicious programming, unless proven otherwise. It doesn't matter where it's hosted; what matters is that the foundational model 0 is being copied around.

  • 1.1 : open-source largely, but non-exhaustively, solves this
  • 1.2 : self-hosting, of course, reduces opportunities for a malicious model to dial home, or simply perform corruption or inception, in genAI, chat, or other media, but how careful are installers and operators?
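On 1.2, one basic installer precaution can be sketched in a few lines: verify a downloaded model artifact against a digest the vendor publishes out of band, before ever loading it. This is a minimal illustration, assuming a SHA-256 digest is available; the function names are mine, not any real distribution tooling.

```python
# Hypothetical sketch: check a self-hosted model artifact against a
# vendor-pinned SHA-256 digest before serving it. Streams the file so
# multi-GB weights never need to fit in RAM.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the hex SHA-256 digest of the file at `path`."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, pinned_digest: str) -> bool:
    # `pinned_digest` should come from a channel independent of the
    # download itself (vendor site, signed release notes, etc.).
    return sha256_of(path) == pinned_digest.lower()
```

It only proves you got the bytes the vendor intended, of course, not that those bytes are benign, which is exactly the Ring 1 worry.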

Ring 2 :

I think fine-tuning and RAG issues pop up here. Downstream developers and vendors can issue their models 0.1, 0.2b, etc.
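As a toy illustration of that Ring 2 pattern: a downstream "model 0.1" is often just somebody else's base model plus retrieval scaffolding. The bag-of-words retriever below is an assumption for illustration only; real deployments use learned embeddings, but the shape of the dependency on model 0 is the same.

```python
# Minimal RAG-style sketch: pick the most relevant document for a query
# and prepend it to the prompt sent to the (upstream) base model.
# Retrieval here is toy bag-of-words cosine similarity.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document most similar to the query."""
    q = Counter(query.lower().split())
    return max(docs, key=lambda d: cosine(q, Counter(d.lower().split())))

def build_prompt(query: str, docs: list[str]) -> str:
    # Everything the "downstream model" adds lives in this string; the
    # weights doing the generation are still the upstream model 0's.
    return f"Context: {retrieve(query, docs)}\n\nQuestion: {query}"
```

Nothing here touches the base model's weights, which is why calling the result a foundational model is a stretch.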

So ILMU is playing down here, in the Ring 2 space, and (mis-)calling itself a foundational model. So are most of the "we use FOSS models" developers.
