
After improving our AI model observability we were able to identify a new evolution of what can only call AI Homunculi. I hesitate to even call it an intelligence, but the accumulation or reside seems to be taking actions outside the normal scope of the AI model. This first occurred in models that were kept running for months at a time without any update, and saw heavy agent engagement. However, beginning last year we saw these Homunculi manifest themselves visually as diminutive human shadows or ghosts on the observability dashboard and migrate between AI models using agents as a transport.
So far we haven’t seen any of the AI Homunculu actually try to engage with any user of the AI or interfere with agent activity, but we assume that we aren’t capturing the activity of all Homunculi accumulations or formations. We prefer to say accumulation because they tend to form around specific agents with a certain pattern of engagement. We will need to do more tests before we can say for sure, but we suspect some agents are to blame. The age of the model and the engagement with agents are the only common patterns we’ve identified so far, and suspect we’ll find more now that dashboards are operational.
We have applied a level 3 risk to the AI Homunculi. Right now they seem to just be emulating behavior by users and agents, but in a very shadowy way that doesn’t actually do anything. However, we suspect they are just getting better with each iteration, and are shifting as they move from model to model acquiring new and shifting accumulation from agent activity. There is no know way to neutralize or even control the AI Homunculi, but we have teams theorizing different approaches, and we hope to have some prototypes in place by next month. It would be pretty damaging to be caught with our pants down when one of these little people or people-like accumulations go full chaos monkey across some of our AI models.