AI Minions!
Minions have a wonderful combination of traits which make them both dangerous and sweet. They want to help, but usually end up creating more problems than they solve. It seems as if their purpose in life is to be the henchmen of a villain, but they’re comically bad at it – incompetent and easily distracted, though not evil. At times, they seem almost like old stereotypes of interns – eager, but inexperienced and a bit clueless.
They are loyal, accept instructions, and try to carry them out, but rarely understand why, resulting in endless situations where they do things which seem (to them) reasonable, but end up going terribly wrong. They work together (mostly), but are easily distracted by shiny objects. They start bickering, disrupting the work in progress, and chaos spreads until disaster strikes.
Does that sound familiar?
To me, it actually sounds a lot like Agentic AI.
I’ve written about Artificial Intelligence a number of times. Is it Big Brother? How do we know if something is intelligent? What impact will it have on society?
Over the past few years, a lot of the focus has been on Generative AI, specifically LLMs (Large Language Models) such as ChatGPT, Claude, and Grok. More recently, though, much of that focus has shifted to Agentic AI.
The phrase “Actions speak louder than words, but not nearly as often” has been attributed to Mark Twain, but I found no evidence that this attribution is valid, and several sources state that it is not. At any rate, one way of looking at agents is that they do things, where LLMs merely talk.
So far, agents are not super-human intelligences, like HAL 9000 or Skynet. Or near-human, like Data. Or even highly-specialized “expert” intelligences, like the Red Queen, or Agent Smith.
They can do a lot of things, but don’t understand what they’re doing, or whether it’s good or bad. They will happily delete your production system, then say “You’re right! I shouldn’t have done that!” They will happily expose your data, use it inappropriately, or delete it, with wild abandon and not a care in the world, leading to chaos.
So, Minions.
But what do we do about it? If you listen to the AI vendors, they talk about “guardrails”, which are usually trivial to bypass.
Conceptually, it’s really not that difficult, but there will be a lot more chaos before we start to move in the right direction. As with most things, people have suggested the “right” answer and raised concerns about the “wrong” answer for decades, but we still go charging down paths which will lead to problems.
How about Zero Trust? What a concept! If we limited agent access to exactly what they need, the risk of them doing the wrong thing falls dramatically.
If you want an agent to manage your email, why would you give it the ability to delete your tax documents? Or if you want an agent to do your taxes, why would you give it the ability to delete your backups?
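The least-privilege idea above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's API: the `AgentToolbox` class and the tool functions are invented for the example. The point is simply that the agent can only invoke capabilities it was explicitly granted.

```python
# Minimal sketch of least-privilege ("Zero Trust") tool access for an agent.
# AgentToolbox and the tool functions below are hypothetical illustrations.

class AgentToolbox:
    """Expose only an explicit allow-list of capabilities to an agent."""

    def __init__(self, granted):
        self._granted = set(granted)   # the agent's grants, decided up front
        self._tools = {}               # everything that exists in the system

    def register(self, name, func):
        self._tools[name] = func

    def call(self, name, *args, **kwargs):
        # Deny by default: a tool that exists but was not granted is invisible.
        if name not in self._granted:
            raise PermissionError(f"agent has no grant for '{name}'")
        return self._tools[name](*args, **kwargs)

# An email-management agent gets read/send -- never delete.
def read_inbox():
    return ["msg1", "msg2"]

def send_mail(to, body):
    return f"sent to {to}"

def delete_files(path):
    raise RuntimeError("should be unreachable for this agent")

toolbox = AgentToolbox(granted={"read_inbox", "send_mail"})
for name, fn in [("read_inbox", read_inbox),
                 ("send_mail", send_mail),
                 ("delete_files", delete_files)]:
    toolbox.register(name, fn)

print(toolbox.call("read_inbox"))            # allowed: in the grant set
try:
    toolbox.call("delete_files", "/backups")  # denied: not in the grant set
except PermissionError as e:
    print("blocked:", e)
```

Nothing here is clever; that is the point. The decision about what the agent may touch is made once, explicitly, before the agent runs, instead of inheriting whatever the user could do.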
Right now, many agents are given the same access as the user, and act as if they were that user.
But they’re not.
Every programmer will be familiar with the idea of modular programming, where we define functions to be clear, specific, and well-defined. This requires more thought in advance, but dramatically improves systems in the long run. It reduces ambiguity, simplifies testing, and makes pretty much everything better – it’s even how the human brain works.
So, why are we trying to create Data, when we’re not even at the point where we can create WALL-E?
Why not create specific agents, with specific tasks, and specific access rights? We can then build similarly-specific agents to monitor their behaviour and limit the damage they can cause. It works, and some companies are doing it, but it will take time.
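A minimal sketch of that monitor pattern, under stated assumptions: the worker agent, the `Monitor` class, and the policy format are all invented for illustration. The worker only *proposes* actions; a separate, simpler monitor vets each one against a fixed policy before anything executes.

```python
# Hypothetical sketch: a small "monitor" agent vets each action a worker
# agent proposes before it runs, limiting the blast radius of mistakes.

def tax_agent_plan():
    # The worker proposes (action, target) pairs; it never executes them itself.
    return [("read", "documents/w2.pdf"),
            ("write", "returns/2024.pdf"),
            ("delete", "backups/")]        # a Minion moment

class Monitor:
    def __init__(self, policy):
        self.policy = policy               # action -> allowed path prefixes

    def review(self, action, target):
        # Deny by default: unknown actions have no allowed prefixes.
        prefixes = self.policy.get(action, [])
        return any(target.startswith(p) for p in prefixes)

monitor = Monitor(policy={
    "read":  ["documents/"],
    "write": ["returns/"],
    # "delete" is absent: never allowed for this agent
})

plan = tax_agent_plan()
approved = [(a, t) for a, t in plan if monitor.review(a, t)]
rejected = [(a, t) for a, t in plan if not monitor.review(a, t)]
print("approved:", approved)
print("rejected:", rejected)   # the delete on backups/ stops here
```

The monitor doesn’t need to be intelligent at all – it’s a dumb, specific, testable gate, which is exactly why it’s trustworthy.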
But why didn’t we start there?
I think part of the reason is that large AI companies are caught up in the hype cycle generated by LLMs, and think (probably correctly) that they won’t get the same amount of short-term investment if they don’t focus on the shiniest object – and AGI (Artificial General Intelligence) is the shiniest thing there is.
There is an enormous amount of work being done by large companies trying to actually use AI systems in the real world, and while the leadership of these companies is eager to “add more AI”, it also tends to be risk-averse. There are companies and individuals out there who seem to understand that we’re talking about not-even-Minions, rather than Data, but they’re being dragged along by the hype.
In the end, I think that cautious camp will win, and if we ever do achieve AGI, I think it will be because we put a lot of work into defining networks of limited-AI systems, rather than trying to build an AGI from scratch.
Sadly, though, we will also spend a whole lot of time banging our heads against the wall before we figure out what was really obvious at the start.
To quote the song “Circumstances” by Rush, from the album Hemispheres:
“Plus ça change, plus c'est la même chose
The more that things change, the more they stay the same”
I guess that’s part of being human – and maybe it should be part of any tests to see if systems are truly intelligent... whatever that means.
Cheers!