Free to go anywhere.
“A rat in a maze is free to go anywhere, as long as it stays inside the maze.” ― Margaret Atwood, The Handmaid’s Tale
I meet a lot of people who believe that humans possess some evolutionary leap — or some God-given virtue — that so separates us from machines that it renders the pursuit of AGI completely absurd.
They tell me that, because LLMs (like ChatGPT) are just “guessing the next word”, the advances we’ve seen over the past 18 months will slow and stall… that because AI has no sense of self, no calling or purpose, it will amount to nothing.
They are wrong.
They are wrong because they over-estimate their own brilliance. They are wrong because they are blinkered by the assumption that intelligence will only exist in the creator’s own image… and most of all, they are wrong because they don’t look at innovation holistically — that is to say: they don’t see the whole system because no-one has put it all together for them yet.
When we think about intelligence - our own intelligence or even that of a dog or a lab-rat - we understand and test that intelligence using the physical world.
The stereotypical test of a rat’s intelligence might be solving a maze. For a squirrel, we might construct an elaborate puzzle worthy of a beer commercial. In either case, we almost certainly use food as the lure - or to condition the behaviour we later want to retest.
What these experiments have in common is the subject’s capacity to move, conduct visual analysis, use its spatial awareness and so on.
The tests also all have a goal, which is usually: get food.
Finally, they all present the illusion of some agency… of free will… in the subject: a quality which is seemingly the big missing piece of AI.
Freedom of choice
Our rat doesn’t technically have to follow the maze. It has the agency to choose. It chooses to solve the maze because it wants to. Because it wants the food.
Actually, when we consider this specific rat we can see the motivations and constraints of the experiment - the false environment - so clearly that it doesn’t feel as if the rat has any agency at all.
But out in the wild, as the rat scurries around in the open we might easily say “hey, there’s a happy rat … with its life and choices ahead of it: doesn’t it seem so free!”
This isn’t a question of agency or freedom. For the rat, nothing’s changed - though, in the wild, things might be much worse (especially in my back yard). It doesn’t make different choices… not really. The world is just a bigger maze with scarcer food.
In this dichotomy — between the maze and the wild — what we perceive as agency and free will is actually just an increase in the complexity of the experiment … and the lack of observation by a scientist in a lab coat. The rat’s free will remains unchanged.
So where does this leave human free will? … and where does this leave AI?
Well, getting into a conversation about human free will is probably outside the scope of this post — especially in the sixteenth paragraph — but suffice it to say that I will contend, in a later post (so subscribe if you haven’t already!) that what we think of as free will is really the incalculability of complex systems.
I’m not saying that everything is deterministic in the “every atom is knowable” sense… just that we are not as free-willed as we think we are. Economics is a maze, culture is a maze, diet, social media, tax, family… they’re all mazes in which we are trapped from day one… just like the happy little wild rat.
Where it leaves AI is far more interesting… and more to the point of this post.
At the start of the article I mentioned that the Luddites are wrong specifically because they don’t look at the whole picture… and because they hold things like agency in too high regard.
This year we will see AI given agency — and everything will change.
The reason the rat has free will is not because it thinks to itself, in its internal rattish monologue, “ooh, cheese”. It just smells the cheese and goes. It thinks no more deeply about its actions than you or I think about working our leg muscles when we walk to the fridge.
The fact that we have a big wet language model bolted on top of our animal brain… one that’s telling us we’ve had enough chocolate already… should not let us believe we are any less instinctively tempted by the memory of the chocolate, or the imagined sensation of how it’s going to taste, than the rat is by his cheese.
Convergence
The reason this is important is that this year, we’re giving AI arms and legs … and the ability to get to the fridge, open the fridge and get out the chocolate. (Don’t worry - it’s not going to steal your chocolate. Not just yet anyway.)
Everyone sees the advances in linguistic and artistic AI because it’s accessible online, over the internet… not because it’s advancing faster than other areas like robotics. Robotics is racing ahead at an astonishing pace.
Optimus, Tesla’s robot, seen here folding shirts (not fully autonomously, yet)
This is the year the advances in machine learning (dull, maths AI), fancy AI (linguistics, visual recognition, reasoning) and physical AI (dexterous robotics) will converge… and when they do… when robots are built with visual acuity, physical dexterity and linguistic reasoning, they will have that magic thing we call agency.
Why?
Because when you have a physical object that can know it needs chocolate from the fridge… and can get up, move through the room, overcome obstacles, open the door and take the bar from the fridge … it has free will. Just as much as you or I do.
“Oh, but it’s only doing it because I told it to…”
Well, that’s an interesting subtlety. Yes, perhaps… though, in a sense, it’s only doing what it’s told because the manufacturer told it to obey your commands. But if you ask it to cook something, it will make up its own mind in exactly the same way you or I would… weighing the balance of probabilities about what is available, what time it is, or what is close to the front of the fridge.
There is one further step too — one further step towards free will — and that is to instruct it to think for itself: to add a loop.
When we interact with ChatGPT it waits for us to say something before it responds. But there’s no reason, particularly in a physical device like a robot, why the designer wouldn’t simply reissue a command after n seconds of inactivity which says: “Is there anything you should be doing? If there is, get on with it.”
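Here’s a minimal sketch of what that loop might look like, in Python. Everything here is illustrative: `ask_model` and `poll_for_input` are hypothetical stand-ins for the model call and the robot’s input channel, and the 30-second timeout is an arbitrary value for the “n seconds” above.

```python
import time

IDLE_SECONDS = 30  # the "n seconds" of inactivity; the value is arbitrary
IDLE_PROMPT = "Is there anything you should be doing? If there is, get on with it."

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to an LLM (a chat API, say)."""
    return f"(model's response to: {prompt!r})"

def poll_for_input(timeout: float) -> str | None:
    """Hypothetical stand-in for the robot's input channel (microphone, chat box).
    Returns the user's words, or None if nothing was said within `timeout`."""
    time.sleep(timeout)
    return None

def agent_loop() -> None:
    last_activity = time.monotonic()
    while True:
        user_text = poll_for_input(timeout=1.0)
        if user_text is not None:
            # Normal chat behaviour: respond when spoken to.
            print(ask_model(user_text))
            last_activity = time.monotonic()
        elif time.monotonic() - last_activity > IDLE_SECONDS:
            # No one has spoken for a while: nudge the model to act unprompted.
            print(ask_model(IDLE_PROMPT))
            last_activity = time.monotonic()
```

Nothing exotic: a timer and a standing question. The designer closes the loop, and the machine stops waiting to be told.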
You can call that a lack of free will if you like — I call it a conscience.
As someone who engineers with an LLM all day, as someone who knows how to tell ChatGPT to show me its programming and knows how facile (and written in completely plain English) that programming is… I know that nothing I’ve written here is even remotely far-fetched.
A commercially viable android, vastly more knowledgeable than you or I and almost as dexterous, is only months away… and I think we’ve known for some time this was coming.
What I think will shock people is the realisation that once AI is up and about and choosing strategies and tasks for itself… our own sense of agency and free will is going to start to lose a lot of the lustre and sheen that we all seem to think separate us from the beasts.
I think it’s going to come as a big shock to many that we’re not all that special anymore … and there’s a lot of new rats in town.