What is an internal world model anyway?


There was a brief moment in my life when I thought it was obvious what an internal world model is. Self-evident, even! It’s the homunculus version of the cosmos as represented inside the confines of our skulls. Like this Emily Dickinson poem I found in one of Maria Popova’s always illuminating articles about consciousness:

The Brain — is wider than the Sky —
For — put them side by side —
The one the other will contain
With ease — and you — beside —

The Brain is deeper than the sea —
For — hold them — Blue to Blue —
The one the other will absorb —
As sponges — Buckets — do —

On the basic structure of reality, my internal representation of the world is probably fairly consistent with yours. For you, me and the late, great Emily Dickinson, the colour blue appears in ways that are consistent and determined by the laws of physics. But once the photon is captured by the rods and cones and passed along the optic nerve to the brain, we are also pretty consistent in passing that information to a group of neurons that assign orientation, edges and meaning, identifying whatever blue shape we’re looking at.

There are other ways our internal models share a base reality. Regrettably, marketing and publicity are one factor – I imagine your idea of “Jennifer Aniston” is remarkably consistent with mine (unless you know Jennifer Aniston personally, in which case your truth will diverge sharply from my highly PR-managed perspective).

In all our personal dealings, our internal world models will diverge sharply. I have a very different set of representations linked to the concept of “grandmother” than you do, thanks to our uniquely different grandmothers. I used those examples because there have been claims (and refutations and resurrections) that specific single neurons in both your brain and mine are devoted uniquely to representing these people in our mind’s eye.

All this goes some way toward explaining what the people at BrainWorlds – an interdisciplinary research center at the University of Freiburg – are up to. They want to know: how are these worlds constructed? Is there an identifiable physical substrate that underpins their construction? Like the way the brain perceives blue? Or like the way the brain (maybe) conceives of Jennifer Aniston?

Some things must be hardwired: take an insect’s ability to understand “up” and “down” straight out of the gate. They don’t go to insect school with little insect physics teachers teaching them navigation and how to account for wind speed. So that information must already exist in the physical structure.

What drove me to attend a BrainWorlds conference was a throwaway comment I had heard from a speaker at a previous conference. Amid all the existential risk chatter and fears of AI superintelligence, he was dismissive. “Talk to me when AI has an internal world model,” he scoffed.

I was immediately intrigued. As someone who knows nothing, zero, about the weighting systems of neural networks or backpropagation (in fact I’m a bit like ChatGPT about all this AI terminology in that I can complete all the right phrases but fuck me if I know what I’m actually talking about) – I felt like I suddenly had a handle on something broadly understandable. AI is autocomplete, roughly. We are not. (Well, except me when I’m talking about AI.) And maybe this is what’s special about biological intelligence. It takes in new information and integrates it through the complicated little simulated universe in our skull for analysis. An internal world model contextualises information. An internal world model lends us common sense. AI’s lack of one is why it’s considered brittle and lacking in common sense.
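(For the technically curious, here is roughly what “AI is autocomplete” means, in a deliberately toy sketch of my own: count which word tends to follow which in some text, then generate by repeatedly picking a likely next word. Real systems learn these tendencies with neural networks rather than lookup tables, so treat this as a cartoon of the idea, not anyone’s actual machinery.)

from collections import defaultdict, Counter
import random

# Toy autocomplete: learn which word follows which, then babble plausibly.
text = "the brain is wider than the sky the brain is deeper than the sea"
words = text.split()

follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def complete(word, length=6):
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        # sample the next word in proportion to how often it followed before
        out.append(random.choices(list(options), weights=list(options.values()))[0])
    return " ".join(out)

print(complete("the"))  # e.g. "the brain is deeper than the sky the brain"

It completes all the right phrases without, in any meaningful sense, knowing what it’s talking about. Sound familiar?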

(In fact there’s a whole very interesting area of neuroscience devoted to this, sometimes called predictive processing: the idea that we build a miniature model of the world in our brain that runs largely on autopilot unless the stimulus we receive from the outside world contradicts the expectations set by that model.)
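(A minimal sketch of that idea, with the huge simplifying assumption that a single number can stand in for the whole model: the brain predicts, compares the prediction against the incoming stimulus, and only bothers updating when the mismatch is big enough to count as a surprise.)

learning_rate = 0.3
belief = 20.0  # the model's current expectation, say, room temperature

def perceive(stimulus, surprise_threshold=0.5):
    global belief
    prediction_error = stimulus - belief
    if abs(prediction_error) < surprise_threshold:
        return "autopilot: world matched expectations"
    # a surprise: nudge the internal model toward what actually happened
    belief += learning_rate * prediction_error
    return f"surprise! belief updated to {belief:.1f}"

print(perceive(20.2))  # autopilot: world matched expectations
print(perceive(25.0))  # surprise! belief updated to 21.5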

This simulated universe is our internal world model, and every living creature has one. And in order to make AI that we can trust not to hallucinate, we will need to find a way to give it an internal world model.

I was so excited to have this piece of solid information in my grasp.

Then I brought it up to my tech friends, the sorts of people who know what backpropagation means because they have a robust mental model of it, not just what the words mean but some of the mathematics too. 

They immediately said the concept of an “internal world model” was meaningless, because of course AI has an internal world model of its own: what the hell do you think all those nodes and weights and backpropagation algorithms are but a model of the world? In a brain that’s not like ours.

And then I realised I needed to update my internal world model of what an internal world model is. And then I realised that I needed to update my internal representation of my self as I exist inside my internal world model, from a self that understands what an internal world model is to a self that does not.

I am typing this from under my bed. This is where I live now, subsisting on a diet of centipedes, spiders and the occasional mouse. I’ve had enough world and I would please like to speak to a manager now.

[this is cross-posted from my Substack, which I have just started after one and a half years of biting my nails about it]
