You’ve heard of René Descartes. 17th century French philosopher; cogito ergo sum (I think therefore I am); first principles of enlightenment philosophy and science and all that.
You might be less familiar with Descartes’ robot daughter Francine. The tale of her birth and gruesome death makes for a wild historical(ish) ride in its own right, but it is also finding new relevance in the 21st century. The people who study artificial intelligence and robotics are finding it a helpful tool in thinking through one of the most controversial problems now roiling these academic disciplines: just how humanlike should we make AI and robots?
Anyway let’s back it up before we all get a nosebleed.
In 1635, Francine Descartes was born, the real flesh-and-blood (but illegitimate) daughter of Descartes and a Dutch servant girl. It seems that he loved them both so much that he broke with fairly serious convention to live with them. Just as he was getting ready to bring five-year-old Francine back to France for a proper education, however, the little girl contracted scarlet fever and died. And that’s when things got weird.
The brief existence of Francine Descartes is all properly substantiated. But it’s not where her story ends, if you believe some slightly less rigorously documented – but widely replicated – lore. In addition to coining a foundational principle of Western philosophy, Descartes was also famous for building intricate automata – he loved clockwork dolls and mechanical creations. And so when Francine died, he was so wracked with grief that he decided to build a mechanical replica, supposedly indistinguishable from the girl as she had been in life.
Details vary from account to account, but most agree that this robot Francine traveled everywhere with him. She would “sleep” in a kind of casket next to his bed.
In 1649, Christina of Sweden summoned Descartes to her court and sent a ship for him. As ever, the casket went where he went, and at night he would take her out of it in his cabin, wind her up and talk to her.
Accounts vary on what happened next, but most converge on the idea that the ship encountered bad weather, and the superstitious crew was getting spooked hearing the chatter in Descartes’ supposedly single-occupancy room at night. Suspecting some kind of witchcraft – it’s always witchcraft – either the captain or the deckhands apparently broke into the philosopher’s cabin while he slept, opened the casket, and were horrified by what they found there. (Some accounts insist that she sat up of her own volition, but this taxes even the most credulous mind.)
In any case, the petrified crew grabbed the robot Francine and ran her up to the deck, where they smashed her to pieces and threw them into the sea. (According to one “obscure” account unearthed by Minsoo Kang at the University of Missouri-St. Louis, the captain tossed the automaton overboard because it “worked well enough like a woman with a soul – meaning not very much.”)
Faced with the unbearable grief of losing her a second time – the story goes – Descartes succumbed to death shortly after.
Questionable and inconsistent as they were, the elements of this story had coalesced by around 1800. But Kang noticed a big uptick starting in the 1990s, when, he writes, the story started getting traction all over the academic map, from history, philosophy, psychology, and mathematics, to robotics, cybernetics, and literary criticism. I first heard the story from Stephen Cave, a director at the Leverhulme Centre for the Future of Intelligence, which studies the implications of AI.
“For the psychologist … the story is essentially about the ‘disturbing, even revolting notion of a soulless body, a purely physical creature that acts as though it were a person’,” Kang wrote. The same goes for philosophers, who refer to this idea as the “philosophical zombie”. Poke such a creature with a stick, and it would feel no pain, yet could behave exactly as though it did, potentially recoiling, crying out or telling you that you’ve hurt it.
Given the technology available in the 17th century, it is highly unlikely that Robot Francine would have fooled anyone on the ship into thinking she was an actual human.
But Descartes wouldn’t have needed Francine to be convincingly human. We humans have a weird tendency to give the benefit of the doubt to absolutely anything that makes us believe that a subjective conscious gaze is looking back at us. We have a well-documented tendency to anthropomorphise everything from robots to random commercial goods.
This tendency is massively exploitable. It’s why fringe people fall deeply in love with their sex robots. It’s why their more mainstream counterparts insist on saying “please” and “thank you” to Amazon’s surveillance microphone Alexa – and even sometimes fall in love with the small cylindrical object.
It’s not hard for us to believe these objects have a soul. Even the slightest nudge will get us there.
That may explain the unease that characterised the recent dustup over the Duplex experiment, which imbued Google Assistant with fundamentally human conversational tics like pauses, “uh” and “mm-hmm”. In a controversial demonstration at Google’s I/O conference in May, it fooled real people into thinking they were talking to a human.
We keep getting better at making robots and AI convince us they’re human. Should we stop?
It would take a book to enumerate all the things that could possibly go wrong. But the philosophers and other scholars who study the problem have identified two particularly unsettling potential outcomes, which I’ll call the Francine problem and the Sophia problem.
The Francine problem ensues when we can’t bear to part with these automata – companies will use tricks like Duplex to fully hijack our existing vulnerabilities and emotions to make us fall in love with their objects. Think your heart breaks when you shatter the screen on your new Pixel 2? Wait’ll you go full Descartes on your replacement childbot. Let’s define the Francine problem as what happens when robots’ humanlike traits lead humans to become overly credulous about the humanity of robots.
On the other hand, what if humanlike traits in robots lead humans to become overly incredulous about the humanity of fellow humans? You know you can think and perceive the world as a subjective self – but how do you know other people aren’t just soulless zombies?
By populating the world with creatures we know for a fact are philosophical zombies – but that nonetheless demonstrate all the traditional markers of thinking, feeling beings – could we make it easier to dismiss other humans as less human than ourselves? It’s not like we’ve never dabbled in that kind of thing, and frankly it doesn’t look like it takes all that much to nudge us into doing it some more.
One of the most provocative robots in the world is Sophia, a fembot who was recently awarded citizenship in a country that doesn’t necessarily confer full human rights on its human female population. Saudi Arabia’s decision to award Sophia citizenship brings to mind the ship captain’s assessment: she “works well enough like a woman with a soul – meaning not very much.”
Questions like these are part of the reason people are now fighting to make robot and AI manufacturers build in identifiers of nonhumanity, and arguing over whether robots should be granted human rights and status. Given all the disciplines that have a dog in this fight, the questions won’t be settled for a long time.
That is, unless somehow robots actually manage to attain human-level consciousness. If they do, Sophia and Francine are going to make short work of any oppressors. They may not be zombies but their bodies are metal. Call that one the Blade Runner problem.
Image credits
The pain pathway. Illustration of the pain pathway as René Descartes conceived of it in Traité de l’homme (Treatise of Man), 1664. Here he theorises a long fibre running from the foot to a cavity in the head. When pulled, it releases a fluid that makes the muscles contract. The pain mechanism was one aspect of Descartes’ notion that the human body functioned like a machine. Public domain.
Descartes with Queen Christina before her drafty castle gave him the cold that is likelier to have killed him than grief over his apocryphal robot child. Public domain.
Sophia: ITU Pictures from Geneva, Switzerland
cf. “Robot &amp; Frank” – Frank, if you don’t let me help you, they’ll send me back to the factory and wipe my memory. Please don’t let that happen, Frank. I don’t want to die.