
It looks like the Basilisk definitely got him, so it’s worth resurfacing the explainer I wrote earlier this year.
If you’re like most people, you haven’t heard of Roko’s Basilisk. If you’re like most of the people who have heard of Roko’s Basilisk, there’s a good chance you started to look into it, encountered the phrase “timeless decision theory”, and immediately stopped looking into it.
However, if you did manage to slog through the perils of rational philosophy, you now understand Roko’s Basilisk. Congratulations! Your reward is a lifetime of terrorised agony, enslaved to a being that does not yet exist but that will torture you for all eternity should you deviate even for a moment from doing its bidding. According to internet folklore.
The internet has no shortage of BS and creepy urban legends, but because Roko’s Basilisk involves AI and the future of technology, otherwise-credible people insist that the threat is real – and so dangerous that Eliezer Yudkowsky, the moderator of rationalist forum Less Wrong, fastidiously scrubbed all mention of the term from the site. “The original version of [Roko’s] post caused actual psychological damage to at least some readers,” he wrote. “Please discontinue all further discussion of the banned topic.”
Intrigued? Yeah, me too. So despite the warnings, I set out to try to understand Roko’s Basilisk. By doing so, was I sealing my fate forever? And worse – have I put YOU in mortal danger?
This is a science blog so I’m going to put the spoiler right at the top – Roko’s Basilisk is stupid. Unless the sum of all your fears is to be annoyed by watered-down philosophy and reheated thought experiments, it is not hazardous to keep reading. However, although a terrorising AI is unlikely to reach back from the future to enslave you, there are some surprisingly convincing reasons to fear Roko’s Basilisk.