“Well, you know we only use about 10 per cent of our brain.”
I don’t like when people tell me this. Someday, I hope to acquire the guts to issue the following rejoinder: “Which 10 per cent do you use?” But because I don’t like confrontation, I usually just make a face of mute disappointment and change the subject.
If you read LWON, you already know we use 100 per cent of our brain. That’s not the point of this post. But you know what is? I’ve spread similarly outrageous rumours about the brain.
This week, my esteemed colleagues will try to convince you that chemistry is the most nightmarish discipline to cover as a science journalist, or maybe archaeology or biology or physics. They will be wrong. The most dangerous science is neuroscience, because it gives journalists so much rope to use to hang ourselves.
The brain is such an agreeable little lump of meat. Swing a tennis racket and neurons fire in your motor cortex. Try not to smoke a cigarette after two glasses of mulled wine, and extra blood flows to the executive manager in the dorsolateral prefrontal cortex.
It’s so seductive to try to explain its complicated behaviours by pointing to specific functional areas as though they were cuts of meat on a butcher’s chart.
Thinking about math? That’ll be the pork loin. Trying to resist that piece of chocolate? You’re engaging the prime rib. Trying too hard to be funny in a post about neuroscience strains the neurons in the anterior flank steak.
It’s a bit more complicated than that. Greater activity in certain brain areas might reflect what you’re doing or thinking, but there are also an enormous number of connections among and between all the different areas, so many that it’s not that clear how meaningful it is when one lights up.
Anyway, what do we even mean when we accuse one of these cuts of meat of “lighting up?”
Susan Greenfield and I once had a chat about this. “They talk about the brain lighting up in certain areas when certain thought processes are happening,” she told me. “I’m reminded that whenever I use my coffee maker, the light goes on. I know it’s making coffee by looking at the light, but that light doesn’t tell me anything about how the machine makes coffee.”
And even that thin method has its limits. For example, fMRI studies of people trapped in clanking, claustrophobic machines for several hours have revealed that when they think about swinging a tennis racket, maybe some part of their brain gets a bit of extra blood flow, as revealed in some false-colour images. But when someone is in an fMRI machine, thinking about playing tennis, how similar is that to actually playing tennis? No idea, because no one’s ever managed to wear an fMRI machine during a game of doubles.
These thin results can get a bit dangerous when they’re overinterpreted, as Sharon Begley brilliantly explained a few years ago at the Daily Beast. She looked at some discredited neuroscience papers that had overrelied on spurious correlations and functional magnetic resonance imaging — fMRI, the neuroscientist’s favourite brain imaging technique — to draw questionable conclusions. “What’s striking about the discredited papers,” Begley said, “is how blithely they tend to vindicate the crudest of stereotypes.” For example, depending on what you’re trying to prove, fMRI data could show:
“that women love shopping because they’re ‘gatherers’, that girls have different kinds of brains and need to be taught separately, that gay men and straight women read maps similarly.”
By no means am I suggesting neuroscientists can’t do amazing things by looking at the brain’s blood flow changes and electrical signatures. It is possible to use fMRI to tell when a person who was thought to be in a vegetative state is thinking the word “no.” You can even apply that information; a thought-controlled prosthetic arm contains signal-processing algorithms that can divine your neurons’ intent to move your arm and translate it to an electronic replacement.
But figuring out how to use the brain’s signals to do something interesting is not the same thing as understanding the brain. The blood flow changes revealed by fMRI are not answers, they’re clues. The reason they can’t be answers is that there is as yet no general theory for neuroscience. Neuroscience has no E = mc² or F = ma.
Or rather, it has lots of different ones. The human brain has about 100 billion neurons, each networked with up to 25,000 other neurons by way of communication channels called synapses. A synapse will only pass information to a neighbouring neuron after an electrical impulse called an action potential has motivated the transfer; then ion channels open and neurotransmitters are released. This is the basic interaction that creates all of human experience. So that’s biology (neurons), physics (action potentials), chemistry (ion channels), pharmacology (neurotransmitters), and a lot of calculus. Some researchers believe quantum mechanical processes are involved too. Neuroscience, in other words, is all the sciences rolled up into one.
Which of them is the governing discipline? Whose equations are most important? These are important questions when it comes to modeling anything. A model needs a basic unit. Is it the neuron? Depends on who you talk to. Henry Markram, a neuroscientist at EPFL who has spent almost a decade trying to simulate the human brain on a supercomputer, thinks the basic unit should be the neocortical column, a small cylinder that in human brains is composed of about 70,000 neurons. He likens it to the bricks that make up a house. But others think that brick is the atom. Or the molecule. The danger is that under enough scrutiny, any of these bricks could eventually begin to seem as complex as the house itself.
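To see how slippery the choice of “brick” is, consider the simplest textbook candidate: the leaky integrate-and-fire point neuron, which collapses all that biology, chemistry, and pharmacology into one differential equation. This is a minimal illustrative sketch, not anything from Markram’s project; all parameter values are generic textbook defaults, not drawn from any particular study.

```python
def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_threshold=-50.0, v_reset=-65.0, resistance=10.0):
    """Leaky integrate-and-fire neuron (Euler integration).

    Integrates dV/dt = (-(V - v_rest) + R*I) / tau over the input
    current samples; whenever V crosses v_threshold, records a spike
    time (in ms) and resets V to v_reset.
    """
    v = v_rest
    spike_times = []
    for step, current in enumerate(input_current):
        # Membrane potential leaks toward rest and is driven by input current
        v += (-(v - v_rest) + resistance * current) * (dt / tau)
        if v >= v_threshold:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# A constant 2 nA input (steady-state V = -45 mV, above threshold)
# makes the model fire periodically over 100 ms of simulated time:
spikes = simulate_lif([2.0] * 1000)
```

Even this cartoon brick hides choices: the time constant, the threshold, the reset rule. Swap it for a Hodgkin-Huxley model with real ion-channel dynamics and the brick starts looking like a house of its own.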
When there’s no equation to fall back on, there’s a lot of leeway to just make shit up. Falsely reassured by false-colour images and stretched metaphors, we science writers can stretch the data to draw convenient conclusions about human nature (“women love shopping!”), but that is only part of the problem: the metaphors are even worse. “The brain is wired very much like a microprocessor,” I once mansplained in a blog post for IEEE Spectrum, without having the slightest clue that what I was parroting was absolute bollocks.
In the grand scheme of things, journalists inadvertently misrepresenting neuroscience is a bit of a first-world problem. The neuroscientists know the score; the journalists will keep getting either bamboozled or ignored. No one ever died from believing we only use 10 per cent of our brains.
There is one group, however, who will suffer disproportionately. Won’t someone please think of the Singularitarians?
Enabled by tech journalists who bang on about the brain’s “wiring diagram” and its similarity to microprocessor architecture, this group firmly believes that within ten, twenty, fifty years, computers will rival the sophistication and complexity of the human brain. As far as I can tell — because there are different sects — Singularitarians believe that this will be the magic moment when we will be able to upload our brains into machines and float up into the Great Big Cloud Computer of immortality. When you tell them that this is highly improbable, they get very angry.
The 10-percenters aren’t entirely off base. We may use 100 per cent of our brain, but I’ll be damned if we understand 10 per cent of it. So the next time someone brings up that pernicious statistic, remember that it’s not their fault: it’s neuroscience’s fault for being the cruellest of them all.
Image credits
Brain illustration from Shutterstock
Cow figurine from Shutterstock
MRI machine from Shutterstock
Matrix brain from Shutterstock
Nice piece. I’ve got two comments. First, there’s quite a bit of work going on with higher level modeling – I’m told that there are neural network models of the auditory system that are accurate enough to show many of the auditory illusions. This is at a much higher level of abstraction than the neural column. It just doesn’t get as much ink.
That leads into the second comment: if you must use a computer analogy, a data flow computer model is much better than the current digital computer processor. Unfortunately, it’s not as well known.
Great piece about journalism and “science”.
And of course, the whole country where I live (Spain) believes that we use less than 10% of our brain.
Well, now I know they do. Or less.
Neuroscience gives journalists enough rope to hang themselves, and they sure use it all.
The same happens in other areas of science, and especially with drugs: people believe a newspaper headline before years of study by a real scientist on the matter.
Dumb readers, stupid journalists (I don’t mean bloggers, but big newspapers).
And many times I spend my time emailing the “journalist” who wrote that nonsense, or even deadly advice, and usually they don’t give a shit about what they wrote: it’s water under the bridge.
Give’em more rope, let’s see a multi-hanging-journalist-day.
You hint at this, but part of the problem is the journalist/neuroscientist interaction. The articles that get the most press and excitement are the ones that tell the simplest stories (i.e. “We have discovered the love brain region”). These are also the articles that have the weakest data behind them and are most rapidly proven wrong. fMRI is being used in nuanced ways in hundreds of studies, but few people write about these studies in major publications.
Even in this post, one of the few people you deemed worth naming is Susan Greenfield who has a track record of extremely simplistic neuroscientific statements. Neuroskeptic has been covering this in some detail: http://neuroskeptic.blogspot.com/search/label/greenfield
As an interesting aside, you also question the utility of knowing the brain regions that most change when one imagines playing tennis, but positively note the studies that use fMRI to communicate with people in vegetative states. This is the same study! Owen et al., Science, 8 Sept 2006. They had people imagine playing tennis vs moving around a room. The key wasn’t which brain regions activated for imaginary tennis; it was that they were very distinct from the other task and allowed someone in a vegetative state to respond to questions. Such a study might not have been possible if the response to imaginary motions hadn’t been reasonably mapped in earlier, seemingly useless studies.
I’m not sure if I disagree with the main thrust of this post, but I’m noting that even a post on oversimplifying neuroscience can’t help but oversimplify.
The writing is interesting, no doubt. And as usual, confusing.
All biological systems (a piece of DNA, muscles, neurons, glia, the digestive system, the nervous system) are complex. Trying to find a unique equation for neuroscience can be a pastime for idle moments. Can we keep math apart from physics?
Not only neuroscience; all sciences are usually intermingled. How do we separate the physical forces when a chemical, say NaCl, is dissolved in water?
It’s well said that linking some forms of brain activity to brain functions is often misleading. Still, brain mapping studies using different imaging/molecular/cellular techniques can identify the ‘important’ brain area for a particular task. For the time being, this information — the relative importance — may be helpful for some practical purposes, e.g., in medicine.
Thanks for the thoughtful comments, guys. John Roth, I appreciate the introduction to the data flow computer model. That will prove useful, so thank you.
Drogoteca: You’ve put your finger on one of the most frustrating things about science writing. What is a science journalist supposed to do when they have 400 words of allotted space for a story that should take into account 3 gigabytes of papers and replication studies? To be truly honest about the significance of the latest finding would require 1500 words of caveats. No one wants to slog through 1500 words just to get to the conclusion that, ho-hum, we haven’t solved any big mysteries, we just found another interesting clue, and more research is needed.
bsci: yes, great points all. Greenfield did make a good point though, about the infeasibility of modeling the brain. I agree with your point on the usefulness of the tennis study, but it was one specific useful application. I’m not saying neuroscience is flawed, I am saying it is incredibly difficult to write about it honestly, and without taking small and interesting results and overgeneralising them.
and Asim Bepari, yes: I was not saying there’s a problem with neuroscience. I was saying there’s a problem with writing coherently about neuroscience. Consider my contribution here an object lesson. 🙂
Neuroscience (like the other sciences) is in its infancy, but we’re all susceptible to the human push to produce a conclusion, a meaning, a concrete result.
My favorite question about brain activity is:
“When you give a person who’s sneezing and dripping from pollen allergies an impressively striped sugar pill, and then twenty minutes later their histaminic reactions subside, what has happened, at the molecular level, in their brain and body?”
Not only do neuroscientists not have a clue about the brain’s mechanisms here, they also don’t yet have a clue about how to identify and delineate those mechanisms. The placebo effect, in its many forms, is still completely mysterious at the molecular level. So that question, which will eventually be answered, will have to wait for a few decades.
We don’t have the tools yet to observe what’s going on in the brain at the molecular level. And even if we had the observational tools, we don’t have the computing capacity yet to model more than tiny pieces of such cascades of complex molecular interactions. And, of course, the brain is an integral part of the body, so brain processes are just parts of larger processes which are constantly cascading through our bodies.
I think good neuroscience reporting should, at this point, avoid conclusions, and should instead concentrate on the questions we’re asking about the brain, how current studies are refining the questions, any successes at identifying specific molecular mechanisms in the brain, and any interesting developments in new tools to make less and less crude observations.
For some of us, that would be interesting reporting.
Honestly, this is an incredibly eye-opening post. Most of us read articles online — the news findings, the research blah-blahs — and yet somehow we fail to ask the questions that have been asked here. The perspective opened up here is well worth looking into. I am something of a newbie to neuroscience, although I am closely related to it as I belong to psychology, but I can very well see the point: neuroscience is being framed into a structure and dialogue it does not belong to, when it is far wider in scope than that frame can hold. We have not even started to read the book that explains the mysteries of the human brain, so we cannot expect much from the whole lot of research as of now. The best standpoint at this stage is to watch the developments and gather as much data as possible. Perplexing things have perplexing ways of opening up their secrets. Nicely done!