fMRI and Lying – An Interesting, Different Approach

The number of published, peer-reviewed articles exploring the use of fMRI as a lie detector is now around 20, and at least two companies continue to sell fMRI-based lie detection services in the United States (in my much-repeated view, very prematurely).  A new article, though, uses fMRI in a different and interesting way to explore lying.

Josh Greene and Joseph Paxton have just published a clever paper in the Proceedings of the National Academy of Sciences examining some of the processes involved in one kind of lying.  Greene and Paxton, Patterns of Neural Activity Associated with Honest and Dishonest Moral Decisions, PNAS 106:12506-12511 (July 28, 2009).

Greene and Paxton enrolled subjects in what the subjects were told was a study of paranormal ability to predict the future.  While in the scanner, subjects predicted the outcomes of computer-generated, random coin flips.  Some of the time they had to record their prediction in advance (the “no-opportunity” condition); other times they simply reported whether their prediction had been correct after they were told the “true” result (the “opportunity” condition).  Each trial was a gamble of $3, 4, 5, 6, or 7 – a correct guess won that amount, an incorrect guess lost it.
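The task structure can be sketched as a small simulation.  This is purely illustrative: the `cheat_rate` parameter, the helper name, and the particular rate used below are my assumptions for the sketch, not values from the paper.

```python
import random

def run_opportunity_trials(n_trials=70, cheat_rate=0.0, seed=0):
    """Simulate the 'opportunity' condition: the subject privately predicts
    a coin flip, sees the outcome, then self-reports whether the prediction
    was correct.  cheat_rate is the (assumed) probability of falsely
    reporting a miss as a hit; 0.0 models a perfectly honest subject."""
    rng = random.Random(seed)
    reported_hits = 0
    winnings = 0
    for _ in range(n_trials):
        stake = rng.choice([3, 4, 5, 6, 7])            # dollars at risk
        prediction = rng.choice(["heads", "tails"])     # private prediction
        outcome = rng.choice(["heads", "tails"])        # random flip
        hit = prediction == outcome
        # A dishonest subject sometimes reports a miss as a hit.
        report_hit = hit or (rng.random() < cheat_rate)
        reported_hits += report_hit
        winnings += stake if report_hit else -stake
    return reported_hits / n_trials, winnings
```

An honest subject (`cheat_rate=0.0`) hovers near 50 percent reported accuracy, while any nonzero cheat rate pushes reported accuracy and winnings up – exactly the statistical signature the investigators relied on to classify subjects.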

They got usable data from 35 subjects, each of whom completed 70 repetitions under the “opportunity” condition.  Fourteen were classified by the investigators as “honest” in the opportunity condition because they were right about 50 percent of the time.  Fourteen were classified as “dishonest” – they were “right” 69 percent of the time or more, which should have happened by chance fewer than one time in a thousand.  (The investigators do not accept their own cover story about paranormal ability!)  Seven, who were right “too much” but not enormously too much, were classified as ambiguous.
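The “fewer than one time in a thousand” figure can be checked with a back-of-envelope exact binomial calculation.  On my reading, being “right” on 69 percent or more of 70 fair coin flips means at least 49 reported wins, and the chance of a genuinely honest guesser doing that well is the upper tail of a Binomial(70, 0.5); the threshold arithmetic here is my reconstruction, not the paper's stated method.

```python
from math import ceil, comb

def binomial_upper_tail(n, k, p=0.5):
    """Exact P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# "Right" on 69% or more of 70 opportunity trials means at least
# ceil(0.69 * 70) = 49 reported wins out of 70 fair flips.
threshold = ceil(0.69 * 70)
p_tail = binomial_upper_tail(70, threshold)
```

The tail probability comes out well under one in a thousand, consistent with the investigators' classification cutoff.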

The investigators discuss two models of honesty – “will” and “grace” (thus, perhaps, betraying that at least one of them watched television during the 1998-2006 run of Will and Grace).  The “will” model assumes that honest people’s brains have to work harder to overcome the temptation to cheat.  The “grace” model assumes that honest people never even consider cheating, so it is the cheaters’ brains that have to work harder.

The investigators looked at reaction times and found that the honest group showed no significant differences between the opportunity and no-opportunity conditions or between its wins and losses.  The dishonest group took longer on its opportunity-condition “losses” than on its “wins” – that is, it took longer to decide to be honest.  It also took longer on its opportunity-condition losses than the honest group did.

The fMRI data showed that the dishonest group had greater activation in the dorsolateral prefrontal cortex on opportunity “wins” (when it could have cheated to win) than on no-opportunity wins.  It also showed more activation in the control network (anterior cingulate cortex, dorsolateral prefrontal cortex, and ventrolateral prefrontal cortex) on opportunity “losses” (when it decided not to cheat) than on no-opportunity losses.  The honest group showed no such differences.

So – at least in this kind of trial, honesty may be a matter of not thinking about cheating rather than of controlling an impulse to cheat: grace, not will.  The article has a much richer discussion of its findings than this brief summary, including a good account of the experiment’s limitations, and I recommend it.  It is particularly noteworthy among fMRI-based studies of deception in one way: it is based on “real” lies, situations where the research participant, on his or her own, decides to lie without being instructed to do so (and, in fact, has been at least implicitly instructed to tell the truth).  As such, it may tell us more about at least some kinds of lying than the usual experimental approaches do.