A Serious Game for Psychologists By Karl Mooney


How can we pimp up innovation and replication in psychological science?

We can come up with all sorts of explanations for the non-replicability of research findings that have nothing to do with researchers doing a bad job, but the fact remains that the report recently published in Science on the poor replicability of leading psychological research highlights some stubborn paradoxes in this academic discipline.

Another paper is cooler than a replication.

Knowingly publishing bad research is inexcusable, but it is also extremely rare. It is, however, all too easy to understand the temptation to publish a large quantity of applicable research data on subjects with strong public appeal rather than a small quantity of innovative, carefully replicated fundamental research: the current system rewards quantity with a glittering career, but hardly excellence and meticulousness. Gone are the days when you could get a permanent job or be made a professor by presenting one excellent, innovative research result per year to the scientific community. And why should psychologists be the goody-two-shoes, so much more virtuous than the average professional?

If we want researchers to produce reliable new findings and explanations, let us pimp up the image of the replicators, whom we now imagine as otherworldly idealists doing their good works in splendid isolation on fixed-term contracts. Obviously we psychologists have to do something, given that none of us wants to be consigned to this category, yet we do want replicable knowledge. But what can we do?

Science as a betting game

This is where the highly ingenious 'Weddenschapsmodel voor Wetenschapsbeoefening' comes in, the ‘Betting Model of Scientific Research’ devised in the 1980s by Wim Hofstee, a professor of psychology at the University of Groningen. According to this model, scientific research starts with two flesh-and-blood scientists disagreeing with one another in predicting an effect. For instance: Dr A claims that married women often seek contact with single men around the time of ovulation. Dr B disagrees. They bet on it. A joint research project is set up that is acceptable to both of them as a way to settle their bet, and they set to work on their experiment. This form of competition – or to put it differently, ‘a serious game’ – effortlessly solves two highly pressing problems, with no force or moral standpoints required.

First, trivial and ‘obvious’ topics are simply no longer studied, as no scientist cares or disagrees about them, disagreement being a necessary condition for betting. That neatly calls a halt to the proliferation of research studies demonstrating, for example, the influence of ambition on career development – until, of course, a psychologist comes up with a fresh new idea that makes her predict that (under some circumstances) ambition does not affect career development, and she finds a colleague willing to accept a bet about her new idea. Second, the more unexpected and spectacular the hypothesis, the more reliable the research design and results. Indeed, the vested interest of both ‘players’ guarantees their sharp alertness to immediately counter any suspect move by the ‘opponent’ that might bias the experimental design in favour of the latter’s own interpretation.

Winner takes all, and is rewarded in some Reputation currency. Those who win frequently will end up with the highest Reputation Index and rake in the rewards of ample research funding and, why not, a glittering career, a bevy of followers, and appearances on TV.
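Purely as an illustration of the winner-takes-all mechanic described above – the names, the starting balance, and the stake are all invented here, not taken from Hofstee's model – the settlement of a single bet might be sketched as:

```python
from dataclasses import dataclass

@dataclass
class Scientist:
    name: str
    reputation: int = 100  # starting balance of Reputation currency (arbitrary)

def settle_bet(proponent: Scientist, sceptic: Scientist,
               stake: int, effect_found: bool) -> Scientist:
    """Winner-takes-all settlement of a scientific bet.

    The proponent predicted the effect; the sceptic bet against it.
    Whoever called the jointly designed experiment correctly takes
    the whole stake from the loser's reputation account.
    """
    winner, loser = ((proponent, sceptic) if effect_found
                     else (sceptic, proponent))
    loser.reputation -= stake
    winner.reputation += stake
    return winner

# Dr A bets that the ovulation effect exists; Dr B disagrees.
dr_a = Scientist("Dr A")
dr_b = Scientist("Dr B")
winner = settle_bet(dr_a, dr_b, stake=20, effect_found=False)
print(winner.name, dr_a.reputation, dr_b.reputation)  # Dr B 80 120
```

Repeated over many bets, such a ledger would yield exactly the Reputation Index the paragraph above imagines: frequent winners accumulate currency, frequent losers run down their account.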

Hofstee, W.K.B. (1984). Methodological decision rules as research policies: a betting reconstruction of empirical research. Acta Psychologica, 56, 93–109.