Scientific fraud and normal science
Ruud Abma
Faculty of Social and Behavioral Sciences / Descartes Centre
Utrecht University
Science in Transition - Workshop Quality and Corruption, May 30th 2013
1. Scientific fraud
The standard definition of scientific fraud covers three practices: fabrication (inventing data outright), falsification (manipulating data to make the results look better) and plagiarism (theft of intellectual property). Implicitly, it is assumed that these actions are carried out intentionally; intention also marks the boundary between fraud and sloppiness.
Until recently, the incidence of scientific fraud was unknown. Hard evidence is still lacking, but we do know more: estimates range from 2% (Fanelli, 2009) to 10% (John et al., 2012) of researchers admitting to having falsified data themselves, and about 14% report having observed it in colleagues.
Fraud is usually not detected by reviewers or referees. Almost always it is co-workers who blow the whistle (Stroebe et al., 2012). Why? Because they are in close contact with the perpetrator and his or her daily routine, and they also have an overview of his or her research activities as a whole.
There are some common traits to fraud cases. One warning sign is productivity and/or results that are too good to be true (‘Marvellous! How does he do it?’). The medical researcher John Darsee, for instance, had more than 125 publications at 23, most of them fraudulent (see the table below).
In brackets: the number of fraudulent publications.
1. Yoshitaka Fujii, anaesthesiologist, Japan (172)
2. Dipak K. Das, heart surgeon, USA (145)
3. John Darsee, physician, USA, 1966-1981 (82 to 104)
4. Friedhelm Herrmann and Marion Brach, physicians, Germany, 1994-1997 (94)
5. Diederik Stapel, social psychologist, Netherlands, 1997-2011 (69)
6. Jan-Hendrik Schön, physicist, USA/Germany, 1997-2002 (27 to 35)
7. Scott Reuben, physician, USA, 1996-2009 (21)
8. Alirio Melendez, immunologist, Singapore (21)
9. Stephen Breuning, physician, USA, 1975-1988 (20)
10. Jon Sudbø, dentist and oncologist, Norway, 1993-2005 (15)
11. Roger Poisson, physician, Canada, 1977-1990 (14)
12. Luk van Parijs, biologist, Belgium/USA, 2000-2004 (11)
13. Eric Poehlman, physician, USA, 1992-2002 (10)
14. Marc D. Hauser, primatologist, USA, 1995-2010 (9)
The social psychologist Diederik Stapel had at least 69 fraudulent publications out of 143 publications in total. Many colleagues found his results ‘too good to be true’, but they probably concluded that he was simply a researcher with a knack for experimentation.
One other trait often found in fraud cases is an environment that boosts a young
researcher’s production, either by being demanding with little time for supervision or by
admiring the young researcher: ‘[…] a mentor dazzled by the fit of his junior’s findings to his
own hopes. A younger scientist of brilliance, charm, and plausibility […] taken on as a
protégé and perhaps becoming a close friend’ (Judson, 2004, p. 144). In Stapel’s case this was
his mentor and supervisor Wim Koomen, with whom Stapel published 26 papers (almost 1/6
of his total track record). Fraudulent scientists furthermore have in common that they possess an outstanding intellect, are skilled experimenters who are well informed about their research field, and are able to recognize the missing links that, once supplied, would represent a breakthrough.
Goodstein, in his book On fact and fraud (2010, pp. 3-5), names three risk factors: perpetrators usually (1) are under career pressure, (2) believe they know what the answer to their research question would be if they carried out their research properly, and (3) are working in a field where individual experiments are not expected to be precisely reproducible. A final observation is
that perpetrators of fraud have a habit of working alone, and that these solo adventures are
condoned by the institution they work in. Diederik Stapel fits all these criteria, as the
committee that investigated his scientific papers has demonstrated.
Obviously, scientific fraud is not just an automatic or mechanical outcome of career or publication pressure. Rather, it is the result of a personal history in which career motives gradually outweigh intellectual motives (presupposing that people initially choose a scientific career for intellectual reasons), in an environment where career trajectories are tied to specific markers, such as the number of publications and citations generated by a researcher. Individual characteristics and contingent factors then determine the means chosen to reach the favoured goals.
2. Normal science
Is there such a thing as normal science? Let us, for the moment, assume that there is. It is the
day-to-day activity of researchers and scholars who follow their intellectual curiosity in their
research, while at the same time keeping an eye on their track record and financial position.
The latter is a precondition for the former, but should not dominate it, according to official
academic ideology.
In recent publications by the KNAW (Royal Netherlands Academy of Arts and
Sciences), the diversity of scientific approaches is emphasized. Two examples: Towards a
framework for the quality assessment of social science research (KNAW, 2013) reports a
‘growing awareness that quality assessment procedures need to allow for differences between
and within fields’ (p. 11). And Carefulness and integrity in handling scientific research data
(KNAW, 2012) states that research cultures vary greatly between disciplines and that it is
doubtful whether one can give general guidelines to distinguish normal and questionable
research across disciplines. The boundary between normal research and sloppy science is the
object of discussion, negotiation and sometimes power play, as is amply demonstrated in the
report of the Levelt-committee and the subsequent responses by some social psychologists.
In their analysis of the relationship between fraud and normal science, Schuyt et al.
(KNAW, 2012) point to the impact of informal cultures and processes within scientific
communities – local, national and international. It is there that the informal quality
assessments of each other’s work are given, and the boards of scientific associations and
journals are appointed. These informal networks transmit and transform the various scientific
cultures, and also establish the norms and values that allow students and researchers to
distinguish questionable from normal research practices. Students are not just educated in a
technical or methodological sense, but also morally: they have to internalize the norms of
scientific integrity to the degree that they become second nature – ‘Integrity is doing the
right thing even when no one is watching’ (C.S. Lewis). Learning the official integrity codes
is of course useful, but the development of sound personal norms about good practices in
science is more important. Exemplary behavior of professors as role models is a powerful tool
here.
In this respect Stapel’s double role as a teacher in the course ‘Scientific integrity’ for
psychology students in Tilburg and as a living example of quick and dirty research practices
(‘Never mind the data, now concentrate on writing your research article…’) is significant: the
informal socialization in research practice is more influential than an integrity course. What
some (i.e. the Committee-Levelt) regarded as ‘sloppy science’ was ‘normal science’ in the eyes of Stapel and his students. Also, many co-authors, reviewers and editorial boards regarded these practices as normal and sometimes even encouraged them: ‘Not infrequently
reviews were strongly in favour of telling an interesting, elegant, concise and compelling
story, possibly at the expense of the necessary scientific diligence. It is clear that the priorities
were wrongly placed.’ (Committee-Levelt, 2012, p. 53).
Stapel himself presented the evidence early on. In his acceptance speech for the Jos Jaspars Award for young social psychologists (July 1999), he disclosed his way of working in an explicit defence of confirmation bias – exactly the type of sloppy science later criticized by the Levelt-committee – and got away with it: ‘We design an experiment and go
to our lab to test our conjectures. And then what happens? Our experiment fails. We don’t
find what we expected to find. Moreover, we find something we cannot explain. We tweak
and fine-tune the experimental set-up until we find something we do comprehend, something
that Works, something with a P-value smaller than .05. Champagne! Celebration! […] The
leeway, the freedom we have in the design of our experiments is so enormous that when an
experiment doesn’t give us what we are looking for, we blame the experiment, not our theory.
(At least, that is the way I work). Is this problematic? No. […] we find what we are looking
for because we design our experiments in such a way that we are likely to find what we are
looking for. Of course!’ (Stapel, 2000, pp. 6-7).
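The statistical consequence of this way of working can be made concrete with a small simulation. The sketch below is not from the original text; it is a hypothetical illustration in Python (using numpy and scipy, with invented parameters such as ten attempts per study) of an experimenter who, whenever the data fail to reach p < .05, keeps adjusting the study – modelled here crudely as simply drawing fresh samples – even though no real effect exists.

# Hypothetical sketch (not part of the original paper): how 'tweaking until
# p < .05' inflates the false-positive rate when there is no true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def run_study(max_attempts=10, n=30):
    """One 'study': keep drawing fresh null data (a crude stand-in for
    tweaking and fine-tuning the set-up) until p < .05 or attempts run out."""
    for _ in range(max_attempts):
        control = rng.normal(0.0, 1.0, n)    # no real effect:
        treatment = rng.normal(0.0, 1.0, n)  # both groups share one distribution
        _, p = stats.ttest_ind(control, treatment)
        if p < .05:
            return True                      # 'Champagne! Celebration!'
    return False

n_studies = 1000
hits = sum(run_study() for _ in range(n_studies))
print(f"Spurious 'findings': {hits / n_studies:.0%}")  # roughly 40%, not the nominal 5%

The exact percentage depends on the assumed number of attempts; the point is only that the enormous leeway Stapel describes translates directly into a high rate of false findings, which is why the Levelt-committee regarded this kind of confirmation bias as problematic rather than as harmless routine.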
3. Remedies
Instances of scientific fraud usually give rise to two types of reactions. The first has to do with
socialization: (PhD) students should be taught about scientific integrity and the formal codes
of conduct. The second is to raise the level of accountability by pressing researchers to be more transparent about their data, research procedures, et cetera, with the background threat of
unannounced visits by ‘audit teams’. Exerting more control, even if effective, might do more
damage than good, by creating a climate of distrust. Moreover, new forms of control always
give rise to new forms of evasion.
When presenting the report on Carefulness and integrity, Kees Schuyt suggested a
different approach when he said: ‘In the Netherlands researchers are busy publishing to a
degree that they do not find the time to critically review and seriously comment on each
other’s research papers.’ His remedy: ‘Write less, read more’ (NRC Handelsblad, 22 September 2012). There is another argument for
that: there are too many papers that are submitted and eventually get published. Many of them
do not serve the aims of science, but solely researchers’ need to survive (Miedema, NRC Handelsblad, 29 September 2012). Many published papers, tens of thousands every year, are ignored, not cited, and not even read. Such papers are not part of living science; on the contrary, they clog the arteries of scientific communication and create both diffusion and confusion.
As early as 1973, the psychologist Andrew Barclay, when reviewing the expansive
publication practices in his field, wrote: ‘This mass of information has been termed the
“knowledge explosion”, but it is more like an explosion in a confetti factory; everything gets
covered with little bits of paper, but they hardly matter in the long run; they are not even
whole sheets of paper. The situation’s outcome was obvious: Everyone became interested in
smaller and smaller bits of behavior.’
In brief, both normal science and prevention of fraudulent conduct might benefit from
a change in the publication culture. Right now, a business model is prevalent that in itself is at
odds with the aims of scientific work (which is about content and substance) (see also Radder, 2010). This model urges researchers to publish as much as they can. Long publication lists are
seen as a sign of productivity, which in turn is considered a measure of quality. In recent,
more sophisticated assessments it is not the number of publications that counts but the number
of citations (but that still is not equivalent to quality – it only refers to impact). The scientific
research paper has become a form of currency (literally: it can generate money for new
research), and a currency reform is in order (see also Mummendey, 2012).
This reward system, which is based on external criteria, promotes calculating
behavior, including the ‘cutting of corners in the rush to publish’ (Hull, 1998). What remedies
do we have to prevent sloppy science, the breeding ground for fraudulent behaviour? First of
all, scientific authorities should prescribe not only a minimum but also a maximum number of
publications. It would make researchers more selective in their publication practices and
reopen the clogged communication channels of science. Secondly, it is necessary to revive the
critical function of science in the scientific domain itself, and to create an atmosphere wherein
scientists can regain their own norms and values, instead of living up to the business model
that has the universities and research institutions in its grip. Both science and the scientists
would benefit from that.
Three quotes, which also can be seen as starting points for discussion.
1. Scientific progress on a broad front results from the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown.
- Vannevar Bush, Science – The Endless Frontier. A Report to the President on a Program for Post-war Scientific Research, July 1945.
2. Melodramatic as allegations of fraud can be, most scientists would agree that the major problem in
science is sloppiness. In the rush to publish, too many corners are cut too often.
- David L. Hull, Scientists behaving badly, The New York Review of Books, 3 December 1998.
3. Integrity is doing the right thing even when no one is watching.
- C.S. Lewis, The inner ring, 1962.
References
Barclay, A.M. (1973) Death and rebirth in psychology, Contemporary Psychology, 333-334.
Committee-Levelt (2012) Flawed science. The fraudulent research practices of social psychologist Diederik
Stapel. Tilburg: University of Tilburg.
Fanelli, D. (2009) How many scientists fabricate and falsify research? A systematic review and meta-analysis of
survey data, PLoS ONE, 4(5), e5738.
Goodstein, D. (2010) On fact and fraud. Cautionary tales from the front lines of science. Princeton: Princeton
University Press.
John, L.K., G. Loewenstein & D. Prelec (2012) Measuring the prevalence of questionable research practices with
incentives for truth-telling, Psychological Science, 23, 524-532.
Judson, H.F. (2004) The great betrayal. Fraud in science. Orlando: Harcourt.
KNAW (2012) Zorgvuldig en integer omgaan met wetenschappelijke onderzoeksgegevens. Advies van de
KNAW-commissie Onderzoeksgegevens. Amsterdam: KNAW. [Carefulness and integrity in handling scientific research data]
KNAW (2013) Towards a framework for the quality assessment of social science research. Amsterdam:
KNAW.
Mummendey, A. (2012) Scientific misconduct in social psychology: Towards a currency reform in science,
European Bulletin of Social Psychology, 24 (1), 4-7.
Radder, H. (Ed.) (2010) The commodification of academic research. Science and the modern university.
Pittsburgh: Pittsburgh University Press.
Stapel, D.A. (2000). Moving from fads and fashions to integration: Illustrations from knowledge accessibility
research. European Bulletin of Social Psychology, 12, 4-27.
Stroebe, W., T. Postmes & R. Spears (2012) Scientific misconduct and the myth of self-correction in science,
Perspectives on Psychological Science, 7, 670-688.