How to Do Research
And How to Be a Researcher
Robert Stewart
Professor of Psychiatric Epidemiology & Clinical Informatics,
King’s College London
Great Clarendon Street, Oxford, OX2 6DP,
United Kingdom
Oxford University Press is a department of the University of Oxford.
It furthers the University’s objective of excellence in research, scholarship,
and education by publishing worldwide. Oxford is a registered trade mark of
Oxford University Press in the UK and in certain other countries
© Robert Stewart 2022
The moral rights of the author have been asserted
Impression: 1
All rights reserved. No part of this publication may be reproduced, stored in
a retrieval system, or transmitted, in any form or by any means, without the
prior permission in writing of Oxford University Press, or as expressly permitted
by law, by licence or under terms agreed with the appropriate reprographics
rights organization. Enquiries concerning reproduction outside the scope of the
above should be sent to the Rights Department, Oxford University Press, at the
address above
You must not circulate this work in any other form
and you must impose this same condition on any acquirer
Published in the United States of America by Oxford University Press
198 Madison Avenue, New York, NY 10016, United States of America
British Library Cataloguing in Publication Data
Data available
Library of Congress Control Number: 2022943702
ISBN 978–0–19–286865–7
DOI: 10.1093/oso/9780192868657.001.0001
Printed and bound by
CPI Group (UK) Ltd, Croydon, CR0 4YY
Links to third party websites are provided by Oxford in good faith and
for information only. Oxford disclaims any responsibility for the materials
contained in any third-party website referenced in this work.
With thanks to my academic line managers over the years
(Anthony, Martin, Graham, Matthew) and to everyone I have worked
with and supervised, for teaching me most of what I know.
Image Credit Lines
1.1. Autotype of John Snow (1856; published 1887). Thomas Jones Barker,
Public domain, via Wikimedia Commons. 8
1.2. From Snow J., On the Mode of Communication of Cholera, CF Cheffins
(Lith.), Southampton Buildings, London (1854). Public domain via
Wikimedia Commons. https://en.wikipedia.org/wiki/File:Snow-cholera-
map-1.jpg 9
1.3. The Silent Highwayman (1858); Original: Cartoon from Punch Magazine
35: 137; 10 July 1858 This copy: City and Water Blog. Public Domain,
https://commons.wikimedia.org/w/index.php?curid=4465060 11
2.1. Source: Esposito, J. L. (ed.) (1999). The Oxford history of Islam. Oxford University
Press, Oxford. Al-Khwarizmi, Public domain, via Wikimedia Commons. 19
2.2. Statue of Zu Chongzhi. 中文(简体): 祖冲之铜像。(2016). Attribution:
三猎, CC BY-SA 4.0. https://creativecommons.org/licenses/by-sa/4.0, via
Wikimedia Commons. 20
2.3. Mort de la philosophe Hypatie. From Figuier, L. (1872). Vies des savants
illustres, depuis l’antiquité jusqu’au dix-neuvième siècle. Hachette, Paris.
Public domain via Wikimedia Commons. https://commons.wikimedia.
org/wiki/File:Mort_de_la_philosophe_Hypatie.jpg 21
3.1. Contemporary photograph of James Frazer, 1933. http://www.
britannica.com/EBchecked/art/10799/Sir-James-George-Frazer-
1933#tab=active~checked%2Citems~checked, Public Domain, https://
commons.wikimedia.org/w/index.php?curid=4250992 25
3.2. Gustave Doré. (1892). Canto XXXI in H. Francis (ed), The Divine
Comedy by Dante, Illustrated, Complete. Cassell & Company,
London/Paris/Melbourne. Public domain via Wikimedia Commons.
https://upload.wikimedia.org/wikipedia/commons/d/d2/Paradiso_
Canto_31.jpg 28
3.3. Dmitrij Mendeleyev’s Periodic Table (1869). Original uploader: Den
fjättrade ankan at Swedish Wikipedia, Public domain, via Wikimedia
Commons. 34
4.1. Francis Bacon by Paul van Somer I (1617). pl.pinterest.com, Public
Domain, https://commons.wikimedia.org/w/index.php?curid=19958108 37
4.2. Maria Sibylla Merian by Jacobus Houbraken / Georg Gsell (c. 1700).
Public domain via Wikimedia Commons. https://upload.wikimedia.org/
wikipedia/commons/b/b3/Merian_Portrait.jpg 40
4.3. Robert Hooke. Schem. XXXV – Of a Louse. Diagram of a louse. In
Hooke, R. (1665). Micrographia or some physiological descriptions of
minute bodies made by magnifying glasses with observations and inquiries
thereupon. National Library of Wales. Public domain via Wikimedia
Commons. https://upload.wikimedia.org/wikipedia/commons/1/10/
Louse_diagram%2C_Micrographia%2C_Robert_Hooke%2C_1667.jpg 42
4.4. Mary Anning. Credited to ‘Mr. Grey’ in Tickell, C. (1996). Mary Anning
of Lyme Regis. Public domain, via Wikimedia Commons https://upload.
wikimedia.org/wikipedia/commons/e/e7/Mary_Anning_painting.jpg 46
5.1. The inoculation of James Phipps by Edward Jenner. Lithograph by Gaston
Mélingue (c. 1894). Public domain, via Wikimedia Commons. 49
5.2. Photograph of Karl Popper (c. 1980). LSE Library/imagelibrary/5. 51
5.3. Author’s own work 54
5.4. Madame Du Châtelet at her desk (detail) by Maurice Quentin de La Tour
(eighteenth century). Public domain, via Wikimedia Commons. 57
6.1. Edward Gibbon, by John Hall, after Sir Joshua Reynolds (1780). Public
domain, via Wikimedia Commons. 62
6.2. Photograph of Sofja Wassiljewna Kowalewskaja (unknown; 1888). Public
domain, via Wikimedia Commons. 67
6.3. Sketch labelled frater Occham iste (1341). In Ockham Summa Logicae, MS
Gonville and Caius College, Cambridge. Public domain, via Wikimedia
Commons. 70
7.1. Location map of the Large Hadron Collider and Super Proton Synchrotron.
OpenStreetMap contributors, CC BY-SA 2.0 <https://creativecommons.
org/licenses/by-sa/2.0>, via Wikimedia Commons. 77
7.2. Photograph of Sir Austin Bradford Hill, unknown date, unknown author.
CC BY 4.0 <https://creativecommons.org/licenses/by/4.0>, via Wikimedia
Commons. https://upload.wikimedia.org/wikipedia/commons/e/eb/
Austin_Bradford_Hill.jpg. 79
7.3. Data obtained from the Vostok ice column, created by Gnuplot 20 June
2010. Vostok-ice-core-petit.png: NOAA; derivative work: Autopilot, CC
BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=579656. 86
8.1. Portrait of Dr Richard Price by Benjamin West (1784). Public domain,
via Wikimedia Commons. https://upload.wikimedia.org/wikipedia/
commons/1/1d/Richard_Price_West.jpg 89
8.2. Photograph of John Maynard Keynes greeting Harry Dexter White.
International Monetary Fund (1946). Public domain, via Wikimedia
Commons. https://upload.wikimedia.org/wikipedia/commons/0/04/
WhiteandKeynes.jpg 91
8.3. Photograph of Svante Arrhenius. Photogravure Meisenbach Riffarth &
Co., Leipzig (1909). Public domain, via Wikimedia Commons. https://
upload.wikimedia.org/wikipedia/commons/6/6c/Arrhenius2.jpg 95
9.1. Illustration by Hablot Knight Browne, from Dickens, C. (1848). Dombey
and Son. Public domain, via Wikimedia Commons. https://upload.
wikimedia.org/wikipedia/commons/f/f9/P_595--dombey_and_son.jpg 104
9.2. Drawing from an Egyptian relief from Volz, P. (1914). Die biblischen
Altertümer, p. 514. copied from de:Hethitischer Streitwagen.jpg. Public
domain, via Wikimedia Commons 109
For better or worse, our history has been shaped by our ability to develop
and share knowledge. This is not simply a matter of learning about the
world around us—animals can do that perfectly well. Where we depart from
other species is in the communication of that learning: we no longer have to
repeat the same experiments and make the same mistakes all over again; the
many can benefit from the work of a few. Fire, the wheel, and bread-making
allowed early civilizations to grow because of our ability to pass on (or steal)
knowledge and skills. Later, the development of steam engines and telescopes
depended on earlier discoveries from working with iron and glass, and the
rediscovery of America by Europeans was the result of centuries of progress
in ship building and navigation. In turn, the way we view that rediscovery of
America and what happened afterwards is shaped by historians’ research and
the theories or frameworks applied to the evidence from source materials.
The accumulation of knowledge is more than just a series of inventions—it’s
about understanding how the world works. Knowing about micro-organisms
allows us to understand diseases, knowing about atmospheric pressure allows
us to predict the weather (to some extent), knowing about atomic struc-
tures allows us to develop nuclear power, and knowing about how wars have
started in the past ought to help us prevent them in future. Most of what we
take for granted in our daily lives has come about because people have made
observations or deductions, developed ideas, and passed these on to others.
The trouble is that any old notion can be communicated and shared—I
could spend an afternoon on the Internet wading through wild conspiracy
theories—so how do we sift the truth from the nonsense?1 This book puts
the principles of research alongside some of the diverse people who have
shaped them, and goes on to cover the more mundane, day-to-day compo-
nents of a research career—writing, networking, looking for funding, dealing
with institutions and their vagaries. Along the way, it also considers how
research, and therefore what we know or take for granted, is directed by
1 And incidentally, why are conspiracy theories so attractive in the first place? And why is it a good
thing for a scientist to be sceptical about what we think we know, but a bad thing for the general public to
be sceptical about scientists?
external forces—by money, politics, influence, and the sheer eccentricity and
bloody-mindedness of those involved. There is a tendency for academics2 to
portray the quest for knowledge as a high and noble endeavour, and perhaps
it’s useful to hold this up as an aspiration in our better moments. However,
in reality a lot of advances come from accidental findings (e.g. Alexander
Fleming leaving his petri dish lying around when he went on holiday and
discovering penicillin) or else they take a long time to be recognized (e.g. the
forty-plus years it took for the Royal Navy to supply sailors with citrus juice
after it was proposed as a treatment for scurvy by James Lind in 1753). If we
don’t understand the messiness, we will have a very poor picture of the way
we move forward.
A mission statement
I do hope that this book is easy to read. There’s actually nothing particularly
complicated about the principles underlying research; if it does get compli-
cated, this tends to be because of the more specific challenges of the discipline
in which you work.3 If so, you may need to supplement this with something
more specialized—there’s plenty of material around. What I intend to cover
here are the considerations that are common (more or less) to any field of
research. Whether you’re thinking seriously about research as a career or just
dipping your toe in the water through a student dissertation, I hope that you’ll
be able to see how your project fits into the general scheme of things. If you’re
already well into a research career, I hope this book provides some lighter
reading than the turgid academic output you’re probably wading through at
the moment, and that you can sit back a bit on a sunny afternoon and remind
yourself why you chose to belong to this slightly strange world in the first
place.
I also hope that this book makes sense to you regardless of your field of
interest; perhaps this is over-optimistic but I’ve tried my best. Research prin-
ciples and techniques are described most often in relation to the sciences,
and I’m bound to be biased in that direction from my own professional
2 I’m going to use ‘academic’ and ‘research’ quite loosely and interchangeably in this book, although I’m
aware that there are differences—not everything described as academic is or has been focused on research,
and not all research is carried out by people who would describe themselves as academics. Apologies if you
find this a problem; however, neither research nor what would be called ‘academia’ today have retained
very much connection with the mythical Academus or Plato’s Academy founded on his land in the fourth
century bce (see Chapter 15), so I think this allows some freedom with the term.
3 And, even then, the complexities are usually around the logistics of measuring something (getting
laboratory conditions correct) or having the right equipment. The objective of the study itself is often a
much more straightforward matter.
I’m going to begin with the story of John Snow and the 1854 cholera epi-
demic. I feel an obligation, as he’s the ‘father of epidemiology’, my specialty,
and his example frames this discipline. In my early days as a researcher, I
used to have to explain to my family and friends what epidemiology was;
after March 2020 this became less of an issue and it’s amazing how many
people view themselves as epidemiologists nowadays. The story of John
Snow and cholera is an ‘old chestnut’ that is wheeled out repeatedly to
describe ‘the scientific method’, helped no doubt by the whiff of nineteenth-
century social reform and an age of impressive facial hair (on men, at least).
And yet what’s actually most interesting is what’s usually kept out. Yes,
at the start of the nineteenth century, cholera was believed to be a dis-
ease spread through bad air. And yes, by the end of the century, everyone
accepted that it was spread through contaminated water. But how did this
change in knowledge actually occur? John Snow is now widely credited as
the person responsible, as someone whose research most obviously tipped the
balance, but was he really so successful (and was he really the model, dispas-
sionate observer)? The change in opinion about cholera only really happened
after he had died, which is so often the way with pioneers, and it’s worth trying
to understand why.
London in the late summer of 1854 plants us firmly in the mid-Victorian
era. Britain was nearly a year into its Crimean War—the Light Brigade
would charge pointlessly but poetically into the valley of death that Octo-
ber. In Parliament, Palmerston as Home Secretary was passing reformist
legislation to reduce child labour in factories and making vaccination com-
pulsory, which would all sound very worthy if he wasn’t at the same time
fighting hard to prevent voting by the urban working class. Charles Dick-
ens was at the height of his creative powers, having finished Bleak House
the previous year and serialized Hard Times in the intervening period. Up
in Yorkshire, Charlotte Brontë, recently married, had outlived her siblings,
although had only seven months left herself before she died of pregnancy
complications. Queen Victoria, on the other hand, was between her eighth
and ninth childbirth, her oldest son Albert Edward (the future Edward VII) was
approaching his thirteenth birthday, and the magnificent Crystal Palace,
which had accommodated her husband’s labour of love, the Great Exhibi-
tion, was in the process of being moved from Hyde Park to its new home in
Sydenham.
Only a quarter of a mile to the east of Hyde Park, Soho was not a good place
to be. Urban migration was in full flow as more and more people arrived to
scrape out a living in the Big Smoke. London had been the world’s largest
city since the 1830s, the first since ancient Rome to top a million inhabi-
tants. Its population grew threefold between 1815 and 1860, and the 1851
census showed that nearly forty per cent of its residents had been born some-
where else. Soho’s inhabitants lived in cramped unsanitary accommodation
and there was no sewage system to speak of. The only way to dispose of
human waste was in a pit beneath the cellar floorboards. These cesspools were
rarely adequate and frequently overflowed—often into the wells that had been
dug to provide public water supplies. Those households who noticed their
cesspool filling faster than it was emptying might pay to have their waste col-
lected and dumped in the River Thames, although this was just another place
from which drinking water was sourced. The devastation inflicted by infec-
tious diseases is entirely understandable now—so why did it take so long to
work out?
King Cholera
Oddly, cholera tends not to be the first disease that comes to mind when
we think about the nineteenth century, and yet it was probably the most
devastating. Perhaps this is down to the art and literature of the time. ‘Con-
sumption’ (tuberculosis) is more suited to the plots of novels and operas, with
its slow decline, the accentuation of an already fashionable pallor, and the
visual image of fresh blood on a white silk handkerchief. Even today, all an
actor in period costume has to do is cough and you know what’s going to
happen next. Perhaps a disease has to fit in with its times to be remembered.
Bubonic plague certainly chimes harmoniously with our impressions of the
mediaeval world: all those grinning skeletons in paintings and sculptures.
And the ravages of syphilis in the eighteenth-century illustrations by Hogarth
and others were helpful for portraying immorality and decadence. Cholera,
as a messy and squalid disease, would probably not have put off an early nov-
elist like Henry Fielding, or an illustrator like James Gillray, but fashions had
changed by the time it arrived in Europe. It is ‘bad air’ (as well as typhus
and consumption) which causes the deaths we are meant to care about in
Jane Eyre, poor nutrition (and consumption) in Nicholas Nickleby, and small-
pox (and consumption) in Bleak House. In Victorian times people died from
cholera very rapidly, providing little opportunity for noble speeches between
the violent bouts of diarrhoea and vomiting. So, a disease that killed in the
millions barely receives a mention in what has been passed down to us, and
we can easily forget how much it must have shaped that century.
The cholera outbreak in Soho alone in the late summer of 1854 killed 127
residents in its first three days and over 600 by the time it subsided. It was
actually just part of what is now known as the ‘third cholera pandemic’—a
pandemic, as we now know only too well, describes an infection that is conti-
nental rather than national in scale. The cholera pandemics of the nineteenth
century were devastating waves that swept across the world in the same way
that plague had done earlier, and influenza would do later. Cholera (named
after the Greek word for ‘bile’ for obvious reasons) had probably been present
in the Indian subcontinent since ancient times, only spreading more widely
as trade routes expanded. The first pandemic in 1816–1826 mainly affected
India and east Asia and didn’t extend west of the Caspian Sea; however, the
second struck Europe with a vengeance, killing six thousand Londoners, for
example, in 1832 and a further fourteen thousand in 1849. Russia was the
country worst hit by the third pandemic, with at least a million deaths, but
the effect was widespread and over ten thousand lives were lost in London
alone during 1853–1854. Sitting in the twenty-first century city, it’s hard to
take in the magnitude of the carnage or to imagine the disease’s impact, par-
ticularly considering the smaller population at the time. Would our ancestors
have felt the same way as we would now about something that wiped out over
one in ten people in some areas within the space of a few months? Or was the
city so used to successions of diseases and disasters over the centuries, and all
the other hardships of nineteenth-century life, that it just gritted its teeth and
carried on? After the 1832 outbreak, the disease was known as ‘King Cholera’,
which suggests a mixture of respect and black humour not very different from
London attitudes today.
The problem with scientific breakthroughs is that, once they happen and are
accepted, it becomes hard to imagine the old worldview. Why on earth would
the intelligent folk of the 1850s not consider that perhaps it might be a good
idea to keep raw sewage out of the drinking water? However, we shouldn’t
be too hard on our ancestors—we don’t have to travel back too far ourselves
to a time when the health benefits of cigarettes were being widely touted in
advertising, or when the smoke from factories was felt to be a useful defence
against disease, or when people were hand-painting watch hands to display
the exciting new substance, radium, that glowed in the dark. It’s important to
realize that the ‘old view’ is often held for entirely logical and understandable
reasons; most ideas are. What it takes first of all is someone with vision who
can think outside the prevailing viewpoint and consider it objectively. Next,
they need to compare the evidence for competing viewpoints and, crucially,
find some way of clinching the argument. Finally, they have to get people to
listen to them. These three steps are at the heart of research.
‘Miasma’ was the predominant explanation for infectious disease in the
1850s and had been popular since ancient times. The word itself is simply the
Greek for pollution, having been written about by Hippocrates way back in
the fourth century bce, and the belief was that diseases like cholera, bubonic
plague, and even chlamydia were spread by ‘bad air’.⁶ Indeed, prominent
and influential supporters of miasma theory included Florence Nightingale
(who used it as an argument for freshening up hospitals) and William Farr
(the founding father of modern statistics who was instrumental in shaping
the national census and recording of causes of death). Miasma as an idea
⁶ And it wasn’t just diseases—there were theories around that you could put on weight by smelling food.
John Snow (Figure 1.1) was the son of a labourer, born in a poor area of
York that was prone to flooding. Having worked as a surgeon’s apprentice
(and having witnessed the devastating impact of the 1833 cholera outbreak
in Newcastle), he graduated from the University of London and became a
member of the Royal College of Physicians in 1850 at the age of 37. He was
a true nineteenth-century polymath. As well as his work on cholera, he pio-
neered the use of anaesthetics and personally administered chloroform to
Queen Victoria for the births of her final two children. He was also one
of the first people to propose that rickets might be caused by diet rather
than general poverty. Snow was already sceptical of miasma theory by 1854
and was considering contaminated water as an alternative explanation—for
example, noticing that the usually healthy and clean-aired town of Dumfries
was badly affected by cholera in 1832 and that it might not be a coinci-
dence that its water supply from the River Nith was notoriously contaminated
with sewage. He had also compared infection rates between areas of Lon-
don drawing their water from different parts of the Thames (i.e. upstream
or downstream of the worst pollution) during the 1848 outbreak. So, when
cholera broke out again in Soho, he must have been determined to clinch the
argument.
His approach was a simple one—he just took a street map and, with the
help of a local curate, drew dots on it where cases of cholera had occurred
(Figure 1.2). These clearly showed a cluster around a particular water pump
on Broad Street, roughly in the centre of Soho.⁷ Some cholera cases were
closer to a neighbouring water pump, but it turned out that these people
used the Broad Street pump because they preferred the water or were chil-
dren who went to school there. None of the workers in the nearby brewery
⁷ Now Broadwick Street—much more up-market these days, and a haunt of well-heeled media types.
Fig. 1.1 John Snow in his early 40s, not long after he mapped out the 1854 cholera out-
break in Soho. Described as frugal and a health enthusiast (an early adopter of a vegan
diet for a while), he dressed plainly and remained a bachelor. His idea that cholera was
waterborne was not at all popular at the time and, unfortunately, he did not live to see
his proposals vindicated, dying from a stroke around two years after this picture was
taken.
Autotype of John Snow (1856; published 1887). Thomas Jones Barker, Public domain, via Wikimedia
Commons.
contracted cholera, but these were given an allowance of beer every day
(which is made with boiled water) and apparently didn’t drink anything else. On
7 September, Snow met with local officials, removed the handle of the pump,
and the cholera epidemic subsided. Sometime later it was found that the pub-
lic well had been dug far too close to an old cesspit which had probably been
contaminated with nappies from a baby who had picked up cholera from
somewhere else.
Fig. 1.2 John Snow’s map of cases of cholera during the 1854 outbreak in Soho. Broad
Street is in the centre and the offending pump is in the centre of Broad Street. John
Snow kept on trying to persuade his medical peers that this looked like a disease that
was spread in the water rather than the air, but still without success.
From Snow J., On the Mode of Communication of Cholera, CF Cheffins (Lith.), Southampton Buildings,
London (1854). Public domain via Wikimedia Commons. https://en.wikipedia.org/wiki/File:Snow-
cholera-map-1.jpg
A disease spread through the air would be expected to show quite a diffuse pattern of cases radiating out from an area or following the
prevailing wind direction (as seen, for example, in the aftermath of the 1986
Chernobyl radiation leak). Diseases transmitted by water should instead clus-
ter around the offending water sources, so a simple mapping exercise ought
to be able to distinguish the two, as it did for Snow. Finally, there is the actual
experiment—in this case, the removal of the pump handle—taking an action
and observing the consequences (the epidemic subsiding).
So, there are important principles here: of observation that is careful and
methodical, objective, and unbiased, and of putting ideas to the test. This ‘sci-
entific method’ is not just confined to medicine or even to physical science
more broadly; it is fundamental to all worthwhile research. A good scholar
in any field will collect evidence and consider objectively whether it supports
a prevailing theory or suggests a new one. For example, a historian might
be weighing up whether Elizabeth I really planned never to marry or was
forced into that position; an English scholar might be considering how much
Mary Shelley’s work was influenced by recent experiments in electricity; an
economist might be investigating whether state intervention is a good thing
or not at a time of recession; a sociologist might evaluate whether living in
a tight-knit community is supportive or stifling. These processes of assem-
bling and testing evidence are essentially how we have accumulated shared
knowledge as a species. But is this really the way things work? For example,
why can it take so long for knowledge to be adopted, why are ‘experts’ often
viewed with suspicion, and why does it seem that some controversies are
never resolved?
Even John Snow’s story is not a clear-cut example. The Broad Street pump
handle was replaced as soon as the cholera epidemic had resolved, there was
no attempt to correct the underlying contamination, and it was 12 years (and
several more outbreaks) before William Farr (the ‘father of modern statistics’
mentioned earlier and one of Snow’s chief opponents) accepted that water
was the problem and, having collected some supporting evidence of his own,
recommended that water be boiled. In the meantime, it was the ‘great stink’ of
the summer of 1858 (Figure 1.3)—an unpleasant combined effect of sewage,
industrial pollution, and hot weather on the river Thames—which led to the
construction of London’s still-monumental sewage system, and even this was
mainly done to remove the smell rather than improve the cleanliness of the
wells or river water. So, does that mean that it was miasma theory making
Fig. 1.3 An 1858 lithograph on the state of the River Thames in London. The ‘Great
Stink’ occurred in the summer of that year when extended high temperatures caused
a fall in the water level, exposing the raw sewage on the riverbanks. Queen Victoria
and Prince Albert were forced to curtail a pleasure cruise and Members of Parliament
debated whether they would have to conduct business elsewhere. All of this resulted
in a massive sewer-building project, sorting out the health effects of water pollution
by applying miasma (‘bad air’) theory. John Snow’s alternative waterborne theory for
cholera transmission wouldn’t become accepted until 10–20 years later.
The Silent Highwayman (1858); Original: Cartoon from Punch Magazine 35: 137; 10 July 1858 This copy:
City and Water Blog. Public Domain, https://commons.wikimedia.org/w/index.php?curid=4465060
the difference in the end, even though it was incorrect? And was Snow really
the unbiased, objective observer if he was already writing articles suggesting
water-borne transmission? Either way, he didn’t live to see the public health
improvements he had argued for, dying himself in 1858 at the age of 45 just
a month before the ‘stink’. He is buried in Brompton Cemetery, and the John
Snow pub and replica water pump can still be found on Soho’s Broadwick
Street, close to where the original outbreak occurred.
It would be nice to think that this worthy piece of research really made the
difference, but you do wonder whether it was only honoured retrospectively
once others had come around to the idea via other paths. How much was
Snow’s research ignored at the time because prim Victorian society wasn’t
prepared to discuss faecal contamination of drinking water? How much was
1 Sorry, close to home, that one, but then childhood vandalism is perhaps easier to deal with if you
re-frame it as the beginnings of a potential research career.
Taking this a step further with a more specific example, cognitive behavioural
therapy (CBT), the most common talking treatment for depression, is essen-
tially based on self-examination and adopting a researcher’s perspective.
The idea behind the ‘cognitive’ part of CBT is that a lot of our anxiety
and depression stems from ‘negative automatic thoughts’—false assumptions
about ourselves and our experiences. So, if you’re at a low ebb when your boss
looks irritable one morning, it’s easy to slip into assuming you’ve done some-
thing to make them feel that way. But, if you force yourself to think about
it objectively, there may be all sorts of other explanations. So, CBT places a
strong emphasis on keeping diaries, recording thoughts like this, and weigh-
ing up the evidence for any underlying assumptions. Working with a good
therapist (it’s hard to be objective on your own), you might remember that
your boss did mention having a bad weekend, so perhaps that’s a more likely
reason for their irritability. And you might remember that you were praised
for your work the previous week (and it’s easy to forget praise when you’re
feeling low). Or perhaps you didn’t receive praise but everyone else in the
office thinks the boss is grumpy and there’s nothing unique about your own
situation. Whatever the alternative explanation, the more you work on these
thoughts, the quicker you’re able to catch them when they occur next time
and the less likely they are to bring you down. But really the process is rather
like becoming the objective researcher looking in on yourself, making a care-
ful observation (the incident, the thoughts you were having at the time, and
the feelings that accompanied them), identifying the different possible con-
clusions, and weighing up the evidence for each. The process isn’t necessarily
easy (it’s very hard to be unbiased) but then, neither is research.
Of course, this is all very well, but are we really talking about ‘research’ here?
Well, perhaps. The Oxford English Dictionary defines research as the careful
study of a subject . . . in order to discover new facts or information about it. The
‘careful study of a subject’ part of that definition does tend to imply something
broader than a child learning what a light switch does, although it doesn’t
absolutely exclude it. We like to think of research as seeking universal truths
that, in time, will become accepted and contribute to new knowledge and a
change in the way people view the world. However, we don’t have to look very
far to find examples where that sort of widely held robust evidence comes
up against personally held truths and it all gets rather messy. We’ll revisit
this later (Chapters 16, 17), but it’s important to begin by acknowledging that
the definition of research is actually rather vague and that what we now call
research is a process that has emerged gradually and almost imperceptibly.
At some (also rather hard to define) point, it began to become aware of itself
and its need for an identity, and it wouldn’t be unreasonable to say that it’s
still finding its feet, whatever the ‘experts’ say.
To begin with, experimentation and discovery aren’t even exclusively
human activities. Bees fly out to explore the world around them, discover the
best flowers, and return to dance in the hive and communicate their findings
to the rest of the workers. Thrushes find a convenient rock to break open
snails and octopuses collect coconut shells to use as shields. Chimpanzees
make sticks into weapons and moss into sponges—again, not research as we
would normally define it, but you can’t really watch animals working on a
problem and not think of it as careful study to discover new facts. All that
we can say is that at some point in the last 300,000 years or so, Homo sapi-
ens became better at research than other species, or at least better at finding
ways to pass on knowledge and technology so that the next generation could
advance without waiting for genetic evolution to happen.
Of course, the origins of human research are lost in prehistory because a lot
of the passed-on knowledge would have been verbal or pre-verbal. At some
point someone discovered a way to make fire, but even that discovery might
have been lost a few times before it became widespread knowledge—there
may have been any number of our ancestors who were as gifted as Einstein
but whose knowledge died with them and their tribe. Thor Heyerdahl, the
Norwegian ethnographer, carried out a series of voyages in primitive boats to
show that prehistoric intercontinental travel was at least feasible, and argued
reasonably that a society’s lack of sophistication can’t be assumed just because
it didn’t leave a written record. Supporting this, it’s highly likely that advances
were made independently in several places. For example, the transition from
hunter-gatherer to agrarian societies (i.e. the discovery of cultivation and all
the knowledge and technology that needs to be in place to make it work)
occurred in a variety of world locations (Mesopotamia, the Indus valley,
China, Central America) within quite a short period of time and with no
likelihood of any one society learning from another. Simultaneous discovery
has also been quite frequent throughout the history of science—Hooke and
Newton for gravity, Darwin and Wallace for evolution by natural selection,
Scheele and Priestley for oxygen, among others. There’s definitely something
about the time being right for strands of knowledge to come together, so that
it’s almost accidental who happens to make the advance, although it may also
depend on who’s devious and self-promoting enough to ensure their legacy
(more about Isaac Newton later in Chapter 16). What we have to assume with
prehistory is that there were enough researchers around in the world to make
the same discoveries when the time was right: for example, at the end of the
last ice age 10,000 years ago when there was time to settle down. Of course, it
doesn’t always follow that an advance will be universal—the Inca civilization
famously reached the sixteenth century with sophisticated civil and hydraulic
engineering, but with no carts or other wheeled vehicle. They appeared to
know about the wheel as a concept; they just didn’t feel the need to take it
any further.
Mathematics has as good a claim as any for being the oldest academic
discipline—or at least the oldest one to be written down. Once you start
settling in one place to grow crops, you also tend to start trading with neigh-
bours. This requires a number system and the ability to add and multiply;
lengths and weights start to be used in manufacture; anticipating seasonal
events like temperature and rainfall changes requires calendars and astron-
omy; any significant building or engineering work requires geometry. As
your civilization develops you need to communicate larger numbers, so have
to decide on a ‘base’ system for this. The decimal (base 10) system we use
today is a logical extension of finger counting and seems to have been present
in early Egyptian and Indus Valley civilizations from at least around 3000
bce, but there were alternatives, and it wasn’t inevitable that we followed that
route. A sexagesimal (base 60) system divides up in more convenient ways
than a decimal system (e.g. into thirds and quarters) and was favoured in
Mesopotamia and Babylon; this persists today in the way we measure time
(60 seconds in a minute) and angles (360 degrees in a circle). The system we
have today of repeating the same symbols for multiples of our numbers (e.g.
making the number ten out of a 1 and a 0 rather than its own new symbol)
also has its origins in the Sumerian civilization in Mesopotamia. Numbers in
Egypt at the time were more like what would become Roman numerals, but
you can see that doing a long multiplication is a lot easier when it’s written out
as 62 × 19 than as LXII × XIX, so it’s not surprising that the Sumerians were
more advanced in mathematics than the Egyptians—and that the Romans
had little use for it when their empire came about. In turn, this illustrates the
way in which research has been closely tied to the civilizations or societies
that foster it. Early advances in mathematics seem to bounce around between
Greece, Babylon, India, and China—presumably spread and exchanged along
old trade routes—and certainly, a great deal of progress had been made by
the time it started becoming an academic discipline with a surviving written
record. For example, the theorem attributed to Pythagoras (about the square on
the hypotenuse) was probably well-known in Babylon long before the Greek
civilizations got going and a thousand years before Pythagoras was born, even
if the proof of the theorem did come later.
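Returning briefly to the point about notation: the 62 × 19 example above can be worked in a single line, because in a positional system each digit carries a place value that can be multiplied and added separately (nothing more than ordinary long multiplication is being assumed here):

\[
62 \times 19 \;=\; (60 + 2) \times (10 + 9) \;=\; 600 + 540 + 20 + 18 \;=\; 1178.
\]

There is no comparably mechanical way of decomposing LXII × XIX, which helps explain why everyday Roman arithmetic leaned on counting boards and the abacus rather than on written working.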
2 Although this itself was quite a recent discovery, as the Hittites as a people were only mentioned
cursorily in surviving texts such as the Old Testament or Egyptian accounts, and they weren’t known
to Greek historians. In the late nineteenth century, a hypothesis was proposed about an early empire in
central Turkey which was subsequently supported by archaeological investigation (see Chapter 9).
Many homes
The rest of the early history of mathematics follows the rise and fall of
empires in its progress. Zeno and colleagues are succeeded fairly soon after-
wards by the Pythagoreans, following the rather obscure figure of Pythagoras
who gives his name to a number of ideas that probably weren’t his own.3
This school, in turn, has a strong influence on the Athenian philosophers
Plato and Aristotle (about whom more later) and then the first substan-
tial textbook, Elements, is written by Euclid at Alexandria in Egypt, around
300 bce. During the Roman Empire, despite little interest in mathematics
by the power base itself, relative peace and ease of travel allows the East–
West exchange of ideas to continue, fortuitously coinciding with a similarly
tranquil period in China under its Han dynasty. Alexandria continues as a
3 Mystical as well as mathematical—the migration of the soul after death and a rather nice theory that
planets and stars move in a way that produces inaudible music: the ‘harmony of the spheres’.
centre of learning, supporting this exchange. Nothing lasts, however, and the
Christianization of Europe results in the suppression of mathematics because
of its close associations with paganism. Old texts are kept in Constantino-
ple at the heart of the Byzantine Empire, but not particularly developed.
In the meantime, the centre of activity moves to India in the hands of the
mathematician-astronomers Aryabhata (around 500 ce) and Brahmagupta
(around 625 ce), although possibly borrowing a lot from China. Around
this time the decimal position system becomes standard, as does the use of
the number zero. Later, the field is taken up in the Islamic Abbasid caliphate
and the ‘Baghdad House of Wisdom’. The mathematician Muhammed ibn
Musa al-Khwarizmi (around 825 ce) ends up translated and his latinized
name gives us ‘algorithm’ (Figure 2.1); similarly, the translated title of his
Hisab al-jabr wal-muqabala (science of equations) gives us the word ‘alge-
bra’, as well as bringing Indian numerals to the West. Other key figures
are Omar Khayyam (around 1100 ce), who works on cubic equations in
northern Persia as well as writing his Rubaiyat and other poetry, Ibn Al-
Haytham (around 1000 ce), who writes influentially on optics in Cairo, and
Al-Zarquali (around 1060 ce), who makes major advances in astronomy in
Cordoba. Further east, there’s a lot of mathematical activity during the Song
(960–1279 ce) and Yuan (1271–1368 ce) dynasties in China (Figure 2.2),
although not all information comes west. For example, the mathematician
Fig. 2.3 Hypatia was a philosopher and mathematician living in fourth- to fifth-century
Alexandria. Unfortunately, her reputation and wisdom meant that she was sought as
an advisor to the Roman prefect of Alexandria at the time. Her influence, and the fact
that she wasn’t a Christian, set her at odds with Cyril, the bishop of Alexandria. As
a consequence, she was murdered by a Christian mob in 415 CE. She was re-invented in
more recent centuries as a martyr for philosophy/science and as a proto-feminist; how-
ever, this nineteenth-century image of her ‘martyrdom’ clearly shows the dangers of
re-invention with its blatant (and borderline-absurd) racist overtones. Perhaps it’s bet-
ter simply to accept that Hypatia was a gifted intellectual who got caught up in turbulent
politics just as anyone else might have been, and who just happened to be female and
non-European.
Mort de la philosophe Hypatie. From Figuier, L. (1872). Vies des savants illustres, depuis l’antiquité
jusqu’au dix-neuvième siècle. Hachette, Paris. Public domain via Wikimedia Commons. https://
commons.wikimedia.org/wiki/File:Mort_de_la_philosophe_Hypatie.jpg
Why 1500?
So, we’ve reached the point now where histories of science, and research more
generally, tend to start—the Western European Renaissance and the ‘Enlight-
enment’ period that followed. But it’s worth stopping to consider exactly what
started around that time. As discussed, research itself has probably been going
on throughout humanity’s development, and it seems reasonable to assume
that all the major discoveries along the way—fire, the wheel, agriculture,
bronze, iron, glass, etc.—were the result of at least some sort of systematic
enquiry and study. In the emergence of mathematics as a discipline, we know
that there were also centres of thought, learning, and the sharing of informa-
tion, as well as some level of communication between these centres—albeit a
little fragile and dependent on short periods of peaceful trading in between
the wars and revolutions. So, what’s special about the Renaissance? Most
importantly, when textbooks imply that everything starts off in the fifteenth
and sixteenth centuries, are they just propagating a Eurocentric historical
viewpoint, ignoring all the foundations of knowledge ‘borrowed’ from earlier
Middle Eastern, Indian, and Chinese work? There are historians much better
qualified than I to judge. However, most people like to start a story some-
where and there is certainly something that changes significantly from the
Renaissance onward. For example, although a mathematician might be able
to read Euclid with a sense of fellow-feeling and appreciation, most other
academic disciplines would struggle to find very much of direct relevance
before 1500.
What fundamentally changes from around 1500 onwards is an understand-
ing of how the world works, and a new way of developing that understanding.
Early on, this is in astronomy and the painful process of accepting that the
Sun, rather than the Earth, is the centre of the solar system. At around the
same time, people are beginning to investigate the workings of the human
body, particularly the circulation of blood. Telescopes lead to microscopes
and the beginnings of biology. Further application of mathematics to astron-
omy leads to gravitational theory, laws of motion, and the emergence of
physics. Vacuum pumps enable the composition of the air to be investigated
as the beginnings of chemistry. Geological researchers start to question the
age of the Earth. Naturalists (prototype botanists and zoologists) begin the
process of identifying species and think about how they might have emerged
over time. And so on. These happen to be a list of scientific disciplines, but the
change can be seen in other fields of research as well—for example, in histo-
rians moving from simply recording events as they hear about them towards
considering them more analytically and taking the quality of evidence into account.
Fig. 3.1 James Frazer, the Scottish anthropologist, set himself the fairly formidable task
of compiling and making sense of all the traditions and superstitions he could lay his
hands on without travelling much. The resulting 12 volumes of material were influential
beyond his specialty (e.g. Sigmund Freud was, unsurprisingly, interested in his exam-
ples of totemism). His successors in anthropology weren’t so keen on his methods, but
the fact remains that these early beliefs of how the world might work as an internally
controlled system are the origins of what we now call Science.
Contemporary photograph of James Frazer, 1933. http://www.britannica.com/EBchecked/art/
10799/Sir-James-George-Frazer-1933#tab=active~checked%2Citems~checked, Public Domain,
https://commons.wikimedia.org/w/index.php?curid=4250992
So, what has The Golden Bough got to do with causation? Well, if research has
been going on since prehistory, and if working out causes and effects has been
a continued preoccupation (after all, how else do we learn to understand the
world?), then the same quest for understanding might be seen in customs and
rituals. Early on in his book, Frazer asks us to imagine a situation where we’re
in a primitive society with no knowledge of the world but a pressing need to
understand it. For example, we’ve begun growing and relying on crops and
need to know about the changing of the seasons. If the rains come, every-
thing’s fine; if they don’t, we starve. So, it seems sensible to try to work out
whether there’s anything we can do about it. Frazer’s proposition is that we
make a fundamental choice at this point between two routes of enquiry. One
possibility is that the cause of the rainfall is something external to the world—
for example, a god who can turn the taps on and off in the sky. The other
possibility is that the cause is internal, some sort of system within the phys-
ical world. The choice we make here determines our course of action. If we
think an external being controls the rainfall, then it’s obvious that we need to
get in a right relationship with him/her; hence, this takes us down the route
of religion. On the other hand, if we think that there’s some internal process
involved, then we ought to be working out what it is because we might be
able to influence it—after all, if we’ve already started diverting water courses,
creating paddy fields, or building fences to keep out predators, why not figure
out a way to control rainfall?
Frazer proposes that this second choice of an ‘internal system’ gives rise
to what he calls ‘magic’ and is reflected in the wide variety of customs and
superstitions compiled in his book. All you have to do is try to find the root
from which they’ve arisen. I’m not sure how original this idea was, and Frazer
rather over-eggs his arguments at times, but it does make sense as a general
principle. For example, quite a lot of traditional rain dances have involved
people beating the ground with sticks and there’s an obvious logic underly-
ing this. If you notice that the rains tend to fall after the ground has been
dry and parched for a while, and if the ground looks like it’s in pain when
it’s dry and parched, then it’s reasonable to assume that the ground’s pain
might influence the rains arriving. Therefore, if you inflict even more pain
on the ground by hitting it, perhaps the rains will arrive earlier. The expla-
nation might be wrong, but it’s a reasonable one if you don’t know anything
else about how the weather works. Like any superstition, the action (the rain
dance) is followed by the consequence (the rain) often enough to reinforce
it, and once the ritual is set up, it’s hard to break. It may be a noble idea to
try missing out the dancing for a year and see whether the rains still arrive,
but who’s going to risk it if you might starve as a consequence? And how
many actors and sporting celebrities freely admit to their own personal ritu-
als for similar reasons before a performance (and reluctance to experiment by
omitting them)? And how many of us haven’t done the same before an exam-
ination, or don’t have some superstition we’re aware is irrational but follow anyway?
The point is that this internally controlled system (the ‘clockwork’ of this
chapter’s title) is what research also focuses on, and its task from 1500
onwards has been to emerge from a world explained by magic and super-
stition, or at least by ‘any old idea’. For example, the way we understand the
planets and stars didn’t start from scratch in the European Renaissance; it
started from a very old belief that the Earth was at the centre of a concentric
series of orbiting bodies and that this explained what humans had been care-
fully observing for thousands of years in the night skies—which, of course,
were a lot more visible and personal in the days before light pollution.
In Dante’s Paradiso, written around 1320, the author ascends into heaven
through its nine spheres (Figure 3.2).1 Starting with the Moon, Mercury, and
Venus, Dante passes the Sun as the fourth sphere, then moves on to Mars,
Jupiter, and Saturn, before the known planets end and are followed by a sin-
gle sphere containing all the stars.2 The final ninth sphere beyond the stars is
called the Primum Mobile, the mechanism that keeps the heavens in motion,
and beyond this is the Empyrean where God lives. This widely accepted sys-
tem was drawn up by Claudius Ptolemy, a man of many talents living in
Alexandria in the early second century. Ptolemy claimed to be drawing on
over 800 years’ worth of observations and calculations by Babylonian and
Greek astronomers and produced tables to compute past and future positions
of the planets, as well as an updated chart of stars in the northern hemi-
sphere. Having been preserved in Arabic, his Almagest3 was translated into
Latin in 1175 and became highly sought after. Despite a veneer of religion,
Europeans were very fond of astrology—the idea that stars and planetary
alignments had influences on world events. Therefore, anything that pre-
dicted the positions of heavenly bodies was destined to be popular and several
famous pioneers of modern astronomy had to earn their living by writing
horoscopes. Anyway, Ptolemy’s system worked, more or less; the problem
1 Conveniently matching the nine circles of Hell going down to the centre of the Earth, through which
he’d travelled only a week previously in his Inferno—it was an eventful month for him.
2 And the fixed sphere of stars is fair enough—the stars act most of the time like smaller and larger
points on a single sphere and it was centuries before interstellar distances became apparent.
3 Almagest means ‘the greatest’—one way of describing a best seller.
Fig. 3.2 The mediaeval belief of an Earth-centred planetary/solar system appears with
all its splendour in Paradiso, the third part of Dante’s Divine Comedy completed in 1320.
Here, Dante and his guide Beatrice climb up past the seven planetary/solar spheres and
the fixed stars to reach God’s home, the Empyrean, on the other side. When Copernicus
and Galileo started chipping away at the Earth-centred system, they came into con-
flict with the Church (both Lutheran and Catholic), which was perversely but doggedly
following a pagan model drawn up by Claudius Ptolemy in second-century Egypt.
However, you can understand why Dante might have been disappointed.
Gustave Doré. (1892). Canto XXXI in H. Francis (ed), The Divine Comedy by Dante, Illustrated, Com-
plete. Cassell & Company, London/Paris/Melbourne. Public domain via Wikimedia Commons. https://
upload.wikimedia.org/wikipedia/commons/d/d2/Paradiso_Canto_31.jpg
Making a start
Copernicus is often cited as the first modern researcher, although it’s fair
to say that he doesn’t quite fit the mould. Yes, he took a rather problem-
atic popular view of ‘how the world works’ and proposed a better solution
as an alternative, but he didn’t try to test this or make any observations of his
own. Also, the idea of the Sun as the centre of the solar system wasn’t new⁴
—it had been brewing for a while in the period running up to the Renais-
sance, so would probably have emerged anyway around that time. Copernicus
also didn’t put much effort into communicating his ideas; he had proba-
bly formulated most of them over 30 years before publishing his snappily
titled De Revolutionibus Orbium Coelestium (‘on the revolution of the celes-
tial spheres’) in 1543, so it’s rather a matter of chance that his name is so
prominent; not that it made a difference to him—he died the same year.
One of the reasons for him taking so long to publish was that he was a
busy man with lots on his mind—he was the canon of a cathedral in Poland,
and for a while the commander of a castle, seeing off an invasion of Teutonic
knights from Prussia. There was no such thing as a research career in those
days and an income had to come from somewhere. Another reason for the
publication delay was that there were still problems with the idea that the
Earth was in motion rather than sitting stationary in the middle of everything
else—for example, if your hat was blown off when you leaned out of a carriage
window, why wouldn’t the Earth (moving much faster) result in hurricanes?
Why weren’t the oceans being whipped up into tidal waves? Why did objects
still fall straight down to the ground rather than backwards as you’d expect
from a moving platform?⁵
It’s also possible that Copernicus sat on his findings because he was worried
about controversy and the views of the Church, although this has generally
been overplayed because of the later hostility experienced by Galileo. At the
time Copernicus was beginning to communicate his theories privately, these
⁴ It had been proposed, for example, by the Greek astronomer Aristarchus around 250 BCE.
⁵ This was before anyone had actually tried dropping things off heights while in motion. It wasn’t an
easy experiment to carry out, although was eventually managed from the masts of boats moving on very
calm water.
were discussed with interest at the Vatican and a cardinal there wrote back,
urging him to publish (a letter he included in the finished work). It was actu-
ally the early Protestants, beginning with Martin Luther himself, who kicked
up the most fuss. The Catholic objections only came later on because of a
heretical sect, Hermetism, emerging around that time. This cult was keen
on some supposed mystical teachings from Ancient Egypt and was there-
fore understandably enthusiastic about the idea of a semi-deified Sun at the
centre of the universe. Giordano Bruno, a prominent member, was caught
by the Inquisition and burned at the stake in 1600, and Copernicus’ De Rev-
olutionibus had joined the Vatican’s list of banned books by 1616, lingering
there until someone saw sense in 1835.
⁶ Jung’s 1944 Psychology and Alchemy has similarities to Frazer’s 1890 Golden Bough in trying to identify
underlying themes, although it’s a considerably harder read.
⁷ And to be honest, a lot of the older theories are simply more fun, or at least psychologically appealing
(as Jung argued for alchemy, but this could be claimed for Dante’s world as well).
Not so long ago, in 2014, Yuval Noah Harari, an Israeli historian, wrote Sapi-
ens: A Brief History of Humankind, which I believe sold rather well. It was
certainly very visible in airport shops and seemed to be on a lot of people’s
bookshelves at the time (and probably still adorns the backgrounds of more
recent pandemic-era video calls). Anyway, along with a variety of transition
points along our species’ history, Harari considers two maps drawn either
side of 1500. The first, from 1459, is a typical mediaeval-era circle filled with
detail; if there’s a place where information is missing (e.g. southern Africa),
something is made up to go there. The second map is from 1525 and shows
the eastern coastline of parts of the newly discovered American continents
along with a whole lot of empty space—the acceptance of the unknown.
This does coincide quite conveniently with a shift in attitude that began to
favour the new research fields, because you can’t begin exploration with-
out the idea of a map to complete, and you can’t really make any progress
in advancing knowledge unless you accept your ignorance, or at least your
uncertainty.
There’s nothing new in this, of course; there’s rarely anything original about
any of the principles underlying research. Back in the early days of phi-
losophy, Socrates was held up as someone who was aware of, and honest
about, his lack of knowledge, and this awareness was considered as a first
step towards wisdom—‘I neither know nor think I know’. Rather fittingly,
we don’t know very much ourselves about Socrates directly. If he wrote any-
thing, it hasn’t survived, and all the details of his life and teaching come from
his successors. We do know that he was put on trial in Athens and sentenced
to death in 399 bce (we’re not sure what he was accused of), that he was
given exile as a choice but declined this (perhaps because he was already
an old man), deciding to drink the poison hemlock instead. A lot of what’s
attributed to Socrates comes from the writings of his pupil Plato, and it’s not
always clear what’s Socrates and what’s Plato. It doesn’t really matter for our
purpose.
Acceptance of ignorance is certainly fundamental to research in the sci-
ences, and I think it has to be just as important in the humanities. Why would
you be enquiring about anything if you felt you knew the answer? If you’re
just looking for evidence to bolster a particular viewpoint then that’s surely
propaganda, not research. The trouble is, self-awareness is a high ideal and
I suspect all researchers fall short of it at least some of the time, becoming
bound up with pre-conceived ideas and failing to stay objective. This seems
to be ingrained from quite an early career stage—in my own research field,
I’m often supervising students who are investigating a hypothesis (e.g. X and
Y are associated) by analysing a database, and there’s a near-universal dis-
appointment if they carry out the analysis and find no support for the new
idea. However, a robust negative finding ought to be as important as a pos-
itive one if it’s the question you’re interested in, rather than a pre-conceived
answer. It would perhaps help if academic journals felt the same way and were
less swayed by exciting new associations that will play well with news media,
although I don’t know whether my students are thinking that far ahead. I
think it’s more likely that they have just absorbed a deeper-seated culture
where positives are all that’s important.
Fig. 3.3 Dmitri Mendeleyev was born in Siberia, educated in St. Petersburg, and worked
for a couple of years on the Crimean Peninsula by the Black Sea to help with his tubercu-
losis before returning north again to develop St. Petersburg University as an academic
centre for chemistry. The genius of his c1870 periodic table of elements lies in the blank
spaces—where there ought to be elements that haven’t yet been found (sooner or later
the gaps were filled in). At the heart of all research there has to be an idea of something
unknown (the blank space) that’s there to be discovered. If you spend your time trying
to prove that you’re right and shore up your theories, then you’ve rather lost your way.
Dmitrij Mendeleyev’s Periodic Table (1869). Original uploader: Den fjättrade ankan at Swedish
Wikipedia, Public domain, via Wikimedia Commons.
As we’ve seen, the acceptance of blank spaces arrived around 1500, and the impetus for geographic exploration to fill in the spaces
must have spilled over into other areas of enquiry. In a slightly similar way,
a stroke of genius in the 1869 periodic table of elements drawn up by Dmitri
Mendeleyev was that it left blank spaces (Figure 3.3) to be filled in—he
included known elements up to that point but predicted new ones with given
properties. Sure enough, those gaps were filled over time—for example, gal-
lium, scandium, and germanium were added within the subsequent 20 years.
Over the centuries, a number of theories have been proposed that have
lain dormant for a long time, waiting for the technology to emerge to fill
in the gaps. Neutrino particles were predicted as an explanation of atomic
beta decay in the 1930s but had to wait until the 1950s for their detection.
The continental drift hypothesis had to wait until the military mapping of
seabeds after the Second World War. Fermat’s last theorem famously took
over 350 years to prove. Possibly the best way to start any research career is
to find a field and try to identify the empty spaces. If it already seems full and
crowded, is it going to be the best use of your time? Might it be better to look
for a quieter, less explored area?
⁸ And it’s probably no coincidence that John Milton published Paradise Lost, his epic about Adam and
Eve, in 1667 as the Renaissance moved towards the Enlightenment. In Book 8 of Milton’s poem, Adam is
specifically told off by the Angel Raphael for asking about celestial motions. However, both the angel in
the book and Milton, the author, are swimming against the tide.
4
Careful Observation
Induction
Francis Bacon was certainly a product of those times (Figure 4.1). Born in
London in 1561, the son of an aristocrat and the nephew of William Cecil,
Fig. 4.1 Francis Bacon made a living as a politician, but his position enabled him to
write widely on a variety of topics, including emerging scientific theory. His political
career was always precarious, as I suppose was only to be expected in the late six-
teenth/early seventeenth century. Also, he was always incurring debts, which didn’t
help matters. His key works on empirical science were published at the height of his
influence, just before his political downfall in 1621, charged with 23 separate counts of
corruption. All things considered, he was lucky to be allowed a quiet retirement.
Francis Bacon by Paul van Somer I (1617). pl.pinterest.com, Public Domain, https://commons.
wikimedia.org/w/index.php?curid=19958108
it was caused by his being out in the snow while he was investigating whether
it could be used to preserve meat—so perhaps he really was a researcher at
heart.
Francis Bacon is particularly associated with the theory of ‘induction’ or
‘inductive reasoning’. This proposes that we derive knowledge from careful
observations, coming up with theories supported by these findings. Induction
therefore results in a conclusion that’s a probability, as opposed to ‘deduc-
tion’ where a conclusion is more definite and derived from logical argument
(think Sherlock Holmes or Hercule Poirot). As a rather broad generaliza-
tion, deduction has its origins in mathematics and lends itself to knowledge
that can be generated from first principles, put together by ‘if this is true
then this must follow’ type arguments. On the other hand, induction belongs
to the philosophy of empiricism, which is about deriving knowledge from
experience—whether observation or experiment. In Bacon’s time, what we
now call the ‘scientific method’ had not yet evolved, but the principles he
put together can be said to have begun that process—or at least he assembled
and systematized ideas that were already knocking around sixteenth-century
Europe, and that were likely to come together anyway.
A renaissance in observation
In a similar way, all the accreted false assumptions about the natural world
needed to be replaced with demonstrable facts.
The emergence of zoology and botany is a good example of this. If you take
the twelfth-century bestiary translated in T. H. White’s Book of Beasts (1956),
quite a lot of it contains fairly uncontroversial, if slightly eccentric, observa-
tions about animals like ravens (‘this bird goes for the eye first when eating
corpses’), horses (‘they exult in battlefields’), and dogs (‘he has more percep-
tion than other animals and he alone recognises his own name’), alongside
some rather strange pictures. But then there’s a whole lot of fabrication-
presented-as-fact about known animals—for camels, ‘if they happen to be
sold to a stranger, they grow ill, disgusted at the price’; for mice, ‘the liver
of these creatures gets bigger at full moon’; for hedgehogs, ‘when a bunch of
grapes comes off the vine, it rolls itself upside down on top of the bunch, and
thus delivers it to its babies’ (if only that were true). And then the descrip-
tions of genuine creatures are intermingled with similarly authoritative facts
about unicorns, griffins, satyrs, etc.
Animals had been systematically described for almost as long as humanity
had been making written records, particularly by the philosopher Aristo-
tle, who worked in Athens until, in 343 bce, he went to Macedonia to
tutor Alexander the Great. Among his many achievements, Aristotle listed
and categorized around 500 creatures and was known for deriving his ideas
from observation (unlike his Athenian predecessor Plato, whose philoso-
phy was more based on first-principles argument). Some of Aristotle’s work
was beginning to filter into Western Europe via Arabic translations from
the eleventh century (although probably not his animal classifications, so a
twelfth-century bestiary would have drawn on other material). The problem
was that knowledge derived from observation was being jumbled up with
knowledge derived from any old story (mermaids and all the rest of it) in
the same way that the old maps were completing their outer, lesser-known
regions with the products of sailors’ fables in order to fill in the space. So, just
as the new maps had to be constructed by the long, slow process of survey-
ing landscapes, zoology and botany had to begin with a long, slow process
of describing and logging facts about the living world. Botany, as a scien-
tific discipline, presumably had a simpler task than zoology, as there would
have been fewer fanciful or mythological descriptions clogging up the liter-
ature and at least it could draw on a more unbroken monastic tradition of
cultivating and describing plants for medicinal use, as well as accumulated
knowledge from agriculture.
Although long and slow and methodical, the process of describing as many as
possible of the plants and animals in the world was at least feasible and didn’t
require substantial advances in technology. Along with travel opportunities,
all anyone needed was a personal income and to be the sort of person who’s
happy to give some attention to detail. It also helped to be good at illustration.
Maria Sibylla Merian (Figure 4.2), for example, was born in Germany in 1647
and fortuitously grew up in an artistic family—her father was an engraver,
and her subsequent stepfather was a still-life painter who encouraged her
training. Combining these skills with a lifelong interest in insects (cultivat-
ing silkworms as a teenager), she began publishing illustrated descriptions of
Fig. 4.2 Maria Sibylla Merian combined artistic skills and training with an interest in
zoology, producing important early descriptive research, not least of the life cycles of
insects. In the early days of science, before the committees and institutions took over,
opportunities were a little more equal for women if you had the means to support your-
self, which Merian did by selling her paintings. She carried on her work while bringing
up her family and was supported by her home city of Amsterdam to travel to Suriname
for further investigations in entomology. This portrait in her 50s unapologetically shows
her scientific output (the books) and travel (the globe).
Maria Sibylla Merian by Jacobus Houbraken / Georg Gsell (c. 1700). Public domain via Wikimedia
Commons. https://upload.wikimedia.org/wikipedia/commons/b/b3/Merian_Portrait.jpg
caterpillars, butterflies, and their life cycles from her early thirties. This work
was original and revolutionary at the time, as no one had given much thought
to insects and where they came from—they were generally thought to have
associations with the Devil and to be born spontaneously from mud. After the
end of an unhappy marriage, Merian learned Latin (the language of science at
the time), moved to Amsterdam, and was allowed to travel in her fifties to the
Dutch colony of Suriname in South America, where she worked for two years
and published hugely influential work on plant and animal life, setting the
standard for naturalist illustrators to come. A human rights and ecology cam-
paigner before her time, she also spoke out against the treatment of slaves and
the short-sightedness of relying on sugar plantation monoculture.
Art and science haven’t always had the easiest of relationships, but high-
quality illustration is obviously essential if you are to assemble and com-
municate accurate observations from life, in much the same way that a
good-looking graph makes a world of difference to the impact of a data-
based research paper. Research therefore benefited from the improvements
in draughtsmanship that accompanied the artistic side of the European
Renaissance, as the differences between twelfth- and seventeenth-century
illustrations clearly show. In natural sciences, perhaps the most profound
revolution came with the microscope—a technology that followed swiftly
on from the development of the telescope, and that was made possible by
advances in lens-making. Robert Hooke is now known for many areas of sci-
entific work (including optics, astronomy, and physics), but his collection of
illustrations from the microscope in Micrographia (1665) was an immedi-
ate best-seller and must have been quite something to read at the time (a
page-turner famous for keeping the diarist Samuel Pepys awake until the
early hours of the morning). As well as presenting a new world that had pre-
viously been invisible to its readers, the book, with the now-familiar images
of small insects like fleas and lice (Figure 4.3), was able to describe previously
unknown micro-anatomy (a fly’s compound eye, the structure of a feather)
and to establish that fossils might really be the remains of once-living things,
rather than rocks that just happened to look that way. Through this approach,
the simple observations started to shift towards new ideas—commonalities
and diversities between species and the fact that species have appeared and
disappeared over time—taking a while to sink in, perhaps, but ultimately
culminating in the natural selection theories of Darwin and Wallace around
200 years later.
Fig. 4.3 One of the famous illustrations from Robert Hooke’s 1665 work, Micrographia,
showing a louse holding onto a human hair. Both telescopes and microscopes opened
up new universes every bit as game-changing for Europeans as the discoveries from
world exploration. Robert Hooke was a man of many talents (e.g. he might have been
getting close to laws of gravity at the same time as Isaac Newton) and all he was really
doing here was drawing something seen under a lens. However, visualization can be a
powerful element of research and it’s easy to see why it created so much excitement in
the seventeenth century. I suspect the new Royal Society in London, set up by Charles
II only five years earlier, was relieved to have this book as its first major publication.
Robert Hooke. Schem. XXXV – Of a Louse. Diagram of a louse. In Hooke, R. (1665). Micrographia or
some physiological descriptions of minute bodies made by magnifying glasses with observations and
inquiries thereupon. National Library of Wales. Public domain via Wikimedia Commons. https://
upload.wikimedia.org/wikipedia/commons/1/10/Louse_diagram%2C_Micrographia%2C_Robert_
Hooke%2C_1667.jpg
and solitary enthusiasms who had enough of a personal income (and prefer-
ably some travel opportunities) to indulge them. There was work to be done
and academic territory to claim. Some of the titles for posterity are impres-
sive, such as ‘Grand Old Man of Lichenology’ held by Edvard August Vainio,
a Finnish researcher who published more than 100 works on the subject over
the late nineteenth and early twentieth centuries.
Not all contributions were adequately recognized. Maria Sibylla Merian
has a reasonable claim as a founder of entomology, as we’ve seen, although
William Kirby, a rector living a quiet life in Suffolk, is often credited with
this, even though his first major work, on bees, was published in 1802, over a
century after Merian’s two-volume series on caterpillars. And Merian never
received a mention from Linnaeus in his classifications, even though he used
her illustrations. He did at least mention Jane Colden, a botanist working in
North America in the mid-eighteenth century, who compiled descriptions of
over 400 plant species from the lower Hudson River valley. She is the only
female researcher cited by Linnaeus, and perhaps it helped that she used his
classification system.
Transformations in geology
sell, and he had to spend some time in a debtor’s prison; however, eventually
he gained some overdue recognition and a comfortable enough retirement.
Fig. 4.4 Mary Anning was an expert fossil collector, working on the Dorset coast in the
early nineteenth century. Recognized now, if slightly over-romanticized, for her impor-
tant work in palaeontology, she led a hard, impoverished life. As a woman, she was
barred from academic institutions, although she did have friends in the scientific com-
munity, such as the geologist Henry de la Beche, who painted a prehistoric scene based
on Anning’s fossils and sold lithographs to friends and colleagues for her benefit. He
became president of the Geological Society of London shortly after Anning died and
published a eulogy to her, typically reserved for society fellows.
Mary Anning. Credited to ‘Mr. Grey’ in Tickell, C. (1996). Mary Anning of Lyme Regis. Public
domain, via Wikimedia Commons https://upload.wikimedia.org/wikipedia/commons/e/e7/Mary_
Anning_painting.jpg
processes such as fossilization and laying down of rock strata must have taken
place over quite some time—certainly a lot longer than seventeenth-century
assumptions of a few thousand years since creation. Induction therefore goes
from the specific observation (a fossilized plesiosaur skeleton), draws logical
inferences from the observation (this creature is nothing like anything alive
today and must have lived a long time ago in order to become fossilized)
and translates the inferences into a general theory (species extinction/non-
permanence). Although fundamental, it’s also rather simple and intuitive—
most research is, at heart. The difficulties lie in the practicalities of obtaining
the measurement (e.g. digging up a good quality fossil) and then, more
importantly, coming to the correct conclusion.
Smallpox, part 1
A final example of observation and induction comes from the work of Edward
Jenner and the development of the smallpox vaccine. Jenner was born in
Berkeley, Gloucestershire in 1749, the son of a vicar. Although he became a
physician and is known as the ‘father of immunology’ for his work on small-
pox, he also happened to be the first person to describe the life cycle of the
cuckoo, establishing that it was the newly hatched chick, rather than the parent
cuckoo (the previous assumption), that got rid of the other eggs in the host
bird’s nest; he was
elected a fellow of the Royal Society in 1788 for this work, which might have
been enough for most people. If you think about it, the level of careful obser-
vation required to make this discovery, as well as the slight oddity of having
that sort of interest in the first place, did mark Jenner out as an ideal person
for scientific enquiry and it was a continued zoological interest that led him
on to study the various animal versions of smallpox. In particular, taking the
common observation that milkmaids had very low levels of smallpox, Jenner
inferred that this might be because they contracted cowpox, the bovine ver-
sion of the disease which could be passed to humans but in a much milder
form than smallpox. Therefore, before anyone knew about antibodies or the other
components of the immune system, Jenner was using inductive reasoning to propose acquired
immunity. He also understood the implication that giving people exposure
to cowpox might prevent them developing smallpox. The name he gave to
cowpox was Variolae vaccinae (literally smallpox of the cow) which gives us
the word ‘vaccine’, and his theory was published in 1798, just ten years after
his observations of cuckoo chicks. Almost 200 years later, in 1979, smallpox
was certified as eradicated as a result of Jenner’s theory of immunity and the
vaccines that followed.
5
Ideas under Pressure
Smallpox, part 2
The story of Edward Jenner in Chapter 4 didn’t stop with him deriving his
theory; he went on to test it. Having decided that the pus in cowpox blis-
ters might be providing the immunity to smallpox in milkmaids, he needed
proof for his theory. Jenner took some fluid from the blisters on the hand of a
milkmaid and inoculated the eight-year-old son of his gardener (Figure 5.1),
going on to challenge him with various sources of infection.1 This is clearly
not the sort of study design that would get past a research ethics commit-
tee today, but thankfully the boy was unaffected, and Jenner’s theory was
supported. He then went on to test this on 23 more people, including his
own 11-month-old son. Remarkably quickly after Jenner’s 1798 publication
(many discoveries take a lot longer to translate into action, as we’ve seen with
John Snow and cholera; Chapter 1), the vaccine roll-out began. For example,
a Spanish expedition in 1803–1806 inoculated millions of people in South
America and East Asia.2 On a literary note, the plot of Charles Dickens’ novel
Bleak House involves one of the principal characters developing smallpox.3
Published in instalments from 1852 to 1853, Bleak House coincided with the
1853 introduction of compulsory infant vaccination in England; whether that
was a coincidence or not, I don’t know, but it is difficult to imagine that it
wasn’t knocking around in conversations of the time.
If the vaccine had been ineffective, the world would have known soon
enough from subsequent cases (smallpox was a common and very visible
disease) and the theory would have drifted into obscurity. However, vaccina-
tion wasn’t appearing as a completely new concept. People knew that if you
survived certain diseases then you didn’t get them again, and the prevention
of smallpox prior to Jenner’s experiment involved ‘variolation’—trying to
1 It wasn’t actually the first smallpox vaccination, although it was the first to be described properly in a
publication.
2 A worthy gesture to South America, albeit a little belated after the decimation it had suffered from
smallpox, a disease that was brought over by European settlers in the first place.
3 Probably smallpox, at least—it isn’t named in the book but it’s similarly disfiguring. Like cholera,
smallpox is another common nineteenth-century disease that doesn’t make it into literature very often.
Fig. 5.1 A rather lurid nineteenth-century French depiction of Edward Jenner adminis-
tering the smallpox vaccine to eight-year-old James Phipps, the son of his gardener, on
14 May 1796. The stoical young lady with the bandage on the right is presumably Sarah
Nelmes, the cowpox-affected milkmaid from whose hand Jenner had taken the fluid for
the vaccine. Phipps lived to see his 65th birthday, attended Jenner’s funeral, and was
buried in the same churchyard. Smallpox vaccination had a rapid international impact
despite European political turbulence, hence the French interest here. Napoleon, for
example, vaccinated his entire army and awarded Jenner a medal, despite being at war
with Britain at the time. Jenner asked instead for the release of two British prisoners,
which was granted.
The inoculation of James Phipps by Edward Jenner. Lithograph by Gaston Mélingue (c. 1894). Public
domain, via Wikimedia Commons.
induce a mild (but still potentially fatal) infection by injecting pus from
affected people into a scratch in the skin. This had been carried out since
at least the tenth century in China, was common practice in the Ottoman
Empire and was communicated to Western Europe in 1717 by Lady Mary
Wortley Montagu, the enterprising wife of the British ambassador there who
had already inoculated her five-year-old son by that method. Jenner’s new
vaccine was accepted rapidly as a far preferable solution (since it
resulted in nothing more than a mild fever at worst) and variolation was
banned in Britain by 1840. The point is that smallpox vaccination wasn’t just
a theory Jenner derived from observation—it was a theory that had been put
to the test.
Although the ideas of Francis Bacon and his contemporaries about induc-
tive reasoning were influential and an important starting point for a lot of
modern research, it wasn’t long before other thinkers were picking holes in
them. Leading this charge was David Hume, the eighteenth-century Scottish
philosopher. Hume worked as a librarian at the University of Edinburgh, pub-
lishing extensively but getting repeatedly turned down for academic positions
because of his suspected atheism. It was only when he produced a best-selling
history of England that he finally achieved some financial security from the
royalties. Hume was unhappy with inductive reasoning as a way of under-
standing the world, particularly because induction (drawing a conclusion that
can only be said to be probable) could never be the same as deduction (draw-
ing a conclusion that can be argued as a logical and certain consequence).
Instead, he proposed that repeated observations of past behaviour (e.g. parac-
etamol has relieved my headaches previously) are assumed to predict future
events (e.g. paracetamol should help the headache I have now), but it’s best
to view these as beliefs, rather than proofs, and to remain sceptical. After a
whole lot of development of research in the intervening period, these ideas
eventually matured into ‘falsifiability’, a principle underlying a lot of research
nowadays and worth spending some time on.
Falsifiability
Fig. 5.2 Karl Popper—the science philosopher particularly associated with empirical
falsification. Getting his first publication was an urgent deadline for Popper, as he
needed this to obtain an academic position in a country that was safe for someone of
Jewish descent. In 1937 he managed to get out of Europe, taking up a philosophy lec-
turer post at the University of New Zealand in Christchurch, as far away as possible from
the Nazis and the Second World War. After the war, Popper worked in the UK at the Lon-
don School of Economics as a professor of logic and scientific method and ended up
living to the ripe old age of 92.
Photograph of Karl Popper (c. 1980). LSE Library/imagelibrary/5.
with repeated experiments at sea level until you take your kettle to the top of
a mountain and find that it boils at a lower temperature.
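To make the underlying asymmetry explicit (the notation below is my gloss, not the book’s): if a hypothesis H implies a prediction P, then observing not-P refutes H, but observing P does not prove H.

\[
(H \rightarrow P) \wedge \neg P \;\vdash\; \neg H \qquad \text{(valid: one contrary observation is enough to refute } H\text{)}
\]
\[
(H \rightarrow P) \wedge P \;\nvdash\; H \qquad \text{(not valid as proof: a successful prediction only corroborates } H\text{)}
\]

With H as ‘water always boils at 100 °C’ and P as ‘this kettle boils at 100 °C’, the mountain-top kettle supplies the not-P.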
In Chapter 2, we considered the sort of everyday ‘research’ we do as part
of growing up and learning about the world. As discussed there, the idea of
the repeated experiment is a powerful and intuitive one. Popper’s theories
have been extremely influential in the way that today’s research is conducted,
in scientific disciplines at least, and in the standards expected of researchers.
However, like a lot of research principles, they are not very different to natu-
ral human behaviour. We spend most of our early lives repeating experiments
of one sort or another to understand the world around us, and the truths we
obtain from others via textbooks and other media are tips of the iceberg com-
pared to those we have (often unconsciously) found out for ourselves. If one
day you were to clap your hands and make no sound, or if you were to let go
of a stone and see it fall upwards, the shock and disturbance might almost
Science or magic?
⁴ The beginning of the Enlightenment era is a bit vague. Some date it from 1637 when the philosopher
Descartes wrote ‘I think, therefore I am’; some date it from 1687 when Newton published his Principia, or
from the death of the French king Louis XIV in 1715.
earlier learning. Perhaps this is why the old mystical beliefs took quite a while
to disappear. For example, among his more scientific work, Isaac Newton was
an enthusiast for alchemy and for predicting the date of the world’s end from
a careful reading of the Bible.
What does start to distinguish modern from ancient research is the princi-
ple of putting ideas under pressure. If you stop dancing, the rains still arrive.
If you remove the cockerel, the sun still rises. If you take your kettle up the
mountain you have to revise your theory about the boiling point of water
and bring atmospheric pressure into the equation. Jenner took his theory of
acquired immunity from cowpox and developed the first vaccine. John Snow
tested his emerging suspicion that cholera might be a water-borne disease by
mapping cases of the outbreak in Soho.⁵
Not all research is about cause and effect, but quite a lot is, particularly in
the sciences, and nowadays the ‘hypothesis’ is a central component, aris-
ing directly from Popper’s falsification principles. If you write an application
for research funding, the chances are that your study’s hypothesis will be
requested as one of the first pieces of information. Similarly, higher impact
journals will expect this prominence early in a research paper, usually shortly
after an introductory summary of the rationale for the study. The hypothe-
sis is a distillation of the broader theory you are bringing to the project, and
a prediction arising from the hypothesis is what your study will be testing,
putting the idea under pressure. Figure 5.3 summarizes this experimental
‘cycle’ (which is really quite straightforward). Having formulated a hypoth-
esis, a prediction is made from it (if X is true, then we should observe Y).
An experiment is then carried out to test that prediction. If the findings are
supportive then the hypothesis holds, but another prediction and experiment
might then be planned to test it further—back around the cycle, keeping the
idea under pressure. If the findings are not supportive then the hypothesis
needs to be refined, further predictions made, further experiments carried
out to test these. And so on . . .
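Purely as an illustrative aside (mine, not the book’s), the cycle can be sketched as a few lines of Python; the function names here (make_prediction, run_experiment, refine) are hypothetical placeholders standing in for whatever your field’s equivalents happen to be:

```python
# A minimal sketch of the hypothesis cycle in Figure 5.3.
# Every name here is an illustrative placeholder, not something from the book.

def keep_under_pressure(hypothesis, make_prediction, run_experiment, refine, rounds=5):
    """Cycle a hypothesis through prediction and experiment, re-testing or refining it."""
    for round_number in range(1, rounds + 1):
        prediction = make_prediction(hypothesis)   # if X is true, we should observe Y
        supported = run_experiment(prediction)     # put the idea under pressure
        if supported:
            # The hypothesis holds for now; go around again with a further prediction.
            print(f"Round {round_number}: prediction held - re-testing")
        else:
            # The findings don't support it; refine the hypothesis before testing again.
            print(f"Round {round_number}: prediction failed - refining")
            hypothesis = refine(hypothesis, prediction)
    return hypothesis
```

Note that the loop never returns ‘proved’; the best a hypothesis can do is survive another round, which is precisely the point of keeping it under pressure.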
If you’ve decided on a field of research, you’ll tend to find that the textbooks
on your subject focus on the design of the experiments and the conclusions
you might draw from your findings. They may well acknowledge the impor-
tance of the central hypothesis, but there doesn’t tend to be much written
⁵ And chaining up the offending water pump as an experimental intervention, although as mentioned
in Chapter 1, that part of the story is unfortunately a little too good to be true.
[Figure 5.3 diagram: Observation → Hypothesis → Prediction → Experiment, cycling back as the hypothesis is re-tested or refined]
Fig. 5.3 The hypothesis cycle. Note that hypotheses have to come from somewhere in
the first place—it’s all very well for falsification enthusiasts to criticize induction, but
both principles are required. It has been said that the ‘health’ of a research field can
be judged by the extent to which new hypotheses are being developed (induction) and
tested (falsification). If you find yourself in a field that’s not really going anywhere, you
might want to have a think about this—is it a lack of new ideas (in which case, per-
haps you should work on conducting better observations), or is it because new ideas are
not being adequately put to the test (in which case, perhaps work on designing better
experiments)?
Author’s own work.
about the hypothesis itself, and this is because there isn’t very much more to
say. It’s possibly the most important element of many fields of research, but
it’s hard to teach. In the end, we have the scientific method that has emerged
over the last 500 years—simply the way things are done and the way that many
fields of research have emerged and justified themselves. Perhaps someone
will come along one day with a different framework to Popper’s and perhaps
that will change the way research happens. I’m not so sure, because Pop-
per’s falsification theory isn’t actually a huge leap forward from the earliest
principles of knowledge generation. ‘Socratic dialogue’ in the old Athenian
marketplace (attributed to Socrates, though it could just as well be Plato’s) involved
one person proposing an idea and the others around him trying to knock it
down with argument, putting the idea under pressure in the same way that
scientists are putting their own hypotheses under pressure around 2,500 years
later.
So, once again, nothing much is new. Although accounts of the scientific
method today tend to reference Popper’s work, his falsification principle was
simply describing what had emerged anyway, just as Bacon was describing
a way of thinking that was already being adopted in the sixteenth century.
The physician William Gilbert, for example, writing on magnetism in 1600, had put it this way:
In the discovery of secret things, and in the investigation of hidden causes, stronger
reasons are obtained from sure experiments and demonstrated arguments than
from probable conjectures and the opinions of philosophical speculators.
Gilbert’s suggestion sounds very close to putting ideas under pressure and
not relying on induction alone. Bacon published most of his theories around
20 years later and was not very complimentary about Gilbert—he was scep-
tical about magnetism and didn’t like Gilbert’s support for a Sun-centred
planetary system. Or it’s possible that they’d just fallen out at court at some
point; Bacon was rather prone to making enemies.
Constructing a hypothesis
There are at least some features of the hypothesis that are important and can
be taught. For example, it helps if a hypothesis is expressed in as clear and
simple a way as possible, because you want to make sure that your readers
actually understand what it is you’re testing—because if a finding can’t be
communicated and picked up by someone else, is it really worth anything?
And ideally, other researchers should be able to take that idea and test it with
their own experiments—because who’s going to believe something that only
comes from a single group? Also, it’s best practice if the hypothesis is for-
mulated and recorded before the research project is carried out—sometimes
called an a priori hypothesis. The alternative is a post hoc hypothesis that
is formulated after the findings have emerged—a possibly valid but weaker
situation. This simply reflects the two approaches to research that we’ve dis-
cussed. Inductive reasoning is when a study is carried out and a theory is
derived from the observations made, which we have seen is fine in itself, but
potentially flawed because the process of deriving that theory doesn’t make
it true. However, taking a theory and testing it puts the study in the realm of
experimental falsification and makes the results a lot more credible. If you’re
looking at other people’s research findings, do consider this one carefully.
Sometimes the wording isn’t clear on a research paper, and you have to do
some reading between the lines to work out whether they really were testing a
preconceived idea or simply carrying out a project and seeing what happened.
There’s a difference.
When testing a hypothesis, you do sometimes have to put some thought
into what it is you’re investigating; this is because it’s often hard to nar-
row down an experiment so that it puts only one theory under pressure.
For example, around 150 years after Isaac Newton proposed fundamental
laws of motion and gravitation in his 1687 Philosophiae Naturalis Principia
Mathematica (usually abbreviated to Principia), astronomers were observing
the orbit of the planet Uranus (discovered much later in 1781) and found
that it wasn’t fitting in with what was predicted from the positions of all the
other planets and the carefully calculated Newtonian gravitational forces. If
you had observed this, you might view it as a test of Newton’s laws and a rea-
son to go back and refine them. However, the alternative possibility is that
you don’t yet have all the necessary observations for that calculation. Look-
ing again at the mathematics, another undiscovered planet was proposed,
with its gravitational effects providing an explanation for the observations.
Sure enough, Neptune was discovered in 1846. The fact that a previously
unknown and invisible object could be predicted mathematically in this way
was clearly a major support for Newtonian theory, just as Edmund Hal-
ley (who played a major role in helping persuade Newton to publish his
Principia in the first place) was able to apply Newton’s laws to the comet he
observed in 1682 and predict that it would return in 1758. He was vindicated
(and the comet was named after him) when it duly did so, 16 years after
his death.
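For anyone who wants to see the formula that all of these calculations rest on (the book does not spell it out), Newton’s law of universal gravitation says that the attractive force between two bodies falls off with the square of the distance between them:

\[
F = \frac{G\, m_1 m_2}{r^2}
\]

where \(m_1\) and \(m_2\) are the two masses, \(r\) is the distance between their centres, and \(G\) is the gravitational constant. The anomalies in Uranus’s orbit were, in effect, small mismatches between the force calculated from the known bodies using this formula and the orbit actually observed; positing one extra mass (Neptune) removed the mismatch.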
For a figure so prominent, it’s ironic that Isaac Newton’s personal approach
to research was quite far away from what would later become the scientific
method. Newton says:
The best and safest method of philosophizing seems to be, first to inquire dili-
gently into the properties of things, and to establish those properties by expe-
riences and then to proceed more slowly to hypotheses for the explanation of
them. For hypotheses should be employed only in explaining the properties of
things, but not assumed in determining them; unless so far as they may furnish
experiments.
I’m not completely sure about the last few words, but most of this sounds
like Bacon’s inductive reasoning—to make observations (on ‘properties’) and
derive hypotheses that explain these. Newton wasn’t really in the business
of putting his ideas under pressure by experimentation; he was primarily
interested in deriving principles from mathematics that explained what was
already being observed. The hypothesis testing was left to his successors, and
perhaps it’s an advantage that the verification came from independent sources
and clear predictions that could be tested, as with Halley’s comet and the
discovery of Neptune.
Important further testing and development of Newton’s theories came par-
ticularly from France, although with a bit of a delay because of national
enmities and the reluctance of French academics to embrace theories (or any-
thing else) from Britain. The first French translation of Principia was carried
out by Émilie du Châtelet (Figure 5.4), although wasn’t published until 1756,
seven years after she died in childbirth. Her translation included a personal
commentary and additional principles of her own around conservation of
energy, which Newton had missed. Du Châtelet was much more of an exper-
imentalist, investigating energy and momentum by dropping balls onto soft
clay from different heights. She wrote her own widely read physics textbook,
as well as completing an analysis of the Bible and sustaining a long-standing
relationship with the philosopher Voltaire, among other achievements.
Pierre Simon Laplace was born in 1749, the year of Du Châtelet’s death, and
is sometimes referred to as ‘the French Newton’. He seems to typify the quiet,
reserved researcher, recognized to have a prodigious mathematical ability
that he applied particularly to astronomy, working out why the Solar System
wasn’t going to collapse in on itself,⁶ solving observed irregularities in the
movement of Jupiter and Saturn, developing equations for tidal forces, and
considering the likelihood of black holes.⁷ Interestingly, Laplace managed to
become celebrated in his country just for being self-evidently gifted and with-
out much of a need to press his cause, and this may be why he survived within
quite high-ranking positions across all the upheavals of the French Revolu-
tion, from monarchy to republic to empire and back to monarchy again—his
survival almost as much of an achievement as his mathematics. It also seems
to have helped that he was an excellent communicator, with his 1796 Expo-
sition du Système du Monde account of physics theory at that time being
famously readable as well as comprehensive. He certainly wasn’t a modest
man and was quite happy to be seen as the best mathematician in France;
however, he did look after younger researchers and fostered talent. Presum-
ably he also survived because he didn’t play aggressive political games and
got along with his peers (for example, he collaborated on chemistry exper-
iments with Antoine Lavoisier (Chapter 10), who didn’t fare so well in the
Revolution).
Even though some of Newton’s theories were corrected or developed over
the years by Du Châtelet, Laplace, and others, the general assumption seems
to have been that knowledge inexorably advanced and that Newtonian prin-
ciples had ‘sorted out’ this corner of his field. So, when Albert Einstein
proposed his theory of special relativity in 1905, it was viewed as revolution-
ary for highlighting and solving a situation where Newtonian models did not
work (to do with the speed of light being constant, no matter how fast you’re
travelling towards or away from the light source). For this reason, it tends to
⁶ Whereas Newton had assumed that God intervened every few hundred years to stop this—yes,
seriously, that’s what he thought.
⁷ Or at least bodies so massive that light could not escape. The term ‘black hole’ itself was only coined
much later, for a potential inference from Einstein’s general relativity theory.
⁸ Well, almost everything; I think his wilder ideas about alchemy and predicting the end of the world
might have been quietly set to one side.
the wave theory, given the hostile attempt to knock it down. It would be nice
to say that the panel had a ‘road to Damascus’ moment and were instantly
convinced, but history is a little more complicated. All the same, the theory
of light as a wave had taken hold and gained acceptance within a reasonably
short period of time.
Interestingly, another example of support for a theory from hostile exam-
ination comes in the opposite direction because, of course, light can be
described both as particles and as a wave. One of Einstein’s papers in his
1905 annus mirabilis (the same year as his paper on special relativity and two
other ground-breaking reports) described light as quanta of energy. This was
as revolutionary as wave theory had been and, understandably, it had its crit-
ics. One of these was the physicist Robert Millikan working at the University
of Chicago who carried out a series of experiments trying to prove Einstein
wrong. He kept at it for ten years but had to give up in the end and accept the
truth. However, at least he was awarded a Nobel prize for his efforts (this work
and calculating the charge of an electron) in 1923, only a year after Einstein’s
award for the theory he was trying to refute.⁹
In conclusion
Thus far, we’ve been considering research drawn from observations and
experience—deriving theories through inductive reasoning and then putting
them to the test by experiment. Although mostly described for the sciences,
the principles seem common enough ones to me and eminently applicable to
humanities. History research is obviously focused on working out cause and
effect, just as much as any science discipline, and a lot of research in art and
literature will be drawing on principles and practices derived from history.
For example, one of the things you don’t expect when embarking on War
and Peace is the amount of space (in that already-lengthy novel) devoted to
Tolstoy’s musings on the turbulent events of the Napoleonic era and on what
caused them—was it just the actions of single men like Napoleon himself, or
was it the product of populations and many smaller actions? The theorizing
is fine in itself, although it does rather interrupt the story.
Like the research fields we’ve been thinking about so far, ‘historiography’
(the methods used by historians) had its beginnings in the ancient world,
suffered a bit of a drop in quality during mediaeval times in Europe, and
then developed steadily increasing rigour throughout the last 500 years. For
history, the mediaeval problem tended to be the inclusion of any old story
without much checking—very similar to what the bestiaries were doing for
zoology. An example of this would be the uncritical incorporation of legends
about King Arthur and other speculative tales into Gesta Regum Anglorum
(Deeds of the English Kings), finished by William of Malmesbury in 1125,
around the same time as the bestiary discussed in Chapter 4. Again, as with
the bestiary and as with the old maps, there didn’t seem to be too much con-
cern back then whether a given fact or story had been proven to be true or
not; it was just thrown in anyway—good for readability perhaps, but not really
serving the development of knowledge.
In the eighteenth-century ‘Age of Reason’, it’s not surprising that the objec-
tivity and rigour developing in the sciences were also being adopted in history
writing—all academic disciplines would have been strongly influenced by the
philosophy of the time. David Hume wrote on both science and history (as
well as economics), pushing for precision and scepticism in thinking and
Fig. 6.1 An engraving of the historian Edward Gibbon based on his 1779 portrait by Sir
Joshua Reynolds. In the eighteenth century, the emerging scientific method was being
shaped by intellectuals like Hume and Voltaire, who were writing and thinking about
history just as much as science. When he compiled his magnum opus The History of
the Decline and Fall of the Roman Empire, Gibbon was following a trend towards more
careful consideration of primary sources of information, akin to the systematic obser-
vation and inductive reasoning being applied in the sciences. However, he also wasn’t
averse to throwing in an opinion or two, including some quite scathing comments on
the early church (which, unsurprisingly, got him into the eighteenth-century equivalent
of a Twitter storm).
Edward Gibbon, by John Hall, after Sir Joshua Reynolds (1780). Public domain, via Wikimedia
Commons.
1 So presumably his beliefs didn’t get in the way of book sales, just university appointments.
2 Voltaire, incidentally, had attended Isaac Newton’s funeral and had clearly made some rather personal
enquiries with the attending physicians, establishing to his satisfaction that Newton had never ‘had any
commerce with women’, a characteristic far removed from his own experience.
Fall of the Roman Empire in 1776, there was at least an expectation that a
historian would consider information critically and be cautious about con-
clusions drawn from secondary and potentially biased sources (i.e. inductive
reasoning). Also, academic communities were emerging where ideas could be
debated and at least put under some pressure, even if history, as a discipline,
doesn’t lend itself to experimentation.
Not all discoveries are drawn from observations and not all research uses
the types of reasoning we’ve been considering so far. Chapter 2 discussed
how mathematics emerges out of prehistory based around fundamental
truths and then develops as a field through deductive, rather than induc-
tive, reasoning—working out what logically follows as a consequence of a
given truth and steadily building up discoveries by proof, rather than exper-
iment. Sometimes mathematics research is carried out with a specific end
in mind, particularly when it’s required to solve problems in other research
fields (such as the inverse square law of gravitational force explaining the
movements of planets); other times, the research is carried out for its own
sake with no particular end in mind.3 And sometimes the research carried
out for its own sake turns out to have unforeseen applications further down
the line.⁴
We’ve discussed an acceleration since 1500 in applied research, along with
some of the context that might explain why this particularly happened in and
around Western Europe for a large part of that period (change in worldview,
discovery of old learning, acceptance of blank spaces on the maps, etc.). I
rather suspect that progress in mathematics would have continued its devel-
opment regardless of the European Renaissance; after all, it seemed to have
been progressing well enough up to that point outside Europe, and perhaps
there are always people who are gifted in that way and who will inevitably
push the pure field forward. Printing undoubtedly helped to connect the
research community, rather than relying on unstable Eurasian trade routes
for the exchange of ideas. Also, the growing need for applied mathematics in
disciplines such as engineering and economics presumably provided some
impetus for more theoretical developments, or at least supported university
was true—Gauss wasn’t a prolific writer and didn’t like to publish anything
until he was quite sure about it; however, it put understandable strain on the
relationship. Not that it mattered too much, as Gauss was highly influential
in many other ways—for example, developing the ‘method of least squares’,
fundamental to the regression models in statistics that many of us have to
grapple with, although used at the time to predict the movements of plan-
etoids. As well as these achievements, Gauss managed to persuade junior
colleague Bernhard Riemann to abandon theology in favour of mathemat-
ics. Riemann importantly further developed the non-Euclidean geometry of
objects on curved surfaces—a necessary knowledge base upon which Ein-
stein would later build his general theory of relativity. The sad end of the
University of Göttingen’s prominence came with David Hilbert who, beyond
his own work, was particularly known for presenting in 1900 a list of current
problems or challenges for mathematics at that time. He retired in 1930 but
lived to see the Nazi purges of universities and the emigration of most of his
colleagues and successors, who were replaced with government-appointed
staff. When Hilbert was later asked by Hitler’s Minister of Education whether
his Mathematical Institute had really suffered so much from ‘the departure of
the Jews’, he replied that it no longer existed. He died in 1943 and the epitaph
on his tombstone is as good a mission statement as any: Wir müssen wissen.
Wir werden wissen (We must know. We will know).⁵
So, it does seem that pure mathematics thrives with active mentoring, as
well as the personal gifts of those involved. For example, if Gauss hadn’t been
working at Göttingen, Riemann would presumably have gone on to become
a theologian (he remained a devout Christian throughout his life and viewed
his academic work as service to God). Across its wider history, mathemat-
ics does seem to have flowered in particular places at particular times. This
would be understandable for a research field that depends on the necessary
equipment and facilities being available, or the skills of laboratory staff, so
it’s interesting that the same need for the right environment is present in a
subject that depends so much on thinking and logic. In the eighteenth cen-
tury, the centres of excellence were particularly oriented around the courts
of monarchs who might be called ‘enlightened despots’—Catherine the Great
(St. Petersburg), Frederick the Great (Berlin), and Louis XV–XVI (Paris, where
Laplace worked). By the nineteenth century, the universities had taken over
as the main employers and therefore the sustaining soil for local research
expertise, exemplified by the succession at Göttingen. This continued into
⁵ Hilbert also seems to have got quite close to general relativity theory before Einstein got there first—a
little like Locke for Newton, or Wallace for Darwin.
the twentieth century, broadening from Europe to the US, Japan, and wider
international settings, as well as moving beyond universities to encompass
commercial enterprises. In theory, problem-solving in pure mathematics
might be the sort of research that could be done by anyone at home with the
right type of mind, time to spare, and access to the relevant work of others. It
therefore remains to be seen whether Internet-based connectivity might pro-
vide the necessary community for future progress and bypass the limitations
of academic career structures.
Fig. 6.2 The Russian mathematician Sofja Wassiljewna Kowalewskaja. While she did
benefit from parental support and a good private education at home, she had to work
hard and relentlessly to make her way in the male-dominated world of nineteenth-
century mathematics, impressing professors at a number of universities and cadging
what private tuition she could negotiate when she was not allowed in lectures. After
five years of quite an itinerant life between European centres of excellence, she pre-
sented three papers to the University of Göttingen and became the first woman to be
awarded a doctoral degree in her subject.
Photograph of Sofja Wassiljewna Kowalewskaja (unknown; 1888). Public domain, via Wikimedia
Commons.
be whether the Riemann hypothesis had been proved—that’s how much these
things matter if you’re in the field. There have been several similar problem
lists compiled since Hilbert’s, a prominent one being the Millennium Prize
Problems posed by the Clay Mathematics Institute, New Hampshire—just
seven challenges this time (still including the Riemann hypothesis), but with
an added reward of USD 1 million for any correct solution. So far, one of
them has been solved, by Grigori Perelman from St. Petersburg, who cred-
itably refused to take the prize money because he felt it should be shared
with another contributor, Richard S. Hamilton from Columbia University,
New York.
For a research field based on proof rather than opinion, you would expect
that pure mathematics would be relatively peaceful, apart from the usual
academic arguments over who thought of something first. However, there
do seem to be some quite strong words exchanged at times. One of David
Hilbert’s achievements was to promote and help establish some ideas devel-
oped originally around 1874–1884 by Georg Cantor, a mathematician who
worked at the University of Halle, not far from Göttingen. Cantor’s set the-
ory provided an underpinning link between several areas of mathematics and
has ended up accepted as a standard, but his ideas did cause quite a stir at
the time. Leopold Kronecker, Cantor’s counterpart in Berlin, was particu-
larly exercised, describing Cantor (of all things) as a ‘scientific charlatan’ and
a ‘corrupter of youth’.⁶ It does seem a little odd, to someone on the outside,
why sceptics didn’t just read the theory and work out whether the proofs
were robust; perhaps the building of knowledge through deduction is more
complicated than it sounds. Cantor also proposed the concept of ‘transfinite
numbers’ (larger than all finite numbers but not necessarily ‘absolutely’ infinite), which got
him into trouble with the philosopher Wittgenstein and some theologians (to
do with a divine monopoly on the infinite), even though Cantor was a devout
Lutheran and believed that his theory had been communicated from God. All
of this disagreement might have contributed to the bouts of severe depres-
sion Cantor suffered, although he had received considerable recognition and
awards by the time he died in 1918.
Mathematics and philosophy have had a fairly close relationship over the cen-
turies, perhaps understandably. One of the long-running debates they have
shared in their different ways has been around the idea we’re considering
here of deriving knowledge from ‘first principles’, sometimes called rational-
ism, and whether this is preferable to the ‘knowledge from observation’, or
empiricism, that has been the theme of previous chapters. Francis Bacon, for
example, was a Renaissance empiricist, although this way of thinking can be
traced back to Aristotle and even earlier (to 600 bce India if you want). There-
fore, as we’ve seen, he was a firm believer in drawing ideas from observations.
His near-contemporary, the philosopher René Descartes, on the other hand,
tends to be seen as laying the foundations of rationalism (although that can
also be traced way back—e.g. to Plato and earlier). Descartes was French born
but mainly worked in the new Dutch Republic. He is best known for his 1637
‘Cogito, ergo sum’ (I think, therefore I am) statement, and for his 1641 Medi-
tationes de Prima Philosophia, in which he starts by stripping away everything
that is not absolutely certain (presumably down to something like ‘I think,
therefore I am’), then tries to build on that foundation with certainties and
logical reasoning. The full title of the work translates as Meditations on first
philosophy, in which the existence of God and the immortality of the soul are
demonstrated, so you can see where he’s going with his argument. Aside from
⁶ As well as blocking all his attempts to secure an academic position in Berlin—a depressingly common
practice over the years and a good reason why researchers are best kept away from positions of power.
At the heart of its problem solving, the task of mathematics is to get from
one piece of knowledge to another. There might well be more than one possi-
ble path to take and conventionally the simplest is chosen. This is sometimes
referred to as the principle of parsimony or ‘Occam’s razor’.⁸ William of Ock-
ham (Figure 6.3) was a Franciscan friar living around 1300 who wrote on
the process of reasoning, which gave his name to the metaphorical razor,
although he never used that phrase himself. Again, there was nothing par-
ticularly new here, as Aristotle had already proposed that ‘Nature operates
in the shortest way possible’. The ‘simplest is best’ principle is worded more
precisely as ‘entities should not be multiplied beyond necessity’. So, if you can
Fig. 6.3 Possibly a portrait (or self-portrait) of William of Ockham—or at least it’s on a
fourteenth-century manuscript of one of his works and says ‘frater Occham iste’. Is it just
me, or does the face look like a Picasso? Brother Occham, an English Franciscan friar,
expressed his thoughts a little too independently at times, which got him repeatedly
into trouble, and he was sent to defend himself from unorthodoxy in Avignon, southern
France. A few years later he had to escape to Bavaria when the Franciscan belief that you
shouldn’t own property came into conflict with Pope John XXII (whose church owned
rather a lot). Brother Occham wrote on a range of topics including what would now be
called philosophy, mathematics, and politics, but it was his arguments for parsimony
(i.e. of opting for an explanation with the fewest possible factors) that gave his name to
‘Occam’s razor’.
Sketch labelled frater Occham iste (1341). In Ockham Summa Logicae, MS Gonville and Caius College,
Cambridge. Public domain, via Wikimedia Commons.
get away without extra material in your theory, do what you can to get rid of
it and strip the theory down to what’s essential. Going back to Copernicus
and the discussion in Chapter 3, Ptolemy’s pre-existing model of planetary
movements placed the Earth at the centre, which mostly worked but needed
extra tweaks to account for irregularities. Copernicus and his successors pro-
posed that a model with the Sun at the centre was a simpler one that didn’t
need the tweaks (or at least not so many of them).
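If you like to see the principle in numbers, here is a minimal sketch (in Python, with made-up noisy data rather than anything historical) of one common way parsimony can be scored in practice. An information criterion such as the AIC rewards a model for fitting the observations but charges a penalty for every extra parameter, so a simple straight line will usually beat a wiggly fifth-degree polynomial unless the added complexity genuinely earns its keep. The function name, the data, and the choice of AIC are all my own illustration, not anything from the historical examples above.

import numpy as np

rng = np.random.default_rng(0)                              # illustrative synthetic data only
x = np.linspace(0, 10, 40)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)      # the 'truth' is a straight line plus noise

def aic_for_polynomial_fit(degree):
    # Fit a polynomial of the given degree, then score it with the AIC for
    # least-squares fits: n*ln(RSS/n) + 2k, which improves with goodness of
    # fit but worsens with the number of parameters k.
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    n, k = x.size, degree + 1
    return n * np.log(rss / n) + 2 * k

print(aic_for_polynomial_fit(1), aic_for_polynomial_fit(5))
# The lower AIC (usually the straight line here) is the more parsimonious choice:
# the extra terms of the degree-5 fit shrink the residuals a little, but not
# enough to justify the added parameters.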
All of this sounds very neat and it’s a strong tradition in mathematics
where there’s generally some way of quantifying the size and complexity
of equations and assumptions in order to work out what’s simplest. How-
ever, mathematical tradition doesn’t actually claim that one approach is true
and that another is false—it just says that a more complicated pathway is
unnecessary because you have a simpler one. It depends on what you want
to do with the model. All objects in space are relative to each other, so
Casting your mind back to Chapter 3, you might remember that we con-
sidered the hypothetical situation James Frazer proposed of a person not
knowing anything about the world but trying to understand it and having
to make a choice between an externally or internally controlled system—
God or clockwork. We’ve followed the second choice so far and the way in
which science has sought to differentiate itself from other clockwork theories
(planetary spheres, alchemy, unicorns, and the rest of them). Perhaps unsur-
prisingly, over the centuries some people have felt that the remit should be
broader and that we should be testing clockwork against God. I’m not going
to say very much because tempers tend to run high in this particular argu-
ment and I could do without the aggravation; however, equally I don’t think
the question can be ignored entirely because God has featured in research for
as long as it has been written about and the bickering doesn’t show any sign
of going away.
Back in the early days of science, God was very much part of the clockwork.
As mentioned in Chapter 5, Isaac Newton hadn’t quite solved the problem
of why the Solar System didn’t collapse in on itself, so he simply proposed
that God intervened every now and then to stop this. Around 100 years later,
Pierre-Simon Laplace solved the problem and removed the requirement for
divine intervention as an explanation. The story goes (quite possibly myth-
ical) that Napoleon was talking with Laplace about his work and asked him
where God fitted in; Laplace replied that he had ‘no need for that hypothesis’.
In the days when the Christian Church wielded political power in Europe
(including in university appointments), science was often quite wary about
picking a fight. For example, in relation to the age of the Earth,⁹ the evidence
against the ‘6000 years since Creation’ hypothesis had stacked quite high
before a challenge was made. This was left to Georges-Louis Leclerc, Comte
de Buffon—a French aristocrat and socialite who was also a natural scien-
tist, when he found the time. He wrote his Histoire Naturelle in 44 volumes
from 1749 until his death in 1788 just before the Revolution (which probably
wouldn’t have treated him well; they guillotined his son). In amongst a wealth
of other material, he ran with the idea that the Earth had been thrown out of
the Sun, calculated how long it might take to cool down and estimated its age
to be at least 75,000 years—a little short of 4.5 billion, but a step in the right
direction. You get the sense that Buffon was the sort of person who was quite
capable of side-stepping a bit of controversy, although even he presented his
ideas as speculations. He was also an early proponent of evolution, a theory
which formed the next battleground with the Church. Nervousness about this
conflict might also explain the notoriously long time Charles Darwin sat on
⁹ Which, as previously mentioned, doesn’t seem to have much to do with religion, being more about
some odd seventeenth-century calculations involving Kepler and Newton, as well as a few bishops.
¹⁰ Although not quite ‘clockwork’ anymore—quantum mechanics rests on probabilities and uncertain-
ties, albeit subject to mathematical laws. Despite being a contributor, Einstein was never happy with this,
saying that ‘God does not play dice’. It’s also worth noting that general relativity predicts singularity, a
point of infinite density, as a result of gravitational collapse. A quantum theory of gravity might resolve
the paradox, but this hasn’t quite been worked out.
To be fair to Science, the God side in the debate (or at least the Western
Christian Church) has hardly covered itself in intellectual glory, from the days
of the sixteenth-century Vatican hierarchy hanging on grimly to an Earth-
centred planetary system that came from pagan Alexandria in the first place,
to the days of seventeenth-century Protestant bishops deciding that the Bible was written
as an instruction manual for dating Creation. And rationalism hasn’t fared
well either from attempts to prove the existence of God by first-principles
reasoning, from Descartes in the seventeenth century to C. S. Lewis in the
twentieth.11 The argument that certain fields of knowledge, like the origins
of the Universe, somehow ‘belong’ to theologians rather than scientists also
seems particularly vacuous.
The problem, in my (very humble and inexpert) opinion, is that both sides
are arguing over what can’t be argued and that there are many better things
to be doing with limited time and salary. Any religion that depends on (or is
tempted to think it might be enhanced by) scientific proof, seems a contradic-
tion in terms. And quite what business scientists have in attempting to falsify
what can’t be falsified, I’ve no idea. Worse still, there’s no sense of dispassion-
ate objectivity on either side of the argument—which I’m afraid comes down
worse on the Science side, because if there’s one thing you should take pains
to avoid in research, it’s testing a question when you’ve already decided the
answer. The arguments have been running since 1500 at least and there’s still
no end in sight, so if there’s anything appropriately sixteenth-century to say
about the debate, it’s ‘a plague on both your houses’.
11 To be fair, C. S. Lewis, the Christian essayist and Narnia author, repented of his unwise decision to
prove God by argument; I don’t know about Descartes.
7
The Ideal and the Reality
To begin with, inductive reasoning isn’t enough (Chapter 5). It’s nice to imag-
ine clarity of thought applied to meticulous observations resulting in elegant
mathematical models like Newton’s laws of motion. The problem is that this
relies on the clarity of thought happening to come up with the right answer
and with observations that are sufficiently generalizable to all circumstances.
To be fair, Newtonian mechanics received no particular challenges for over
200 years, which isn’t bad going, even if some of his other theories held back
progress (e.g. for wave theories of light). And to be fair, the laws still apply
in a lot of circumstances—quantum theory is only needed as a replacement
at the subatomic level. However, Einstein’s theory of special relativity was
still quite something for early twentieth-century scientists to get their head
around, because it demonstrated that Newton’s laws were not universal. It
was Einstein’s theory that indicated you couldn’t take anything for granted
indefinitely, that the dream of the Age of Reason had finally passed, and that
even the strongest-held hypotheses were fallible and needed to be kept con-
tinually under scrutiny. Hence, Popper’s principle of falsification in the 1930s
was a description or codification of what had already been discovered, rather
than proposing an entirely new direction.
The difficulty with falsification is that it’s all very well for the basic sciences—
physics, chemistry, etc.—where processes can be tightly controlled. However,
its application is difficult in fields where observations are necessarily messier
and subject to error, and where experiments are more difficult (or impossible)
to design. If you’re working on particle physics at the Large Hadron Collider
(Figure 7.1), the 27-km circumference tunnel deep under the French–Swiss
border, you can fire one particle into another and be fairly sure that what
you observe happening afterwards is a result of that collision and nothing
else. If your theory predicts one thing and you find something else, then your
hypothesis is falsified and you can move rapidly along to refining it—or at
least that’s the way it sounds; perhaps it’s more complicated.1 Because par-
ticle physics theories can be tested so robustly in this way, I’m sure that a
sizeable number of Science and Nature papers have accumulated as a result.
If, on the other hand, you’re a health researcher who happens to find that
1 For example, I assume there’s some probability theory involved in the falsification.
Fig. 7.1 A map of the Large Hadron Collider (LHC) built under the French–Swiss border
by the European Organization for Nuclear Research (CERN) over a ten-year period and
operational since 2010 (the smaller circle is the Super Proton Synchrotron, also built
by CERN). The LHC was built for particle physics research; ‘hadron’ particles include
protons and neutrons, as well as more recently discovered pions and kaons. The Higgs
boson particle was identified there in 2012, after a 40-year search, and the world didn't
end as some had predicted. Particle physics may be hard work, but at least you know
(or ought to know) what’s going on in a vacuum—that is, what’s cause and what’s effect.
The trouble is most research fields can’t isolate and control their observations and
experiments to that extent and so have had to find other solutions.
Location map of the Large Hadron Collider and Super Proton Synchrotron. OpenStreetMap contribu-
tors, CC BY-SA 2.0 <https://creativecommons.org/licenses/by-sa/2.0>, via Wikimedia Commons.
the random ‘noise’ in the system, is a far cry from the carefully controlled
environments in laboratories or particle colliders.
So, there’s a problem here. A fundamental issue for any hypothesis or wider
theory is that you can’t claim it’s true (Chapter 5); you can only say that
it’s consistent with observations. An inconsistent observation means that
you have to refine your hypothesis—for example, taking your kettle up a
mountain to show that water doesn’t always boil at the same temperature.
However, as we’ve just seen, outside basic science an inconsistent observa-
tion may not be enough to falsify a hypothesis. So, if we can’t prove that
something is true (as a principle) and we can’t prove that it’s false (because
of the messy world around us), how can we hope to advance knowledge?
I’m going to focus on health research in this chapter, because it’s had to face
up to this challenge for most/all of its history, so quite a lot of thought has
been put into solutions, or at least workarounds. As will hopefully be obvi-
ous, many of the principles are intuitive and apply more widely. We’ve come
across the issue already in Chapter 1 when we thought about John Snow’s
work. Quite clearly there was a prevailing current theory—that diseases were
spread by bad air—and Snow’s findings challenged this for the 1854 cholera
outbreak in Soho, but they weren’t enough for the hypothesis to be refined.
It took quite a lot longer for water to be considered as a carrier of disease and
for the necessary measures to be put in place to prevent this—which sub-
sequently saved millions of lives and might save millions more if clean water
was more equitably available around the world. Health research is mostly hav-
ing to deal with cause–effect relationships (what causes a disease and how can
it be prevented or treated), but most of these need to be investigated in the
natural world. In other words, investigations must consider people with all
their varying lifestyles and behaviours, where one risk factor is likely to clus-
ter with plenty of others, where treatments don’t always work (because they
just don’t work the same way in everyone, or because people don’t always take
them properly), and where disease outcomes themselves are many and varied,
influenced by all sorts of things besides medical care. So, it’s complicated . . .
Judging causality
Sir Austin Bradford Hill (Figure 7.2) was an epidemiologist and a successor
to John Snow, although he worked around a century later, setting up some of
Fig. 7.2 Sir Austin Bradford Hill was Professor of Medical Statistics at the London
School of Hygiene and Tropical Medicine in the 1950s and 1960s. He was prevented
from studying medicine because of lengthy treatment for TB as a young man, so took
a correspondence degree in economics instead, after which he developed a career as a
statistician, and then as an epidemiologist. Like many disciplines, health research has
to rely on studies carried out in ‘real world’ situations that exist outside carefully con-
trolled laboratory environments. Working out cause and effect is therefore complicated,
and Bradford Hill formulated many of the approaches that are now taken for granted.
Medicine’s loss was medicine’s gain.
Photograph of Sir Austin Bradford Hill, unknown date, unknown author. CC BY 4.0 <https://
creativecommons.org/licenses/by/4.0>, via Wikimedia Commons. https://upload.wikimedia.org/
wikipedia/commons/e/eb/Austin_Bradford_Hill.jpg.
the early studies that indicated the connection between tobacco smoking and
lung cancer. As we know from the old adverts, smoking was viewed in a very
different way back in the 1950s than it is now. Pollution from domestic and
industrial smoke was at high levels (visible, for example, in the regular Lon-
don smog) and cigarettes were actually thought to provide some protection
against this; indeed, so many people smoked that it was sometimes hard to
find non-smokers as a comparison group. Bradford Hill worked at the Lon-
don School of Hygiene and Tropical Medicine, which was, and remains, a key
training ground for epidemiologists in the UK, and he wrote influentially on
the emerging discipline. Popper strongly influenced his thinking, and Brad-
ford Hill was preoccupied with how we might provide evidence for causal
relationships in health, given all the challenges mentioned about imprecise
experiments in the natural world. In 1965, he came up with nine criteria that
he felt provided evidence for a causal relationship. These are mostly quite
intuitive, but they have been influential, and it’s worth considering them in
turn.
Strong associations
The first of Bradford Hill’s ‘causal criteria’ is strength. This is simply say-
ing that very strong associations are more likely to be causal. There is a type
of cancer, for example, called mesothelioma, which affects the lining of the
lungs. It’s thankfully quite rare and when it occurs, it’s nearly always in people
who have been exposed to asbestos—a fibrous material that used to be incor-
porated in building materials, particularly for fire prevention, until its risks
became recognized. Asbestos isn’t used any more, but it’s important to check
old buildings carefully before any walls are knocked down or interfered with
(in case the fibres are released into the air), so exposure still crops up from
time to time. The fact that mesothelioma is so strongly linked with asbestos
exposure makes the causal relationship very likely—it’s hard to think of any
alternative explanation for the observation (although you might want to take
things like cigarette smoking into account before drawing conclusions from
a study of this). There are quite a few other strong relationships, like asbestos
and mesothelioma, particularly with certain exposures in the environment
and/or from occupations. It’s fairly obvious, for example, that radiation expo-
sure causes thyroid cancer, simply from its recorded occurrence in people
working with radioactive material without sufficient protection, or in the
high levels seen in those who survived the nuclear bombing of Japan or the
Chernobyl nuclear disaster in the former Soviet Union. Quite a few of the old
industries carried specific health risks that were well-recognized without the
need for research—for example, the Mad Hatter character in Lewis Carroll’s
Alice in Wonderland is believed to be based on the symptoms commonly observed among hat-makers, poisoned by the mercury used in the manufacture of felt hats (mercury poisoning causes slurred speech, memory
loss, and tremor, among other effects). Again, all of these strong associations
between an exposure (the toxin) and an outcome (the disease) are almost
self-evidently causal.
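Strength is usually put into numbers as a ratio of risks. The little Python sketch below uses entirely hypothetical counts (not real mesothelioma or radiation figures, and a function name of my own invention) simply to show the arithmetic: a twenty-fold risk is very hard to explain away, whereas a 1.2-fold risk could quite plausibly be the product of bias or confounding alone.

def risk_ratio(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    # Risk (cumulative incidence) in each group, then the ratio of the two.
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# Hypothetical cohort: 40 cases per 1,000 exposed versus 2 per 1,000 unexposed.
print(risk_ratio(40, 1000, 2, 1000))   # 20.0: a 'strong' association in Bradford Hill's sense
print(risk_ratio(12, 1000, 10, 1000))  # 1.2: weak; alternative explanations are more plausible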
Consistent associations
The next of the criteria, consistency, is simply about whether different studies keep finding the same thing. The risk factors for diseases that we take for granted—smoking and
cancer, high cholesterol levels and heart disease, pollution and asthma, heavy
alcohol consumption and liver disease—have all been found repeatedly in
many different studies, and usually from fairly varied populations around the
world.
Specific associations
2 Or the leukaemia that caused at the age of 58 the death of her daughter, Irène Joliot-Curie (who also
worked extensively with radioactive materials).
Graded relationships
Some limitations
when reports began to emerge that it was hardly ever seen in older people
from these countries. A research group at the University of Ibadan, Nige-
ria, therefore investigated this further, led by the neurologists Oluwakayode
Osuntokun and Adesola Ogunniyi, and working together with colleagues at
the Indiana University School of Medicine. Through careful and painstaking
fieldwork, the two groups established that Alzheimer’s disease was present
in Nigeria, but at considerably lower levels in given age groups than was
observed for African-American communities in Indianapolis. Again, this
demonstrates the value of inconsistent findings in thinking about causation,
as well as the importance of research carried out in a properly international
context and not just in Western Europe or North America.
After the first five criteria, there are a few added by Bradford Hill that over-
lap a bit. The sixth ‘plausibility’ criterion requires that there is a reasonable
explanation for the observation, the seventh ‘coherence’ criterion requires
that the observation fits in with evidence from other fields (for example, with
laboratory science), and the ninth ‘analogy’ criterion requires broader sim-
ilarities between different associations. On the topic of smoking and lung
cancer again, there is clearly a plausible mechanism involved (components
of the cigarette smoke reaching the parts of the lung where cancers form)
and coherence with other research (biological evidence showing that com-
ponents of cigarette smoke have the effects on cell DNA that are recognized
to underlie cancer). There are also analogies to be drawn between the cancers
arising in the lung, throat, and oesophagus where direct contact with cigarette
smoke occurs. In the middle of these, the eighth (and perhaps most impor-
tant) criterion is experimental evidence, which will be discussed at greater
length in Chapter 10.
[Figure 7.3 comprises three panels plotted against time (400,000 to 0 years ago): temperature variation ΔT (°C), carbon dioxide (ppmv), and dust concentration (ppm).]
Fig. 7.3 Graphs of temperature, atmospheric carbon dioxide, and atmospheric dust
over the last 400,000 years, estimated from ice cores drilled and extracted at the Russian
Vostok Station in Antarctica. Like health research, climate change theory is not some-
thing that can be readily tested in a laboratory, so has to rely on this sort of information.
The coinciding peaks of temperature with carbon dioxide, but not with dust, concen-
trations fulfil the strength, consistency, gradient, and specificity criteria for a causal
association. So, they do look very suggestive, but they remain observations. Anthro-
pogenic climate change is not something that can be demonstrated in a controlled
laboratory environment, so inevitably has had to rely on findings that are support-
ive, but not 100% conclusive. The trouble is, as we know only too well, any residual
uncertainty will be exploited by those with vested interests to argue otherwise.
Data obtained from the Vostok ice column, created by Gnuplot 20 June 2010. Vostok-ice-core-
petit.png: NOAA; derivative work: Autopilot, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.
php?curid=579656.
In conclusion
In conclusion, we’re faced with the fact that experimental falsification is all
very well in theory, but it is harder to achieve in practice in many fields of
research. However, these causal criteria and other features of observed rela-
tionships do at least help to rank findings in importance. They are therefore
worth considering when designing studies, because you want your work to
be as impactful as possible. There’s probably not much you can do about the
strength of an association (it is what it is), but you can certainly consider
Although experimental falsification seems nice and clear the way Popper puts
it, it doesn’t sound likely that a single person’s finding has made the differ-
ence very often. Even the famous examples are more like a tipping point—the
change in viewpoint happens and one person’s work is credited with achiev-
ing this, but then when you look into it more closely, the change was probably
going to happen anyway and often there’s more than one person involved,
even if the fame is unevenly distributed. Copernicus is credited with kick-
starting modern science, but the understanding of planetary movements and
the model of the Sun at the centre was as much a product of the subsequent
careful observations made by Danish astronomer Tycho Brahe (quite possibly
with his sister Sophia playing a significant role) alongside the mathematics of
Kepler in Germany, and the understanding and development of the theory
by Galileo working in Venice. A lot of the work on gravity and astronomy
published by Newton in his 1687 Principia might well have been completed, or at least enabled, by his senior colleague Hooke; however, they had a particularly fractious relationship (see Chapter 16), so Newton didn't credit Hooke and, after Hooke's death, did what he could to obscure his legacy from history. Darwin delayed publication of his theories on evolution by natural selection for
around 20 years until he heard that Wallace was coming to the same conclu-
sions and was persuaded to rush them out. Einstein’s work built heavily on
the preceding achievements of James Clerk Maxwell, whose theories brought
together light, electricity, and magnetism into a single unifying model.
The principle of falsification may therefore be best thought of as a process,
rather than a single event. Yes, the idea is that a single inconsistent observa-
tion ought to be enough to falsify a hypothesis and require it to be refined;
however, historically there don’t seem to be many examples of single incon-
sistent observations that have been enough on their own to force the issue.
Even in clear examples of experimental falsification, such as Fresnel’s demon-
stration of light behaving as a wave (contrary to the Newtonian theory of
particles; Chapter 5), it still took a while for new ideas to take hold. When
it comes to it, no one likes to have to change their mind. Academics ought
to be better at it than everyone else, given the principles we’ve discussed of
objectivity and acceptance of ignorance; however, unfortunately, the forces
of conservatism are as strong in research communities as elsewhere, if not
stronger. So, we’re still stuck between a high ideal and a messy reality—once
again, how do we make this work? How do we advance knowledge if we can’t
prove something or disprove it either?
1 Which sounds good to me—never trust an academic who doesn’t change their mind.
changes in thinking have happened quicker than others (i.e. more like Pop-
per’s falsification principle), particularly when there were fewer researchers in
the world. In other cases, the ‘paradigm’ seems to have shifted already before
the research happens that might be labelled as revolutionary.
Kuhn’s theories are particularly applied in accounts of developments in
social sciences and in economic and political thinking. An example some-
times cited of a paradigm shift is the sizeable change in economics theory
influenced by the academic work of John Maynard Keynes (Figure 8.2) in
the 1930s. Keynes was an English economist whose reputation had earned
him a senior position in the Treasury during the First World War, but he fell
out with the government over the punitive conditions imposed on Germany
Fig. 8.2 The economist John Maynard Keynes (right) photographed in 1946 with Harry
Dexter White, a senior US Treasury department official who was a key player in direct-
ing American financial policy during and after the Second World War (albeit while also
suspected of acting as a Soviet informant). Keynes’ work underpinned what can be seen
as a ‘paradigm shift’ in economic thinking—possibly influential on US policy during and
after the Great Depression of the 1930s, more probably influential on a number of gov-
ernmental responses to the more recent 2007–2008 financial crisis. Keynes died of a
heart attack about six weeks after this photograph was taken.
Photograph of John Maynard Keynes greeting Harry Dexter White. International Monetary Fund
(1946). Public domain, via Wikimedia Commons. https://upload.wikimedia.org/wikipedia/commons/
0/04/WhiteandKeynes.jpg
after the war and supported himself as a journalist writing generally anti-
establishment economic tracts during the 1920s.2 The ‘Keynesian revolution’
that bears his name was a particular result of his General Theory of Employ-
ment, Interest and Money, published in 1936 at the height of the Great
Depression. The prevailing economic theory up to that time was broadly that
there was nothing governments or treasuries could or should do to alleviate
unemployment. Keynes argued otherwise and influenced a lot of subsequent
government interventions during recessions. The influence becomes clearer
as time goes by. It would be nice to say that it had an immediate impact
on President Roosevelt’s interventions in the 1930s US, as Keynes and Roo-
sevelt clearly respected each other; however, it seems to have taken longer
than that to filter through into orthodoxy. Either way, Keynes’ revolution is
a fairly clear example of a paradigm shift—revolutionary research3 resulting
in a change in thinking—although clearly views have swung around a fair
bit since then. Keynesianism fell substantially out of favour during the 1970s
and 1980s, when monetarism was on the ascendancy within the Reagan and
Thatcher regimes; however, there was some revival of interest after the 2007–
2008 financial crisis, and a lot of current pandemic-era policies look distinctly
Keynesian, as if we’re back in the 1930s all over again.
2 Winston Churchill was Chancellor at the time and very much in Keynes’ firing line.
3 ‘Research’ in this sort of economics work involves, I'm guessing, a certain amount of induction
(deriving theories from observations) and a certain amount of rationalism (building theories by logical
argument and deduction). We consider the idea of the ‘experiment’ in economic theory in Chapter 11.
more active extraction than simply searching journals. Because most of the
reviewing activity tends to focus on the results of medication trials,⁴ there is
also suspicion around selective publication by agencies with a vested inter-
est (primarily, but not exclusively, pharmaceutical companies) and therefore
stronger regulations to promote ‘open science’—that is, ensuring that if a
research study has been carried out, its findings can be found. For example,
it’s now expected for all medication trials to be pre-registered to stand any
chance of subsequent publication, which at least provides some chance of dis-
covering what’s out there, even if the reviewer may still have to write around
requesting unpublished results. The Cochrane collaboration and library were
set up as a charitable organization to provide a repository for volunteer sys-
tematic reviews of healthcare interventions and diagnostic tests in order to
make this combined evidence publicly available, rather than relying on the
vagaries of medical journals and their editorial decisions. However, the ini-
tiative has its critics, and there are concerns about whether this repository and the
wider reviewing culture might act to obscure or suppress findings that run
counter to prevailing opinion (which is the whole point of falsification in the
first place).
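As an aside for the numerically curious, the core arithmetic of pooling results in a review is relatively simple, even if the searching and judgement are not. In a fixed-effect meta-analysis, each study's estimate is weighted by the inverse of its variance, so larger and more precise studies count for more. The Python sketch below uses invented log risk ratios and standard errors purely for illustration; it is a bare-bones simplification, not a stand-in for the methods full systematic reviews actually employ (heterogeneity assessment, random-effects models, and so on).

import math

# Invented study results: (log risk ratio, standard error) for three hypothetical trials.
studies = [(-0.22, 0.10), (-0.15, 0.12), (-0.30, 0.09)]

weights = [1 / se ** 2 for _, se in studies]                      # inverse-variance weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))

print(round(math.exp(pooled), 2))                                 # pooled risk ratio
print(round(math.exp(pooled - 1.96 * se_pooled), 2),
      round(math.exp(pooled + 1.96 * se_pooled), 2))              # approximate 95% confidence interval

The design choice worth noticing is the weighting: a small, imprecise study barely moves the pooled estimate, which is exactly why selective publication of small favourable trials is such a concern.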
Perhaps one of the most important, but rather fraught, consensus processes
recently has been in the field of global warming and climate change. Like
health research, the disciplines involved (e.g. meteorology, geology) are deal-
ing with information from the natural world: information that is subject
to any amount of random error and that can be quite difficult to measure
directly. Also, there’s no recourse to experimental data unless the course of
world events over the last century or so can be viewed as one large experiment
to see what happens (Chapter 7). Moving the field forward, and identifying
the priorities for intervention, has required a consensus to be achieved—
not easy by any means when there are so many voices, as well as quite a
sizeable amount of misinformation linked to agencies with vested interests
in opposition to the process. Even attempting to include it as an example
here feels risky, but on the other hand it’s simply too important a topic to
ignore.
⁴ That is, answering the questions of whether a given medication works, how well it works, and
(sometimes) how frequent its side-effects are.
⁵ Which wasn’t wholly novel—similar ideas can be attributed to Shen Kuo, writing much earlier in
eleventh-century China. It seems a straightforward conclusion to make—that there have been demon-
strable changes in the landscape (e.g. something that was once a seabed that’s now miles away from any
sea) that must have taken a very long time to occur. Chinese thinking of course wasn’t constrained by
seventeenth-century Christian bishops who had decided that the world was 6000 years old, but on the
other hand, there isn’t a sense of these sort of abstract eleventh-century ideas in China being taken forward
as something worth developing further.
Fig. 8.3 Svante Arrhenius, a Swedish scientist and one of the founders of ‘physical
chemistry’, applying this to investigating the relationship between atmospheric car-
bon dioxide and global temperature. Arrhenius’ earlier doctoral work on electrolyte
conduction had not impressed his doctorate examiners at Uppsala University, who
awarded him a third-class degree. Undaunted, he extended this and ended up winning
the 1903 Nobel Prize in Chemistry—an encouragement, surely, for anyone encountering
a difficult PhD examination.
Photograph of Svante Arrhenius. Photogravure Meisenbach Riffarth & Co., Leipzig (1909). Public
domain, via Wikimedia Commons. https://upload.wikimedia.org/wikipedia/commons/6/6c/
Arrhenius2.jpg
being retained on Earth, but not on the Moon. However, at the time there was
very little human carbon dioxide production, so Arrhenius thought that any
appreciable warming was a long way off and might actually be a good thing.
A few years later, in 1899, University of Michigan geologist Thomas
Chrowder Chamberlin published An Attempt to Frame a Working Hypoth-
esis of the Cause of Glacial Periods on an Atmospheric Basis, which pulled
together evidence available that explicitly linked climate in the past with
atmospheric carbon dioxide. These ideas weren’t immediately adopted, as the
limited experiments of that time struggled to distinguish effects of carbon
dioxide from those of water vapour, and it was assumed that carbon dioxide
would be rapidly absorbed by oceans. In the early twentieth century, changes
in solar activity were thought to be a more promising explanation for climate
fluctuations, although this fell out of fashion after predictions failed. Car-
bon dioxide was taken more seriously from the 1950s after evidence showed
there was very little water vapour in the upper atmosphere—thus removing
it as a competing explanation. Improved understanding of ocean chemistry
also showed that absorption of carbon dioxide was much more limited than
previously assumed. By 1959, scientific presentations were beginning to pro-
pose that global temperatures would rise, and polar icecaps melt, as a result
of human carbon dioxide production. Improvements in spectroscopy and
computer modelling allowed development of Arrhenius’ original theory and
more precise relationships calculated between carbon dioxide levels and
temperature.
From the mid-1960s through the 1970s, popular opinion was reversed for a
while. A survey of temperatures from weather stations indicated an increase
from 1880 to 1940 but then a decrease after that point. At the time, there
was parallel concern about increasing use of (coolant) aerosols, as well as a
theory that the world was heading for a new ice age, based on previously
observed recurring patterns. The majority of scientific opinion was still in
favour of warming rather than cooling, but this was obscured at times in
popular reporting.⁶ By the early 1980s, levels of aerosol pollution were begin-
ning to drop—there has even been some suggestion that these might have
masked the warming caused by greenhouse gases for a while. Also, there was
increasing acceptance that the survey results on post-1940 cooling had been
overly influenced by a few severe winters in northern hemisphere countries
and weren’t adequately reflecting global temperatures. Evidence from ice core
studies, such as the one carried out at the Antarctic Vostok Station (Chapter 7)
(Figure 7.3), provided better long-term measures of temperature, as well as
of carbon dioxide levels. Methane also joined carbon dioxide as a gas of con-
cern in the 1980s and the Intergovernmental Panel on Climate Change was
set up in 1988 to take the process forward.
Broadly speaking, therefore, the process has been one of growing consensus
on climate change, although clearly the formidable challenges in the source
research (geology, palaeontology, meteorology, chemistry) have sometimes
impeded this process, or given rise to distractions and variability in opin-
ions. This is hardly surprising—researchers investigating the health hazards
of tobacco smoking similarly had to deal with the fact that many of those
affected by its cancer and heart disease outcomes were living in cities with
high levels of domestic and industrial smoke pollution, eating high choles-
terol diets, and experiencing levels of work stress and malnutrition that
⁶ I certainly remember hearing more about cooling than warming when I was growing up in the 1970s.
are much less common today; at the time, living beyond the age of 70
was considered quite an achievement. Finally, both fields of research issued
recommendations that impinged on personal lifestyles (reduced carbon foot-
print and smoking cessation) and required governments to do things they
would rather avoid if at all possible. Governments are persuadable but tend
to have a high level of inertia, and this inertia will be compounded by any
hint of a lack of ‘expert’ consensus. All government ministers will have had
their fingers burned at some point because of following the wishes of one
well-meaning pressure group, only to come under fire from another group
wanting the opposite—so they’re understandably cautious.
And this is just at the altruistic end of human behaviour. It assumes that
the research community ultimately wants what’s best, but just needs to make
sure that it’s acting on a genuine consensus (rather than one group of the com-
munity simply shouting louder than the others). However, climate change is
clearly a lot more complicated, and is unsavoury as a topic because implica-
tions of the theory collide head-on with the vested interests of certain sectors
that have large amounts of money to spend on protecting these interests. Fur-
thermore, unfortunately, it takes a lot less effort to undermine consensus than
to build and maintain it. This is, of course, not new. Any researcher work-
ing on climate change needs only to glance at healthcare research in the late
twentieth century to see similar examples of overt or covert financial support
for activities to undermine consensus. The tobacco industry was the first to
come under fire for this in the days when smoking-related harms were still
under debate. More recently, certain sectors of the food and drink indus-
try have also been scrutinized as public health attention has swung in their
direction. This has been accompanied by concerns about the sometimes less-
than-satisfactory vigilance of the pharmaceutical industry to adverse events
and problematic off-label use, and then you can extend it (if you want) to
ongoing debates around the mental health impacts of the dieting industry,
the gambling industry, the social media industry, and so on. The Christian
Church may come in for justified criticism over its inhibition of research
at times, but I can’t think of any action of the Church that has brought as
much coordinated power and money to bear as some commercial sectors on
unwanted research findings.
I don’t feel that there’s any need to name names for the players involved in
climate change research—those parties interested in disrupting a consensus
that threatens their interests know very well who they are. And, if it hasn’t
started already, the number of sectors with vested interests under threat is
likely to multiply, as what is being asked of governments and populations
moves beyond energy use to dietary choices and farming practices. Of course,
now that we’re in the lovely twenty-first century, the debate has become as
polarized and acrimonious as ever, magnified by social media’s echo cham-
bers. One side categorizes the other position as ‘climate change denial’; those
on the other side don’t like this because it sounds too much like ‘Holocaust
denial’—they’d prefer the term ‘climate change sceptics’ because it sounds
more like traditional scientific scepticism. The first side feel that this is inap-
propriate when pronouncements equate to the denial of established scientific
facts, and then of course there’s the question of where the dirty money’s
coming from to foment the argument in the first place.
I’m skating on thin ice here—even an attempt to describe two sides of a debate
is to imply that there is some genuine debate and then to fall into the trap of
being seen to belong to one of the parties involved (and the one to which I
would really rather not give any time). I guess it’s just an illustration of what
consensus can mean nowadays, when the financial and societal stakes are
so high. Galileo and Darwin were no strangers to controversy, but I’m sure
they never dreamed it could become as poisonous. Historically, the process of
consensus has broadly reflected how many researchers and other interested
parties there were at the time. From 1500 to 1800 there weren’t that many
people with any influence, so achieving consensus just involved being a lit-
tle politically assertive and ideally publishing a well-written book that caught
the interest of a small, literate public (as Hooke, Newton, Laplace, and oth-
ers managed to do). By 1900, academia had become more of a profession
and the numbers of active researchers had expanded considerably. Unfortu-
nately, this meant that the community had become more conservative, and
it was more difficult for new ideas to filter through, particularly those that
really were new and required a change in direction. It also probably meant
that you were a lot less likely to exert any influence if you were outside the uni-
versity system. Nowadays, it’s reasonable to feel unsure about our position,
and to wonder where we’ll go next. Researchers work in industry nowadays
as much as in universities, and perhaps increasing numbers are working in
their own homes with no affiliation at all, connected as communities online,
rather than as academic departments. Publication submissions are spiralling
beyond what’s possible to peer review, fuelled (at least in my sector) by an
equivalently spiralling number of journals touting for business, particularly
now that the profits are shifting towards authors paying for publication rather
than relying on sales to university libraries. Whatever the world is, it’s very
different from what it used to be.
As mentioned, consensus can be quite an aggressive process at times. While
there may be very good justification for this, it doesn’t feel quite right. Leav-
ing climate change well alone and switching back to the less-charged territory
of health research, if I were to carry out a robust study and find a disease
where tobacco smoking actually had some benefits,⁷ I might find myself in
difficulties. For a start, I might feel reluctant to publish in the first place,
because of the worry about giving the wrong public health message after all
the years spent by my colleagues and forebears achieving a consensus that
smoking is universally bad. Then, even if I did attempt to publish, I would
expect a fairly hard time at the hands of journal editors and reviewers, who
probably wouldn’t appreciate the message and the implied challenge to a
hard-won consensus. Or perhaps not—perhaps people are no longer so con-
cerned about a massive tobacco lobby campaign to persuade everyone again
to buy cigarettes. In fact, from what I remember, one of the last pro-tobacco
stands in the health debate was the possibility that it might be protective
against Alzheimer’s disease (which was based on a reasonable hypothesis to
do with stimulation of nicotine receptors boosting cognitive function). I’m
not sure how much industry funding went into it; my recollection is that it
turned out to come from studies that hadn’t been particularly well designed,
and the hypothesis faded without a fight.⁸ Anyway, hopefully you can see the
point—when it starts becoming too assertive, consensus risks drifting away
from the dispassionate position we’re supposed to occupy as researchers, and
I think it can sometimes be taken too far, no matter how noble the reasons.
We’ll revisit this when we think about lessons learned from the COVID-19
pandemic (Chapter 17).
So, what can we do? Does consensus necessarily run against Popper’s
principle of falsification? I think there are a few points worth making. First,
it’s fine to be sceptical but perhaps keep it to yourself unless you’ve got
some decent evidence of your own to the contrary (i.e. put up or shut up).
Second, if you really have to criticize someone else’s findings, try to suggest a
realistically better way of carrying out the research, rather than just carping
about poor study design when there wasn’t really an alternative—better still,
do it yourself. Third, try to avoid taking dodgy money—that is, funding that
might influence the way you carry out or report your research. Fourth, if
⁷ Unlikely, I know, but let’s remember that we’re trying to be open-minded like Socrates, so anything’s
possible.
⁸ Although it continues to crop up in the occasional ‘I thought smoking was protective’ comment from
members of the public, so maybe it’s still around somewhere.
you really have to take the money, be up front about it and make sure it’s
public so that others can make up their own minds about you. And try to
direct your research away from what might be influenced (i.e. don’t carry
out a study if you want the findings to turn out one way rather than another).
Finally, be polite to each other—we are supposed to be living in civilized
societies, after all.
That’s enough about academic bickering. A more fundamental problem
with consensus as an answer to the original problem (how to work with grad-
ual accumulation of knowledge) is the ‘whose consensus?’ question. I don’t
know anything about Sir Austin Bradford Hill as a person—he might have
been a very amiable and liberal-minded fellow. However, my strong suspi-
cion is that when he spoke about a verdict of causality in the 1960s, he was
thinking about a jury full of relatively old white men. The ‘scientific com-
munity’ that people have in mind when they talk about consensus nowadays
might be a little more diverse than it used to be in the mid-seventeenth cen-
tury, when the Royal Society and Académie des Sciences were founded by
Charles II and Louis XIV in London and Paris, or in the mid-eighteenth cen-
tury, when the American Philosophical Society was founded by Benjamin
Franklin in Philadelphia. Regardless, the scientific community is still elite.
Perhaps it’s right that it’s up to those actually doing the research to determine
their own consensus—it’s just that it’s a rather difficult concept to communi-
cate nowadays when the public’s patience has run a little thin with ‘experts’.
The conversation continues . . .
9
Designing Research
From Description to Theory-Building
We’ve spent quite a long time thinking about the principles underlying
research in the last few chapters, so we now move on to actually doing
research. The next few chapters consider some of the ways in which research
projects are designed and put together. Clearly what follows is inevitably an
overview. While there are things to be said that do cut across many research
fields, there is also a wealth of material out there that’s more tailored to indi-
vidual disciplines and you’re likely to need to read more widely for detail on
your particular field of interest.
On that subject, if you’re here as someone just starting research, I’m actu-
ally not sure how much you need to read up on research design in advance, at
least initially. It's a little like poring over the Highway Code and expecting to
be able to drive a car or ride a bike. Instead, the textbooks tend to be more use-
ful as a reference when you’re underway, just as you might leave the Highway
Code aside until you’ve at least figured out how to work the clutch and accel-
erator. Early in a research career there’s really no better learning experience
than actually carrying out a study, talking to other researchers, and getting
good supervision. If they can be taught at all, discipline-specific research
methods are best learned from a course with a strong emphasis on practi-
cal exercises. However, do try to apply what you learn as soon as possible, as
memory fades very quickly.1 The trouble is that courses and specialist text-
books don’t tend to cover the wider context for research very much, which I
guess is why I ended up writing what you’re reading now.
Getting underway
How do you start a piece of research? The first step must be determining the
reason for your study in the first place. What is it trying to achieve? What
have other researchers done in that area? How have they done it? Could it
1 Statistical methods for data analysis are particularly prone to being forgotten if you don’t apply them
quickly, although I suspect it applies more widely.
be done better? All of this goes back to Socrates’ ‘I neither know nor think
I know’ statement and the idea of blank spaces on sixteenth-century maps
(Chapter 3). Research begins with the idea that there’s something to be dis-
covered and it’s important to put a little thought into exactly what that is.
As I’ve mentioned before, ‘acknowledged ignorance’ and a genuinely objec-
tive and enquiring attitude are rare states of mind in practice—difficult to
achieve and even more difficult to sustain. It’s not going to get any easier as
you immerse yourself in your project, so this period at the beginning is the
best time to have a go, even if you only achieve enlightenment briefly over a
cup of coffee one particular morning. Where exactly are those blank spaces
on the map? Which particular shoreline are you going to explore? And what
resources are you going to need to explore it?
Again, if you’re beginning a research post at the early stages of your career,
you’re likely to be joining a study that’s already been designed. That’s fine—
you can still try to get a feel for the context. Ask your supervisor what exactly
it is that you’re trying to find out. If they’re any good as a supervisor (and if you
catch them at the right moment) they’ll get enthusiastic at this point, because
‘finding out’ is what really makes research fun and the reason why most aca-
demics put up with its more wearying aspects (who ever said that exploration
was easy?). The learning curve may be steep but do try to become familiar
with what’s out there and how others have approached similar challenges in
the past. You’re hopefully going to do something different and move the field
forward a little (otherwise why do the study at all?) but it’s still important to
understand the audience you’ll be talking to when you’ve finished everything
and are preparing your papers and presentations. Some people like Einstein
have the gifts to match their arrogance and can take a research field forward
on thinking alone with minimal reliance on the knowledge and experience
of others. However, most of us are not Einstein.
Induction or deduction?
and the natural sciences for empiricism. However, life is rarely that simple,
and you won’t be surprised to learn that most research sits in murky terri-
tory somewhere in the middle. For example, you may find that your field
is labelled as primarily empirical, and you may find yourself carrying out
individual pieces of research that involve taking observations and drawing
conclusions. However, beyond your own conclusions from your own par-
ticular findings, if you’re bringing your observations together with those
from others, and trying to think more broadly, you might well find your-
self deploying quite rationalist arguments (e.g. if X and Y seem to be true
from observation then Z ought to follow from logical deduction). This is
particularly the case if you end up carrying out any review of the literature
with a narrative component—that is, one that goes beyond simply pool-
ing observations from other studies but is actually trying to use the pooled
findings to propose new knowledge. It’s also almost inevitable that an indi-
vidual research project will require you to consider your own findings in
relation to those from past research, and you’ll then probably end up mak-
ing an argument about implications and what further research might be
most useful. These sorts of mental steps forward are going to sound more
convincing if they’re carefully considered and argued. In other words, it’s
best for them to employ logical, deductive, rationalist principles, just as
if you'd been putting mathematical constructs together to prove Fermat's Last Theorem.
So, although most of the next few chapters on methods focus on the empir-
ical end of research, the rationalist approach can’t be ignored entirely. The
two strands become particularly entangled in humanities. If, for example, you
were wanting to research the author Charles Dickens and the rather odd and
unsatisfactory way he tended to portray young women in his novels, your
research is obviously going to build on ‘observation’ (i.e. a close reading of
the novels and the characters in question, assembling material for case exam-
ples). Angelic, dutiful, rather bland, disturbingly childlike, and profoundly
unrealistic female characters are not difficult to find. You could begin with
obvious ones like Florence Dombey (Figure 9.1) (Dombey and Son), Ada
Clare (Bleak House), Amy Dorrit (Little Dorrit), and Lucie Manette (A Tale of
Two Cities), and then move on to consider earlier prototypes like Little Nell
(The Old Curiosity Shop) or Kate Nickleby (Nicholas Nickleby), or less typical
examples like Lizzie Hexam (Our Mutual Friend), Dora Spenlow (David Cop-
perfield), and Esther Summerson (Bleak House).2 However, although drawing
2 Esther Summerson, as a first-person narrator in Bleak House, is at least given a voice, albeit to rather
unsatisfactory effect. Incidentally, I’m referring here to the characters portrayed by Dickens in his nov-
els. Adaptations for film and television nowadays try their best to reinstate some agency in his heroines,
although clearly sometimes it’s a bit of a struggle.
Fig. 9.1 An illustration from Charles Dickens’ Dombey and Son. Florence Dombey (pic-
tured on the right) is one of Dickens’ commonly occurring two-dimensional angelic
young heroines, who pose a bit of a challenge for modern readers and adaptations.
So, one hypothesis might be that Dickens simply couldn’t write young female charac-
ters. On the other hand, another hypothesis (not necessarily contradictory) might be
that these characters were primarily written as a means of showing up their inade-
quate fathers, who are the real focus (e.g. the emotionally impoverished Mr Dombey,
pictured here), and that the subject was too close to home for Dickens to portray in any
other way (he certainly steers well clear of father–son relationships). Addressing this in
research is likely to require a mixture of empirical observation (systematically assem-
bled source materials), inductive reasoning from these observations, and rationalist
deduction (arguments developed from first principles).
Illustration by Hablot Knight Browne, from Dickens, C. (1848). Dombey and Son. Public domain,
via Wikimedia Commons. https://upload.wikimedia.org/wikipedia/commons/f/f9/P_595--dombey_
and_son.jpg
Description or hypothesis-testing?
In many fields, descriptions are repeatedly being taken of the natural world—in botany, zoology, geology, meteorology, astronomy, and others. All of these are simply
measuring how things are at a given time (or changes over time, or varia-
tions between locations/regions). The knowledge is useful in itself, and there
doesn’t need to be a theory or hypothesis that’s under evaluation. However,
it’s helpful to be clear about this when you’re embarking on such a study.
The next step along the path (if you can view it that way) takes us to studies
that involve observation but are beginning to think about cause and effect
(i.e. moving on from just describing what’s there to understanding why it’s
there or what it’s doing). In my field of epidemiology, we talk about ‘descrip-
tive’ vs. ‘analytic’ studies to draw this distinction, but the boundaries can be
quite blurred. For example, John Snow set out to describe the distribution
of cholera cases in the 1854 Soho outbreak (Chapter 1), but he was using
this description to test whether it fitted an air-borne or water-borne model of
infection (and he had ideas about this from earlier studies suggesting that the
miasma theory might need to be refined). Similarly, the ice cores drilled
at the Antarctic Vostok Station have been used to describe temperatures dur-
ing and between the last few ice ages; however, they have also been used to
estimate methane and carbon dioxide levels, among other measurements,
and the interrelationships between proposed greenhouse gases and global
temperatures are obviously key to testing climate change hypotheses.
Research in the social sciences, in particular, involves both describing and seek-
ing to explain a situation. The studies involved often carry out what is called
qualitative analysis (i.e. using recorded information from open-ended inter-
views with relevant people or groups to see what themes emerge from the
responses and describing that narrative3). A commonly used framework for
qualitative research is ‘grounded theory’, which was originally developed in
the 1960s by the US sociologists Barney Glaser and Anselm Strauss. In many
respects, the approach is pure description. If you carry out research in this
way, you are encouraged to keep yourself deliberately free of preconceptions
in order to improve the quality of the information you obtain. You will def-
initely need a supervision process that challenges you to stay objective and
avoid introducing personal bias, and the general rule is not to review any
background literature in advance of the study for the same reason. The idea
3 As distinct from ‘quantitative’ analysis that might come from surveys administering pre-prepared
questionnaires with checkboxes, allowing results to be analysed numerically and via statistical methods.
⁴ That is, if you’re desperate enough to find something, you’ll probably end up finding it, but that doesn’t
make it true.
⁵ Not a problem—as mentioned before, it might be messy to have differing views and a bit of debate,
but you should also be cautious about too close or cosy a consensus.
Fig. 9.2 The Hittite Bronze Age civilization was a classic ‘lost empire’. Until the late
nineteenth century, all that was known about them came from a few references in the
Old Testament, and some Egyptian mentions, like the picture here of a Hittite chariot.
Research in this case follows an approach more akin to scientific theory: initial observa-
tions (similarities between as-yet-unreadable hieroglyphs across a wide area of central
Turkey), inductive reasoning and hypothesis generation (that there had been a size-
able kingdom there, as yet unknown), followed by further enquiry and support for the
hypothesis through new archaeological findings. The result was a transformation in our
understanding of that place and time.
Drawing from an Egyptian relief from Volz, P. (1914). Die biblischen Altertümer, p. 514. copied from
de:Hethitischer Streitwagen.jpg. Public domain, via Wikimedia Commons
Archibald Sayce noticed similarities between hieroglyphs across Turkey and proposed in 1888 that they might indicate the
presence of a hitherto-unknown empire. This wasn’t viewed with favour at
the time because he couldn’t find a way to decipher the language. However, it
led to interest in excavations in that region and around 20 years later, archae-
ologists Hugo Winckler and Theodore Makridi discovered a stockpile of clay
tablets in the Hittite language around Boğazkale, central Turkey, which were
enough to allow a translation process to begin. This, and further excava-
tions, established that this site was indeed the capital of a major empire in
the fourteenth and thirteenth centuries bce, supporting Sayce’s hypothesis.
Another way in which history moved towards putting ideas under pressure
from the twentieth century onwards was through taking known evidence and
rereading it with a fresh perspective. This was a particular concern for W.
E. B. Du Bois (Figure 9.3), a sociologist and civil rights activist as well as a
historian, and, in 1895, the first African-American to earn a doctorate at Har-
vard. Later, in 1935, while he was working as a professor at Atlanta University,
Du Bois published what was perhaps his most important work, Black Recon-
struction in America. This took on a widely held account of the 1865–1877
Reconstruction period after the American Civil War that portrayed the uni-
versal Black voting rights of the time as a failure, collapsing into chaos until the South was ‘saved’ by White southern Democrats.⁶ Clearly this accepted his-
tory underpinned a great deal of inequality and frank racism, mixed together
with a bit of Social Darwinist pseudo-science to suggest that Black people
couldn’t/shouldn’t provide political leadership. Du Bois, in response, went
back to the source material and retold the story, refuting the idea that the
post-emancipation South had descended into chaos, citing examples of pub-
lic health, education, and welfare measures, as well as the continuation of
many Reconstruction policies by Democrats for a long time afterwards.
This different narrative took a while to take hold nationally (the original
‘Dunning School⁷’ view was still taught into the 1960s) but became in time a
key element of the Civil Rights movement.
⁶ Interestingly portrayed as a battle against the reforming liberal Republicans in those rather different
political days.
⁷ William Archibald Dunning, working at Columbia University, New York, in the late 19th and early
20th Centuries, was by all accounts an influential and supportive teacher who was much respected by his
students, many of whom went on to become leading scholars themselves in Southern US universities,
hence the ‘Dunning School’. The trouble was that the theories he so supportively propagated were very
weighted towards a narrative of Southern Confederates and plantation owners being oppressed by Northern Republicans and freed slaves (a view of US Civil War history that was further popularised in the 1930s by the novel and film of Gone with the Wind). I guess this illustrates the potential downside of effective academic mentoring.
Fig. 9.3 W. E. B. Du Bois, sociologist, historian, and civil rights activist, particularly known for the challenges he raised against accepted narratives of the time about African-American contributions to the Reconstruction era after the American Civil War. The competing hypotheses can be said to be derived from inductive reasoning based empirically on source observations. The prevailing theory was that Reconstruction had been a failure, leading to an inference that African-American people were not fit to hold power. Du Bois’ contribution involved a reappraisal of the source historical records, demonstrating that the pre-existing observations were unsatisfactory (they were selective and biased) and that the conclusions drawn from them were therefore unsound. Research in history can be as much about putting ideas under pressure as it is in the sciences, although the challenge of adjudicating between competing theories (e.g. according to whose consensus?) is likewise fraught with difficulties.
Photograph of W. E. B. DuBois by James E. Purdy (1907). National Portrait Gallery. Public domain, via Wikimedia Commons. https://npg.si.edu/object/npg_NPG.80.25
There are, sadly, groups and individuals who, for reasons best known to themselves, attempt to under-
mine well-attested atrocities and claim that this is just academic scepticism,
wilfully ignoring (at best) the way this is used to fuel racism and its atten-
dant intimidation and threat. That’s the unsavoury end of things but easier
to counter because it’s clearly visible. In more balanced debates about con-
flicting theories (i.e. each with their own proponents and each with bodies
of evidence to back them up), you would be forgiven for wondering where
‘academic consensus’ ends and where ‘the fashionable position’ begins. Per-
haps there needs to be a little more acceptance that conflicting theories can
coexist, without getting angry about it, just as light can be usefully described
as behaving like waves in some respects and like particles in others.
In summary, many fields of research begin with description—the colla-
tion and organization of relevant information (Chapter 4). Next, there are the
theories built from description (Chapter 4). And then there may be compet-
ing theories built from more description that require some process to decide
which theory to hold. In most academic fields the decision between theo-
ries relies on consensus and a rather nebulous process of acceptance by some
community of experts and fellow researchers (Chapter 8). Sometimes this
consensus process is civilized, and sometimes it’s acrimonious; sometimes
logic and rationalism can hold sway if the equations or theories are water-
tight; sometimes there isn’t that luxury and it’s as easy to argue one way as
another; sometimes the consensus is plain wrong—for example, just follow-
ing a fashionable way of thinking, or swayed by an individual or group with
the loudest voice or the strongest political influence.
What can you do about all of this when you design your studies? Well,
smoothing the path to knowledge is everyone’s responsibility. At any stage
in your research career, you can try your best to maintain integrity. You will inevitably bring your own preconceptions and allegiances to a study—no point fighting this, but keep it uppermost in mind and try to stay objective and truthful, nonetheless.
10
Designing Research
Experiments
If you’re trying to work out causes and effects, there’s really nothing better than an experiment involving an intervention (and I’m going to use
evaluations of interventions and ‘experiment’ interchangeably here).1 We’ve
talked a lot so far about researchers taking observations and deriving theories
(Chapter 4), even putting those theories to the test by repeated studies with
more observations (Chapter 5). However, at the end of it, you’re left feeling
not completely sure what’s true, even with the best observational studies—
there’s still a lingering uncertainty whether the conclusions have been drawn
correctly, or whether there might be alternative explanations aside from the
theory you’re proposing or testing. On the other hand, if you do something,
make a change, and then see what happens, you’ve got a much better chance
of working out a cause and effect. It goes back to Chapter 2 and the little child
playing with a light switch. The child might notice adults using the switch to
turn the light on, but she’s not going to be convinced about the connection
between the cause and consequence until she’s had a few tries herself (i.e. has
moved from observation to experiment). A successfully run experiment is the
closest you can get to actual proof, and for some questions it may only take a
single experiment by a single researcher to make the difference (although it
helps if others can repeat it to make sure).
As mentioned, not all research fields have recourse to experiments, and
we consider some of the alternative approaches in Chapter 11. However,
the message remains that if you can test something experimentally then you
should try to do so. The trick in designing an experiment is being sure at
the end of it all that you actually did what you were trying to do. You’re
starting with a situation, changing it in some way, and seeing what happens.
And you’re hoping that whatever you observe at the end is a consequence
of your intervention and there aren’t any alternative explanations. So, you
1 Some people have a broad view of ‘experiment’ as encompassing any new study, including observa-
tional research, that’s set up to test a hypothesis; however, I think that most would restrict the term to
studies involving an intervention of some sort, so that’s what I’m implying here.
need to be sure what the conditions really were at the start, you need to
know what exactly was introduced (i.e. actually changed) in your interven-
tion, and then you need to know what exactly happened as a consequence.
The design of an experiment is partly a matter of keeping the conditions
under tight control—so that you know it was your intervention that changed
things and not some contaminating factor—and it is partly a matter of accu-
rate measurement—making sure you can identify what happened as a result
of the intervention.
Experiments have, of course, been carried out for as long as we’ve been
a species and go back even further to learning in animals. Conditioning
behaviour (e.g. Pavlov’s dog salivating to the sound of the bell because it
expects the food when the bell goes off) is essentially all about learning by the
repeated pairing of cause and consequence. However, the pairing of cause and
consequence won’t occur unless the animal has done something new in the
first place (i.e. has carried out an experiment, of sorts). This in turn requires
the attribute of curiosity, and it’s reasonable to argue that Homo sapiens has
come to its current level of dominance as a species through a particularly high
level of curiosity,2 accompanied by conceptual thinking to accelerate learn-
ing, and by the communication skills to bypass the need for repetition of the
same learning with every generation (and I suppose opposable thumbs have
helped as well).
2 We certainly seem to be singularly unable to stop finding things out, even when the consequences are
potentially species-threatening.
Robert Boyle, often described as the father of modern chemistry, was one of the pioneers in the field and laid a lot of the groundwork with his painstaking experiments—showing, for example, the inverse relationship between the pressure applied to a fixed quantity of air and the volume it occupies when compressed or expanded (the law that bears his name), as well as demonstrating
that life and combustion depended on the presence of air (or whatever it was
that a vacuum pump was removing).
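To put the relationship in modern notation (not, of course, how Boyle himself expressed it), for a fixed quantity of air at constant temperature the pressure and volume are inversely proportional:

$P_1 V_1 = P_2 V_2$

so, to take an illustrative figure rather than one of Boyle’s own, compressing a sample of air from 2 litres to 1 litre doubles its pressure (100 kPa × 2 L = 200 kPa × 1 L).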
Boyle was also able to show the relationship between air pressure and the
boiling point of water, providing the experimental evidence to explain why
this changes at the top of a mountain. I’m not aware that there was a relevant
body of observation in advance of Boyle’s findings, but you can imagine that
simply showing that water has a different boiling point at the top of a moun-
tain than at sea level doesn’t provide any single underlying theory—because
it’s just an observation combined with speculation, however inductive the
reasoning. On the other hand, clearly there’s the opportunity to carry out
an experiment here—you could be heating water in an identical location,
applying different air pressures, and seeing what happens to its boiling point,
knowing that nothing else has changed. The difficulty lies in how to design
that experiment practically with limited seventeenth-century equipment. The
water needs to be heated in a sealed container, because you need to be able
to vary the air pressure, but you also need to be able to measure the tem-
perature of the water at boiling point, as well as the actual pressure of the
air in the container. So, Boyle’s experiment involved creating a quite compli-
cated apparatus incorporating a mercury barometer and thermometer inside
a sealed glass vessel—a sort of ship-in-a-bottle construction (Figure 10.1).
This is indicative of the experiments of the time, which were as much about
having someone good and inventive to manufacture your glassware and other
equipment as they were about the study design. Boyle was able to put together his
own team of assistants (including a young Robert Hooke) and essentially con-
struct his own research institute, thanks to the income from his late father’s
extensive estates in Ireland.3 Experiments do tend to need funding.
As one of the first members of the British Royal Society in 1662, Boyle was
at the heart of the early revolution in Western European thinking that laid the
foundations for modern science. He was an avid admirer of Bacon’s writing,
and Boyle’s 1661 masterwork The Sceptical Chymist became widely credited
as beginning the slow shift from alchemy and mysticism to chemistry and
Enlightenment thinking. The opening sentences of his first chapter are worth
a quote. Although he was demolishing the ‘earth, air, fire, and water’ theory of matter popularized by Aristotle, you can almost sense Socrates nodding away in the background:
I perceive that divers of my Friends have thought it very strange to hear me speak so irresolvedly, as I have been wont to do, concerning those things which some take to be the Elements, and others to be the Principles of all mixt Bodies. But I blush not to acknowledge that I much lesse scruple to confess that I Doubt, when I do so, then to profess that I Know what I do not.
3 Boyle therefore belonged to the hated class of absentee landlords in Ireland, although he was personally benevolent and philanthropic as far as we can tell.
Fig. 10.1 Early science is as much about its equipment as its practitioners. Robert Boyle was interested in things that were taken for granted like air, atmospheric pressure, and the boiling point of water. In order to be able to investigate these, he needed an apparatus that would allow temperature and atmospheric pressure to be varied, hence his air pump pictured here. In those early days, nothing came very close to achieving a complete vacuum—a ‘low air pressure chamber’ would be a better description of his apparatus, but you can see the beginnings of an environment being assembled to allow experimentation, which depends on being able to impose the cause, measure the effect, and keep everything else as constant and unaltered as possible.
Drawing of Robert Boyle’s air pump. From Boyle, R. (1661). New experiments physico-mechanical: touching the spring of the air and their effects. Public domain, via Wikimedia Commons. https://upload.wikimedia.org/wikipedia/commons/3/31/Boyle_air_pump.jpg
Fig. 10.2 By the time Joseph Priestley was carrying out his investigations in the 1770s,
the quality of apparatus had advanced considerably to the extent that different gases
could be isolated and their effects on combustion and life investigated (hence the two
unfortunate mice in the foreground). Priestley was able to obtain the necessary envi-
ronment for his work on oxygen through aristocratic sponsorship following his earlier
discovery of carbon dioxide and invention of soda water. Unfortunately, he fell out with
his sponsor for reasons unknown and his later life, although productive, was dominated
by a disappointing scientific conservatism, unable to accept much of the subsequent
progress in his field.
Drawing of Joseph Priestley’s equipment. From Priestley, J. (1775). Experiments and observations on
different kinds of air. Public domain, via Wikimedia Commons. https://commons.wikimedia.org/w/
index.php?curid=7123539
Henry Cavendish likewise funded his research from a considerable personal fortune.⁴ Unlike Boyle, he was a shy and retiring individual who
was clearly more enthused by the process of discovery than by telling the
world about it; consequently, although he did publish occasionally and was
widely respected, a lot of his discoveries weren’t appreciated until much later.
One of these arose from experiments combining the gases that we now know
to be hydrogen and oxygen to make water, using a technique and appara-
tus developed by Alessandro Volta (the Italian physicist and chemist who
also invented the electric battery and founded the field of electrochemistry).
This involved a sealed copper or glass vessel into which mixtures of gases
could be introduced and then ignited with an electric spark—therefore ensur-
ing that it could be weighed before and afterwards with nothing escaping
and nothing introduced (as might have happened if a flame had been used).
By carrying this out repeatedly with different ratios of hydrogen and air,
and measuring the pure water that resulted, Cavendish was able to demon-
strate the H2O formula (two atoms of hydrogen combined with one atom of
oxygen). Others were engaged in similar activity at around that time (notably
James Watt, famous for his improvements to the steam engine) and tending to get into print
more rapidly, but Cavendish’s prominence in the discovery became generally
acknowledged.
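In modern notation (a convenience Cavendish did not have), the reaction he was producing with his electric spark and weighing so carefully is

$2\,\mathrm{H_2} + \mathrm{O_2} \rightarrow 2\,\mathrm{H_2O}$

which is why roughly two volumes of hydrogen were consumed for every one volume of oxygen, and why nothing but water was left behind.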
To complete the story of the beginnings of modern chemistry, one of the
scientists simultaneously publishing with Cavendish (and not always giving
him the credit he deserved) was Antoine-Laurent Lavoisier (Figure 10.3)
working at the Académie des Sciences in Paris. Lavoisier was particularly
interested in heat and one of his experiments showed that the volume of
ice that melted as a result of the body heat of a guinea pig was equivalent
to the ice melted by burning enough charcoal to make the same amount of
carbon dioxide. This demonstrated that animal respiration could be consid-
ered as a form of combustion, combining carbon and oxygen to produce
energy (heat) and carbon dioxide. Lavoisier coined the names of oxygen
and hydrogen (among other substances), developing the language of chem-
istry in much the same way as Carl Linnaeus (Chapter 4) had defined the
language of biology, moving it finally away from its origins in alchemy⁵
and setting it up for Mendeleyev’s Periodic Table around 70 years later
(Chapter 3).
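Stated in modern terms (and simplifying the biochemistry considerably), the point of the comparison is that the burning charcoal and the respiring guinea pig are both carrying out the same overall oxidation,

$\mathrm{C} + \mathrm{O_2} \rightarrow \mathrm{CO_2} + \text{heat}$

so producing the same amount of carbon dioxide implies releasing a comparable amount of heat—hence the matching volumes of melted ice.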
It’s important to note that a significant component of Lavoisier’s achieve-
ments was made possible by his wife, Marie-Anne, who worked with him
in his laboratory and made valuable translations for him of works by his
contemporaries Priestley and Cavendish; she may also have contributed what would now be considered co-authorship (although uncredited at the time) on his widely read 1789 Traité Élémentaire de Chimie (Elementary Treatise on Chemistry). Unfortunately for Antoine-Laurent, although his family fortune helped his academic career, it drew him into unwise political connections, so that he fell afoul of the French Revolution and was guillotined in 1794.⁶ However, Marie-Anne continued his legacy to the best of her ability, publishing his memoirs and, once the worst of the troubles were over, ensuring that his confiscated notebooks and apparatus were recovered for posterity.
⁴ Cavendish was described at the time as ‘the richest of the wise, and the wisest of the rich’.
⁵ As well as disproving phlogiston theory—a seventeenth-century idea that there was a fire-like element released during combustion, accounting for the visible changes in burned materials. This fell into disuse once oxidation was understood.
⁶ As we’ve seen (Chapter 5), unlike his colleague and sometime collaborator Pierre-Simon Laplace, who more prudently kept out of Paris when times were tricky.
Fig. 10.3 Sometimes the apparatus for experiments simply has to be big and expensive. Antoine Lavoisier (pictured in goggles) took Priestley’s discoveries forward but needed to produce heat and combustion in his sealed containers without introducing extra material so that he could demonstrate the conservation of mass. The apparatus here uses solar energy for this purpose. Lavoisier, like Priestley, found himself on the wrong side of politics in his home country for reasons that were nothing to do with his research. Priestley escaped into exile whereas Lavoisier was executed.
Antoine Lavoisier with his solar furnace. From Lavoisier, A. L. (1862). Oeuvres de Lavoisier. Publiées par les soins de son excellence le minister de l’instruction publique et des cultes. Paris, Imprimerie impériale. Science History Institute. Public Domain. https://commons.wikimedia.org/w/index.php?curid=64707215
Experiments in vivo
Perhaps the most important questions facing modern medicine are around
treatments and whether they work or not. Consequently, a sizeable clinical
trials industry has built up, along with a broad consensus on standards for
study design and conduct. The issue here, again, is one of cause and effect.
If a potential new drug treatment has been developed and been found to be
safe in early-stage evaluations then we need to be absolutely sure whether it
works (and how well it works) before it starts getting prescribed and has to
be paid for (whether via taxation, health insurance, or personally).
The observational approach to this question would be to start the new drug
and compare people who happen to be receiving it or not, following them up
to see who gets better and looking at the difference between the two groups.
However, the problem is that if you did find a difference (e.g. that people
receiving the new drug were more likely to get better), you would still be left
unsure whether it was really the effect of the drug. There might be all sorts
of other differences between people in the comparison groups—for example,
perhaps physicians are more likely to give the new drug to people with milder
illness or some other reason that gives them a better outcome. Even if you
‘adjusted’ for differences between the groups, you could only adjust for the
things you’d measured and there might be other unmeasured factors at play.
Finally, it could be quite an expensive undertaking to do all of this follow-up,
so you would end up spending a lot of money and still not be sure whether
you’d answered the question. Hence the development and routine use of the
randomized controlled trial (RCT), which still tends to be expensive, but at
least gives a more accurate answer.
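If you want to convince yourself of this, a small simulation is enough. The sketch below uses entirely made-up numbers and variable names of my own choosing (nothing to do with any real trial): the drug has no effect at all, but because doctors preferentially give it to people with milder illness, the naive observational comparison makes it look beneficial.

```python
import random

random.seed(42)

def simulate_patient():
    """One hypothetical patient: illness severity influences both treatment and recovery."""
    mild = random.random() < 0.5                         # half the patients are mildly ill
    treated = random.random() < (0.8 if mild else 0.2)   # the new drug is favoured for mild cases
    recovery_prob = 0.7 if mild else 0.3                 # mild cases recover more often anyway
    recovered = random.random() < recovery_prob          # the drug itself does nothing
    return treated, recovered

results = [simulate_patient() for _ in range(100_000)]

def recovery_rate(received_drug):
    group = [recovered for treated, recovered in results if treated == received_drug]
    return sum(group) / len(group)

print(f"Recovery rate on the new drug:  {recovery_rate(True):.2f}")   # roughly 0.62
print(f"Recovery rate without the drug: {recovery_rate(False):.2f}")  # roughly 0.38
# The apparent benefit is entirely due to severity (a confounder) differing between
# the groups; randomly allocating the drug instead would make the two rates converge.
```

Adjusting for severity would rescue this particular example, but only because severity was measured; the attraction of randomization is that it also deals with the factors you haven’t measured.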
Part of the advantage of the RCT is simply that it’s an intervention—that
is, you’re not just observing people who happen to be taking one treatment
or another. Instead, you’re actually assigning the treatment groups. Interven-
tion studies go back quite a long way in medicine. For example, as discussed
in Chapter 5, having formed the theory that cowpox might be protective
against smallpox, Edward Jenner went straight on to an intervention study—
inoculating his gardener’s eight-year-old son and exposing him to sources of
smallpox infection in order to demonstrate the effect (hardly ethical, but it
was the late eighteenth century, and William Wilberforce was still having to
argue that the British slave trade wasn’t a good thing). Perhaps more ethi-
cal, but still unwise, interventions were going on in chemistry around that
time, with researchers freely exposing themselves to new substances to see
what happened. Humphry Davy, for example, was a prolific discoverer of
new elements and compounds, including sodium, potassium, and calcium,
going on to give rather flamboyant public displays at the newly established
Royal Institution in London. He also took an interest in nitrous oxide (laugh-
ing gas) which had been synthesized in 1772 by Priestley. Davy inhaled it
copiously as a research subject, making notes and trying it out for different
uses (including as a cure for hangovers) before considering it as a potential
anaesthetic. He less sensibly inhaled nitric oxide and carbon monoxide to
see what would happen and was lucky to survive each of those experiments.⁷
These are all intervention studies of sorts, although before–after comparisons
can’t really be said to advance the methodology from what had been carried
out in medicine since ancient times. After all, presumably traditional herbal
remedies, leeches, and other old treatments had been adopted along the way
because someone had tried them out and felt that there was a benefit.
Probably the first controlled trial in medical research was carried out by James
Lind in 1747. Scurvy, now known to be caused by a deficiency of vitamin C
⁷ He described himself ‘sinking into annihilation’ following the carbon monoxide inhalation and had
enough presence of mind to take his pulse and describe its quality (fast and ‘threadlike’) in his laboratory
notes.
in the diet, was a major problem for the expansion of sea travel at that time.
For example, in 1740–1744 a squadron of eight ships were sent off to sail
around the world as part of a conflict between Britain and Spain. It resulted
in a popular written account of the voyage and all the various adventures
along the way, but one of the stark facts was that only 188 men survived from
the original 1,854 who set sail—most of the losses being due to suspected
scurvy. Citrus fruit in the diet had been suggested as a possible means of pre-
venting the disease but this was not widespread practice. Lind was a ship’s
surgeon on board HMS Salisbury while it was patrolling the Bay of Biscay
and decided to test the intervention by taking sailors suffering from scurvy
and assigning groups of two each to receive one of six dietary supplements—
cider, vitriol, vinegar, seawater, barley water, and citrus (two oranges and a
lemon)—all added to an otherwise identical naval diet. The two sailors who
received citrus had nearly recovered within six days (when the supply of fruit
ran out) and, of the other groups, only the sailors who received cider showed
any improvement at all. Lind published his findings in 1753 but the medical
establishment was conservative in those days, and his discovery took a while
to become established. Eventually, a successful 23-week voyage to India in
1794 took place with lemon juice issued on board and no scurvy outbreak
reported; this created enough publicity to change practice. Vitamins, inci-
dentally, weren’t proposed as entities until 1912, and vitamin C was isolated
from 1928 to 1932—named ‘ascorbic acid’ after the Latin word (scorbuticus)
for scurvy.⁸
The ‘controlled’ part of the clinical trial is a necessity because the inter-
vention is being administered in the natural world, outside a laboratory
environment. If you’re dealing with chemicals, biological samples, or sub-
atomic particles, you can design an environment where nothing happens to
them, apart from your intervention—by doing so, you know that the out-
come is definitely a result of the intervention. When you’re intervening in
living creatures, or in any less predictable system, you can’t impose that sort
of experimental environment. A simple comparison of the situation before
and after an intervention isn’t enough—a person might have got better any-
way without your medicine. Therefore, in order to draw conclusions, you have
to compare groups receiving an intervention with those who aren’t. Further-
more, you need groups, rather than individuals, because interventions in the
natural world are often shifting the probability of an outcome, rather than
determining it absolutely. Jenner was lucky that his smallpox vaccine was
⁸ Two groups were particularly involved in this discovery, and their leads won Nobel prizes: Albert
Szent-Györgyi from the University of Szeged, Hungary, and Walter Norman Haworth from the University
of Birmingham, UK.
highly effective. If the vaccine had only been 80% effective, and if his gar-
dener’s son had fallen in the wrong 20%, then Jenner would have drawn the
wrong conclusion, and an important opportunity might have been missed
(and an eight-year-old boy needlessly given a potentially fatal illness). Lind
was also lucky that vitamin C is so effective against scurvy and so specific
to citrus fruits (although it is contained in cider as well, hence the partial
improvement in the two sailors who received this).
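To put rough numbers on the point about groups (mine, purely for illustration): if a vaccine protects each recipient with probability 0.8, then a ‘trial’ on a single child gives a misleading answer one time in five, whereas the chance that, say, ten vaccinated children would all turn out to be unprotected is

$0.2^{10} \approx 1 \times 10^{-7}$

which is why conclusions are drawn from groups rather than from individuals, and why the size of those groups matters.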
Thus, having a control arm (or arms) to a trial is important, but it’s not enough
in itself, because there might still be other differences between the groups
accounting for the outcome. Lind’s groups of sailors, for example, might well
have had differences in their background health or some other factor—again,
he was lucky that he was dealing with a vitamin deficiency that is very easily
and effectively corrected and that results in a rapid and visible return to good
health. Most diseases are not so amenable. The next step forward, the RCT,
was actually pioneered by Sir Austin Bradford Hill, whose ‘causal criteria’
we came across in Chapter 7 and ‘verdict of causality’ in Chapter 8. The idea
of randomly assigning an intervention wasn’t a completely new one—it was
being used in agricultural science, notably at the Rothamsted Experimental
Station in Hertfordshire.⁹ However, Bradford Hill was the first to apply it in
medical research in an evaluation of the drug streptomycin as a treatment
for tuberculosis (TB) in 1948. It’s not entirely clear that the choice of random
treatment allocation at the time was made for the methodological reasons
nowadays acknowledged—the records are a little unclear on this. The ran-
domization might have been adopted partly because of a limited supply of
the drug and a desire to ensure that people with TB had a fair and equitable
chance of receiving it. However, there also does appear to have been a wish to
keep the comparison groups as similar as possible. Nevertheless, the design
became popular, and it is now a requirement for any new drug to have RCT
evidence for its efficacy before it can be approved for prescription in most
medical settings.
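Mechanically, random allocation is trivial; its value lies in what it does to the unmeasured differences between people. A minimal sketch under made-up conditions (hypothetical participant IDs, not the 1948 streptomycin trial):

```python
import random

random.seed(1)  # in a real trial the allocation sequence would also be concealed

participants = [f"patient_{i:02d}" for i in range(1, 21)]  # 20 hypothetical patients
random.shuffle(participants)

half = len(participants) // 2
treatment_arm = participants[:half]
control_arm = participants[half:]

# Because the allocation takes no account of anything about the patients,
# characteristics you measured and those you didn't (age, severity, anything else)
# will be similar in the two arms on average, which is what justifies comparing
# their outcomes as cause and effect.
print(sorted(treatment_arm))
print(sorted(control_arm))
```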
⁹ Under the direction of Ronald Fisher, a pioneering and highly influential statistician and experimen-
talist who laid the foundations for many methodologies still in use. He would be more celebrated and
better known today (and get more than a footnote here) if he hadn’t also held and publicized some frankly
unsavoury views on eugenics and racial superiority. He was also an early vocal sceptic about health haz-
ards associated with smoking—this might have been simply because he liked to smoke, but it doesn’t help
his reputation that he was employed by the tobacco industry as a consultant. Even if he only opposed
the implications of research findings because he personally enjoyed the odd cigarette, it hardly suggests a
dispassionate scientific attitude open to new ideas.
In conclusion
On the other hand, you could say that experimentation (of a sort) does come
into the picture when theories derived from historical research are put into
practice, particularly when they shape government policies. Although poli-
cies can be considered as interventions, the trouble with working out cause
and consequence is that you’re dealing with a situation beforehand, a situ-
ation afterwards, something that happened in between, and a whole lot of
reasoning at best (guesswork at worst) when you try to put the connections
together. At the end of Chapter 10, we considered group-level interventions
Fig. 11.1 A rather sinister-looking Trofim Lysenko, Soviet agronomist and biologist, speaking at the Kremlin in 1935 as Stalin looks on approvingly (as far as you can tell
anything from Stalin’s expressions). Above and beyond the numerous scientists who
were sacked, imprisoned, or executed for disagreeing with him, Lysenko’s misguided (or
at least misapplied) theories about agriculture may have been responsible for around
30 million deaths in the USSR and China. Whether the fault here lies with Lysenko per-
sonally, with the politicians who accepted his theories without question, or with wider
top-down political systems that stifle debate, is up for argument. However, it reveals the
dangers of theories that are not put under academic pressure, and probably the dangers
of letting researchers anywhere near political power.
Trofim Lysenko speaking at the Kremlin (1935). From Soyfer, V. N. (2001). The consequences of political
dictatorship for Russian science. Nature Reviews Genetics 2, 723–9, cited in Peskin, A. V. (2001). Science
and political dictatorship. Nature Reviews Genetics 2: 731. Public domain, via Wikimedia Commons.
https://upload.wikimedia.org/wikipedia/commons/5/5a/Lysenko_with_Stalin.gif
3 And his theories weren’t completely without foundation—the field of epigenetics does allow for her-
itable traits to be passed on without DNA alterations; it’s just that epigenetics acts in a much more minor
way than Lysenko was proposing.
setting at a particular time. Clearly there are all sorts of challenges in drawing
conclusions. To an outsider, the field does seem weighted towards trying to
learn from mistakes—the ‘let’s definitely not try that again’ take-home mes-
sage. However, evaluations are also made of more balanced policies, such as
the Truth and Reconciliation Commission founded by Nelson Mandela in
South Africa in 1996. This might be seen as essentially a test of restorative vs.
retributive justice following regime change and a long history of violence and
oppression. The debate continues as to whether it met its goal, although it at
least might be classified in the ‘could have been a lot worse’ category. Finally,
as mentioned in Chapter 8, anthropogenic climate change might also be con-
sidered as a rather large and profound experiment currently underway, from
which we don’t yet know the answer for certain, but which we might be wise
not to continue for too much longer.
Natural experiments
Not all imposed interventions are quite so difficult to conceptualize and eval-
uate, however, and ‘natural experiment’ has been coined as a term to describe
research studies that have capitalized on events to test particular theories
where the event can be seen as a more specific intervention of sorts. A variety
of questions have been addressed in this way. For example, plenty of research
has evaluated what happens to health outcomes when a regional or local pol-
icy is introduced (for example, a smoking ban). Other studies have followed
up offspring born to mothers during a famine, comparing them to people
born before or after the famine, to investigate the effect or not of maternal
nutrition on health conditions occurring much later on in their offspring.
The Dutch famine of 1944–1945 towards the end of the Second World War
has been commonly used for this, as food availability for mothers before and
afterwards was relatively normal, so a very specific generation of offspring can
be investigated. Similarly, comparisons have been made between people born
before or after 1963, which was the year when atmospheric nuclear weapons
testing was banned and low-level exposure to radioactive isotopes changed
considerably. As a final example, a recent US study wanted to look at the long-
term influence of military service on earnings and was able to take advantage
of the fact that the Vietnam War draft lottery was randomly assigned, there-
fore creating a randomized controlled trial (RCT) of sorts, ensuring that the
comparison groups (interviewed much later) were similar. This moves us
on to thinking about the second limitation on experimentation which is the
ethical one.
⁴ Well, you can if you’re randomizing people to something that might mitigate the risk factor and pre-
vent the disease outcome, but you’re going to need strong evidence for something being a risk factor in
the first place before anyone’s going to fund an expensive prevention trial.
One of the earliest cohort studies in medicine was the British Doctors’
Study. This was set up in 1951 by Austin Bradford Hill with his junior col-
league Richard Doll and ran all the way through to 2001. The aim was
to investigate health effects of smoking, which weren’t at all clear around
the time the study began. Lung cancer rates were known to have increased
dramatically before the 1950s, as had smoking, but then so had pollution
from burning domestic and industrial fuel. Doctors were chosen as study
participants because in those days they were cooperative, easy to contact,
generally agreed to take part, and (crucially) stayed involved in the study and
didn’t drop out in large numbers. I suspect it also helped that they were a
single professional group and quite similar to each other, although smok-
ing in the 1950s was less linked to social class than it is today. The study
began with around 40,000 doctors and the main comparisons were made
between smokers and non-smokers; however, participants naturally changed
their behaviour over time and the team were therefore able to estimate the
effects of stopping smoking at different ages. A range of health outcomes
were carefully collected over the years and successive reports were published,
underpinning what is now standard, accepted knowledge about smoking and
its consequences—the overall life expectancy lost (but gained again in those
who stop smoking; the earlier, the better), and the increased risk of heart dis-
eases as well as cancers and lung diseases. The British Doctors Study wasn’t
the only investigation reporting these associations—a single study, however
well designed, would never have been enough. Similar findings were coming
out of other cohort studies, such as the Framingham Study, set up in Mas-
sachusetts in 1948 to investigate risk factors for heart attack and stroke in
the general population, and later on from the Nurses’ Health Study—a simi-
lar design to the British Doctors Study, but with wider health outcomes and
involving over 120,000 participants in 11 US states.
The design of a cohort study doesn’t need to be complicated—it’s mostly a
matter of obtaining accurate measurements of the risk factor(s) you’re inter-
ested in, of the outcomes as they occur, and of any other characteristics
you might want to consider and adjust for. The challenges mainly lie in the
logistics—a traditional cohort study might well involve quite a large sample
of people, because even the commonest disease outcomes are quite rare in
absolute numbers and you’re going to need enough cases occurring over time
in order to make an adequate comparison. You may well also have to follow
the groups up for long periods, and the biggest problem when it comes to
drawing conclusions (i.e. the whole point of the study in the first place) is if
you’ve needlessly lost a lot of people along the way (moved house, changed
phone number, no alternative contact method, or lack of interest due to time
between examinations). This ‘attrition’ limits the number of people with out-
comes that you can analyse, and it might also introduce error (‘bias’) into
your findings if it’s somehow more or less likely in people with the risk factor
and/or the outcome. In the British Doctors Study, for example, if smokers
were more likely than non-smokers to drop out of the study, and if health
outcomes were worse in the group who dropped out, then you’d be pref-
erentially losing people who smoked and had worse health outcomes and
therefore would be underestimating the association between the cause and
the effect in the people who remained in your study. Fortunately, follow-
up rates were very high in that study and there were no major concerns
about bias.
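A worked example makes the arithmetic clearer. The numbers below are invented for illustration (they are not from the British Doctors Study): the true risk ratio is 2.0, but because ill smokers are the hardest to retain, the ratio seen in those who remain is much smaller.

```python
def risk(events, total):
    return events / total

# 'True' cohort before any attrition: 1,000 smokers and 1,000 non-smokers
smokers_ill, smokers_well = 300, 700          # 30% of smokers develop the outcome
nonsmokers_ill, nonsmokers_well = 150, 850    # 15% of non-smokers do
true_rr = risk(smokers_ill, 1000) / risk(nonsmokers_ill, 1000)

# Attrition that depends on both exposure and outcome:
# ill smokers are hardest to retain (50% lost); everyone else loses 10%
kept_smokers_ill = smokers_ill * 0.5
kept_smokers_well = smokers_well * 0.9
kept_nonsmokers_ill = nonsmokers_ill * 0.9
kept_nonsmokers_well = nonsmokers_well * 0.9

observed_rr = (risk(kept_smokers_ill, kept_smokers_ill + kept_smokers_well)
               / risk(kept_nonsmokers_ill, kept_nonsmokers_ill + kept_nonsmokers_well))

print(f"Risk ratio before attrition: {true_rr:.2f}")     # 2.00
print(f"Risk ratio after attrition:  {observed_rr:.2f}")  # about 1.28
```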
Cohort studies, therefore, do tend to be expensive because of all the staff
required to keep them running efficiently, and the nightmare scenario for
anyone in charge is that you end up with massive attrition and not much
you can realistically say about outcomes. Like randomized trials, they’re def-
initely not for the faint of heart. There are also questions that can’t be feasibly
answered by this design. For example, if an outcome is rare, it may simply
not be possible to assemble a large enough cohort. Or you may be interested
in a cause–effect relationship over a long time period (e.g. the association
between someone’s circumstances at birth and their health in middle age),
but no one’s going to fund a study that doesn’t provide its answers till decades
later. The alternative design in these circumstances is to find people with the
disease outcome, find an appropriate comparison group and then look at the
differences in the risk factor of interest. If it’s a risk factor, then you expect it
to be more common in the first group than the second. This is called a case
control study.
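The arithmetic at the heart of a case control study is a comparison of exposure odds. A minimal sketch with invented numbers (not from any real study):

```python
# Hypothetical 2x2 table: exposure history among cases and controls
cases_exposed, cases_unexposed = 40, 60        # 100 people with the disease
controls_exposed, controls_unexposed = 20, 80  # 100 comparable people without it

odds_in_cases = cases_exposed / cases_unexposed           # 0.67
odds_in_controls = controls_exposed / controls_unexposed  # 0.25
odds_ratio = odds_in_cases / odds_in_controls

print(f"Odds ratio: {odds_ratio:.1f}")  # about 2.7
# Exposure is more common among the cases, which is consistent with (though not
# proof of) it being a risk factor; recall bias or a poorly chosen control group
# could produce the same pattern.
```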
In theory, a case control study ought to be a lot easier than a cohort study,
and certainly cheaper—after all, you only need to see your participants once
and collect the necessary information about their past lives and experiences
in order to compare risk factors. There’s no follow-up involved and no wor-
ries about attrition. You might even be able to perform the study yourself and
not have to employ a research team to help. However, although case control
studies are indeed generally cheaper and logistically easier than cohort stud-
ies, a lot more thought needs to go into their design to ensure that you can answer
your research question and not end up wasting everybody’s time. First of all,
you need to consider your control group: you may have assembled a group of
‘case’ participants (i.e. people with the disease or outcome you’re interested
in) but you now have to think about who to compare them with (i.e. people
without the disease or outcome but similar in other respects). This might be
straightforward but it’s often the factor limiting the design because the study
depends on there being an adequate comparison. For example, there used to
be a fashion for collecting healthy controls from university staff or students,
because they were quite easy to find and approach. However, comparing ill
people attending a hospital to staff or students is comparing two groups that
are likely to differ in many respects and give rise to all sorts of false (‘biased’)
findings.
The other challenge of a case control study is that you’re generally having to
look at the past in order to work out whether or not people have been exposed
to the risk factor of interest. This is fine if it’s something very obvious, like a
serious illness or injury, but it’s more difficult if it’s something that depends
on a participant remembering it or not. If someone is affected by an illness,
they’ve usually put quite a lot of time and effort into thinking about what
might have caused it, so ‘cases’ tend to be much more likely to remember
relatively minor things in the past that might be risk factors. For example,
if you go down with food poisoning, you’re bound to think through every-
thing you’ve eaten over the last few days in order to identify a potential cause,
whereas if you’re an unaffected ‘control’, you’ve probably forgotten even what
you ate the previous evening. It’s still trickier if you’ve got something more
severe and gradually developing, like cancer. So, it’s quite easy to end up
with observed differences between cases and controls due to this different
level of recall, and to draw incorrect conclusions as a consequence. There’s
a huge advantage if you can find an old information source (e.g. historical
medical records or some other administrative database) that tells you about
the risk factor you’re interested in, because you don’t then need to worry
about people’s memories. For example, if you were interested in pregnancy-
or birth-related risk factors for diseases occurring in adulthood (such as heart
disease or schizophrenia), you’d probably need to use cases and controls who
have old maternity or obstetric records, rather than having to ask people (or
their parents) to remember. Furthermore, you must ensure that you or your
interviewers don’t treat the cases and controls differently when you’re asking
questions about the past (e.g. trying harder to find information in cases than
controls); this will also give you false results.
Case control studies therefore do need a lot of thought and must be inter-
preted with caution because of all of these challenges. However, they may still
play an important role in health research, particularly when a quick result
is needed. For example, when the disease we now call HIV/AIDS was first
recognized in the late 1970s/early 1980s,⁵ there was a lot of initial uncer-
tainty about its cause, and it required urgent investigation. At the time,
it was known that there were growing numbers of gay men⁶ with a type of
cancer called Kaposi’s sarcoma, affecting skin and lymph nodes, and a par-
ticular type of pneumonia caused by a yeast-like fungus called Pneumocystis,
both of which were recognized to occur in people with a suppressed immune
system. On 15 May 1982, The Lancet published one of the earliest investiga-
tions of this—a case control study carried out by a team from two New York
universities.⁷
In this study, the authors recruited 20 gay men with Kaposi’s sarcoma
receiving treatment at the New York University Medical Center between
1979 and 1981. For each of these cases, they recruited two controls from a local
physician’s practice who were also gay men and were the same race and age
as the case. Quite a lot of information was taken, which was understandable
given the need to identify risk factors as quickly as possible; however, the
researchers were particularly interested in three potential causes. Was it an adverse effect of a commonly used medication (metronidazole); a toxic effect of recreational drug use (amyl or butyl nitrite); or an infectious agent transmitted through sexual contact?
When they had analysed the results, the team were able to rule out the first
of these theories, as there was no difference in metronidazole use between
cases and controls. The second theory received some support because the
cases were more likely to report using amyl nitrite, but there was no difference in butyl nitrite use. The third theory also had support because cases did report higher numbers of recent sexual partners. The authors were quite careful in their conclusions and felt they couldn’t absolutely rule out a toxic effect of amyl nitrite as a cause, although the lack of association with butyl nitrite (which ought to have the same toxic effects) was felt to weigh against this, as was the lack of Kaposi’s sarcoma in other people taking amyl nitrite
therapeutically for heart disease. They therefore tended to weigh in favour of an infectious cause.
⁵ This was when it started to be noticed in the US, at least. From retrospective evaluations and tests
of stored samples, the first cases of HIV found so far were in 1959. It is believed to have evolved first in
chimpanzees and may well have crossed the species barrier and begun affecting humans in Central Africa
from the 1920s.
⁶ Note, the nomenclature of the 1970s/1980s is being used here for consistency with the research report.
⁷ Marmor, M., Laubenstein, L., William, D. C., Friedman-Kien, A. E., Byrum, R. D., D’Onofrio, S., and
Dubin N. (1982). Risk factors for Kaposi’s sarcoma in homosexual men. The Lancet 8281: 1083–7.
Fig. 11.2 The HIV-1 virus alongside the very similar HTVL-1 (human T lymphotrophic
virus type-1; a retrovirus underlying a type of leukaemia). When the illness later called
AIDS started emerging at scale in the late 1970s, it wasn’t clear what was causing it and
there were several competing plausible theories. Just as John Snow had investigated
patterns of cholera occurrence over a century earlier to distinguish between air-borne
or water-borne spread, early case control studies compared the lifestyles of men with
and without AIDS illnesses to investigate whether these might be due to infections or
toxins.
An image of the HTLV-1 and HIV-1 retroviruses. From the Centers for Disease Control and Preven-
tion’s Public Health Image Library, ID #8241. Public domain; https://commons.wikimedia.org/wiki/
File:HTLV-1_and_HIV-1_EM_8241_lores.jpg
⁸ The virus went through quite a few names before being called Human Immunodeficiency Virus in
1986. Acquired Immune Deficiency Syndrome (AIDS) was first proposed as a name for the disease in
mid-1982.
A single case control study is rarely definitive, and this early investiga-
tion of AIDS was clearly only suggestive, and imprecise in its conclusions—it
narrowed down the field of interest but still couldn’t distinguish between an
infection and a chemical toxin as the cause. If the virus had taken longer to
identify, perhaps there would have been a lot more investigations of this sort, try-
ing to reach a consensus. To my mind, the study was impressively rapid in
its set-up and reporting—after all, the US Centers for Disease Control and Prevention (CDC)
only dates the AIDS pandemic from 1980. Also, given the unsatisfactory way
in which the news media reacted to the emergence of the condition,⁹ it is
to The Lancet’s credit that it rapidly published what was quite a small-scale
study.
Of course, it would be easy to assume that the discovery of the HIV retro-
virus was always going to solve the matter, although it’s worth bearing in
mind that the research remained primarily observational for some time. The
isolation of the virus allowed the HIV test to be developed for diagnosis, and
then there were further studies to investigate prognosis—tragically unmodi-
fiable for a decade until the introduction of drug therapies in 1992 following
successful clinical trials. I don’t know whether anyone considered or cited
Bradford Hill’s causal criteria (Chapter 7) at the time, but clearly there was a
self-evident strength and specificity to the observation (i.e. HIV-positive sta-
tus was strongly and specifically linked to risk of developing AIDS), just as in
Chapter 7, where we considered asbestos exposure as strongly and specif-
ically linked to the occurrence of mesothelioma. We’re in the ‘real world’
natural science domain here, so there are inevitably anomalies and incon-
sistencies. For example, one of Barré-Sinoussi’s many further contributions
to HIV research (other than identifying the virus in 1983) was to investigate
the small proportion of HIV-positive people who continue to maintain low
virus levels without any drug treatment. Nothing’s ever completely clear cut.
With this in mind, it seems that the main limitation on the experimental
approach is ethical. After all, there may always be some way around the fea-
sibility limitation. Having made this point, it’s worth acknowledging that not
everything is as ethical as it should be. Jenner’s experimental inoculation of
an eight-year-old boy was hardly an ethical intervention, even if Jenner meant
well—but that was the eighteenth century. More notorious, and certainly
inexcusable, was the US Tuskegee Syphilis Study, in which 400 impoverished
⁹ And politicians—consider the rapid actions taken by the Hawke government in Australia from 1983
and compare them to the US, where President Reagan didn’t even mention AIDS publicly until 1985.
Fig. 11.3 Blood being drawn from a participant in the Tuskegee Syphilis Study. This
photograph is believed to have been taken around 1953, by which time penicillin had
been a standard treatment for syphilis (but not administered to these participants) for
six years. From this point, the study (and ‘experimental’ lack of treatment) would con-
tinue for a further 19 years, and the wait for a presidential apology a further 25 years
after that. Immoral and unethical on so many levels, the study’s impacts on public
perceptions of health research are still being felt.
Subject blood draw, c. 1953. Photo taken of the Tuskegee Syphilis Study. From the holdings of the
US National Archives and Records Administration (National Archives Identifier 956104). Uploaded
by Taco325i. Public domain, via Wikimedia Commons. https://upload.wikimedia.org/wikipedia/
commons/e/ef/Tuskegee_study.jpg
1⁰ Peter Buxton, who had already raised formal concerns in 1966 and 1968 that were rebuffed by the
CDC. He wasn’t the first either—the ethics had been questioned since at least 1955.
took until 1997 for President Clinton to apologize on behalf of the govern-
ment. Understandably, trust in the medical research establishment from the
African-American community was profoundly damaged, and repercussions
persist to this day.
Beyond the clear-cut ethics around experimentation on humans are the
less-clear (or at least less-agreed-on) frameworks for experimentation on ani-
mals. I’m not going to cover this in detail as opinions run strong on the matter
and others have considered it at length and with much greater expertise.
However, it’s important to acknowledge the issue and that it is complicated—
or at least it will be complicated for most readers. The simple position at
one extreme is that no animal experiment is ethical, and this will be held
by some people. The simple position at the other extreme is that all ani-
mal experiments are ethical, although I’m not sure that anyone really holds
this (or at least I hope not), so I’ll ignore it. The majority view (assuming
‘I don’t know’ or ‘I don’t want to think about it’ aren’t allowed as options)
will be somewhere in between. The two primary determinants are the species
involved and the purpose of the experiment. Considering the species, I sus-
pect that most people have no particular qualms about the large numbers
of fruit flies that are bred for laboratory research,11 but that they are likely
to disagree with experiments involving chimpanzees. If there’s a line to be
drawn in between about acceptability or not (e.g. fish, frogs, mice), I suspect
most people would need to know the purpose of the research. If it involves
essential knowledge that could lead to a potentially life-saving treatment (for
humans or animals), I suspect the line of acceptability would be different than
for a much more trivial ‘knowledge for its own sake’ experiment. Likewise, a
different view might be taken on testing the safety of a new medicine com-
pared to a new cosmetic. What regulatory systems try to do is encapsulate
this sort of balancing act, requiring the use of animals for research to be jus-
tified in terms of the species involved and the purpose of the study.12 And I
guess the systems try their best to be sensitive to majority opinion13 and to
reflect this (and any shifts in opinion over time) in their judgements—which
is better than nothing. Of course, we could argue that ethics are absolute and
not a matter of following majority opinion. This is a reasonable stance, but
then there needs to be some process for deciding the absolute position in
the context of differing opinions, and that’s definitely beyond the scope of
this book.
11 Or, if they have qualms about fruit flies, they might not have ethical concerns about experiments on
laboratory-bred bacteria or viruses.
12 As well as dictating standards for the welfare of animals that are sanctioned for experiments.
13 Or at least the perceived majority opinion, I suppose; I’m not sure how much polling is involved.
12
Designing Research
R&D
The relationship between the design of research projects and the technologies
or facilities available cuts across most disciplines. For example, you might
be thinking about whether you can measure the disease risk factor you’re
interested in, or whether you have access to the right sort of laboratory
environment for your study of protein structures, or to a newly discovered
archived source required for your English thesis, or to the necessary statisti-
cal procedures and software for your economic modelling. Across the history
of research, these technologies tend to begin as simple measurement issues,
although there also seems to be something important about group support
and complementary expertise—the ‘academic environment’ in early centres
1 The secret impressively died with the Byzantine empire, partly because a chain of components was
required to make Greek fire, from its chemical composition to the equipment for its manufacture and
projection from ships. This knowledge was kept compartmentalized, and no one was allowed to know the
full chain.
2 And an assumption that this sudden, out-of-the-blue inspiration is the way scientific discoveries are
made. I can’t say I’ve ever had a Eureka moment myself, or that I’ve met any other researcher who’s had
one either.
3 Or perhaps ‘adoption’ might be a safer description for the European printing that helped accelerate its
Renaissance. Metal movable type is believed to have been in use since the thirteenth century in Korea and
China. It appears to have been independently invented in Mainz, Germany in 1439 by Johannes Gutenberg,
although it’s hard to know how much (if at all) this was influenced by the spread of the idea from Asia.
⁴ There are a few competing candidates for the inventor of the microscope and it’s a question that
remains undecided, although most were around 1620. Galileo was definitely in there early on.
Fig. 12.1 A rather fanciful depiction of the death of Archimedes, the celebrated
inventor, killed during the Roman invasion of Syracuse (Sicily) in around 212 BCE.
The equally fanciful story goes that he was deep in study when the soldier arrived
and that his last words were: ‘Do not disturb my circles’. It seems fairly unlikely
that a soldier would admit to killing him, let alone pass on his dying words, as
there had been specific orders to keep him alive. The Roman military machine with
Archimedes in its employment (as they’d hoped might be the case) would have been
seriously formidable, although I guess they managed well enough without him in
the end.
Death of Archimedes by Thomas Degeorge (1815). Public domain, via Wikimedia Commons. https://
upload.wikimedia.org/wikipedia/commons/f/f3/Death_of_Archimedes_%281815%29_by_Thomas_
Degeorge.png
Mathematics as a technology
Following the line of astronomy, it wasn’t only the technology of the tele-
scope that moved the field forward but also the development of mathematical
techniques, particularly by Kepler, who moved around various central Euro-
pean academic centres in the early seventeenth century, trying to earn a living
while repeatedly falling on the wrong side of the various religious shifts of the
time—too Lutheran for the Catholics, not Lutheran enough for the Luther-
ans. Amongst all of this, he managed to lay the mathematical foundations
used by Newton later that century. However, Kepler had help from John
Napier, who invented logarithms in 1614. A Scottish aristocrat, Napier was as
interested in alchemy, necromancy, and the Book of Revelation as he was in
mathematics.⁵ Logarithms make the job of long multiplication and long divi-
sion a great deal easier (hence the use of slide rules at schools in pre-calculator
days), so opened up a range of calculation possibilities. Using these, Kepler
was able to publish tables that predicted planetary movements considerably
better than Copernicus had done over 80 years earlier. Whether or not mathe-
maticians like to be thought of as equivalent to lens makers and engineers,
advances in applied research have frequently had to depend on developments
in that field. Much later on, when Einstein was putting together his 1915⁶
theory of general relativity, which unified the concepts of acceleration and
gravity and defined the relationship between spacetime and matter, he drew
on Gauss and Riemann’s work at the University of Göttingen in the previous
century (Chapter 6), where they had worked out the principles of geometry
on curved surfaces⁷ and in multiple dimensions. In turn, these were further
developed at Trinity College, Cambridge, and University College London by
William Clifford, who had begun presenting theories about curvatures in
space, foreshadowing general relativity.
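Returning briefly to Napier's logarithms, here is a worked illustration of why they mattered (the example is my addition, not the author's): logarithms turn multiplication and division into addition and subtraction,
\[
\log(ab) = \log a + \log b, \qquad \log\left(\tfrac{a}{b}\right) = \log a - \log b .
\]
So, to multiply 347 by 29, a seventeenth-century calculator could look up \(\log_{10} 347 \approx 2.5403\) and \(\log_{10} 29 \approx 1.4624\), add them to get \(4.0027\), and look up the antilogarithm \(10^{4.0027} \approx 10{,}060\), close to the true product of 10,063 and as accurate as the tables allowed.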
The advances in mathematics made during the sixteenth and seventeenth
centuries allowed better modelling of the movements of the planets, as
observed from Earth, with successive advances from Copernicus to Kepler
to Newton, and then to Pierre Simon Laplace later on in Paris who math-
ematically removed Newton’s requirement for God to correct the planetary
alignments every few hundred years (Chapter 5). Telescopes also improved,
for example, thanks to William Herschel, a musician and amateur astronomer
who built his own reflecting telescopes and, in 1781, was the first person to discover a new
planet (Uranus) since Babylonian times. However, this still left the challenge
of estimating interplanetary distances, because if you don’t know how big
something really is (e.g. Venus or the Sun), then you can’t work out how far
away it is—and if you don’t know how far away it is, you can’t estimate how big
it is. These discoveries depended on developments in the much more mun-
dane ‘technology’ of long-distance travel. To work out distances, you need to
triangulate—that is, to observe the distant object at the same time from more
than one place, plotting its relationship to ‘fixed’ points like the stars in order
to work out the angles in your triangle. However, the base of this hypothetical
triangle needs to be long enough, and you need to make sure you’re taking
⁵ Napier was thought by his neighbours to be a sorcerer and to have a black rooster as his familiar,
although in the reign of the witch-hunting King James VI, he doesn’t seem to have had any trouble—it’s
not as if he was an unmarried woman living with a cat. Perhaps it helped that he dedicated his prediction
of the Biblical Apocalypse to the king.
⁶ Or 1916; opinions vary . . .
⁷ Hence referred to as ‘non-Euclidean’ geometry because the principles developed by Euclid (in the
third century bce) applied to flat surfaces only.
For anything further away than the Moon, you need a much greater distance
between the points of observation. This, in turn, depended on there being
safe-enough navigation to take the same readings from points as far away
from each other as possible. The first serious attempt at this was carried out
in 1671, when Jean Richer travelled to French Guiana and observed the posi-
tion of Mars at the same time as his colleague Giovanni Cassini in Paris. Once
they’d estimated the Earth–Mars distance from this, they had the necessary
mathematical projections of planetary alignments to calculate all the other
distances.⁹ Further refinements of these estimations were made using the
‘transits of Venus’—the movement of Venus across the Sun’s disc that occurs
regularly on two occasions, eight years apart, every 243 years. Edmond Hal-
ley, the British astronomer (who predicted the next appearance of the comet
that bears his name) had suggested that these transits could be used for dis-
tance estimation. However, they needed to be observed from different parts
of the world and the 1761 transit sparked a rush of eighteenth-century voy-
ages, one of the first examples of international scientific collaboration (or
competition—the distinction is always a little blurred). Astronomers from
France, Britain, and Austria travelled all over the world to take measurements
that year and even more were taken at the next transit in 1769.1⁰ Transits
of Venus have continued to be exploited as an opportunity for researchers,
although the questions have moved on over the years. For example, the most
recent transit in 2012 was used to investigate changes in the Sun’s bright-
ness and known properties of Venus’ atmosphere, in the hope that these findings
⁸ Hipparchus estimated this to be within a range of 59 to 72 Earth radii. The actual average distance
from today’s measurements is 60.5 radii, comfortably within this range.
⁹ This was also quite accurate—for example, the distance between the Earth and the Sun from their
calculations was only about 7% different from the current accepted value.
1⁰ One of the 1769 transit observations was taken in Tahiti by James Cook. After he’d finished there, he
opened sealed orders from the British Admiralty telling him to explore the south Pacific for a fabled new
continent. He landed in Botany Bay the following year and claimed Australia as a British territory.
might help to glean information from the much less visible transits of planets
across other stars.
Estimating even greater distances requires an even longer base for triangu-
lation, or else a different form of measurement. Chapter 3 discussed Ptolemy’s
second-century model of space, with its seven planetary spheres followed by
the eighth sphere of the ‘fixed stars’. It was an outmoded model by 1500, but
only because the planetary relationships weren’t modelled well enough (and
because Uranus and Neptune hadn’t been discovered yet). For centuries
afterwards, and indeed even for most purposes today, you might as well view
the stars as sitting on the surface of a sphere somewhere out there because
there’s no relative movement that can be meaningfully observed in real time
from Earth. The idea of the fixed sphere had begun to be discarded in the
seventeenth century when Halley compared contemporary star charts with
those drawn up by the Ancient Greeks (including one by Hipparchus) and
realized that, while most of the observations were very consistent, some of
the stars had clearly moved significantly in the intervening 2000 years. This
indicated that stars were bodies in relative space, just as planets were, and
then as soon as astronomers started to estimate interplanetary distances, they
began realizing how vast everything was beyond that if it wasn’t showing any
observable relative movement. But how do you start estimating distances to
the stars? To begin with, this still had to rely on triangulation and the only
way to extend the base of the triangle was to observe the same stars from dif-
ferent ends of the Earth’s orbit around the Sun—that is, observation points
300 million km apart (the diameter of the Earth’s orbit) compared to a max-
imum on our planet of 12,700 km (the diameter of the Earth). However, this
in turn required extremely accurate star charts and had to wait until photog-
raphy had been invented, although the technique had been applied to single
stars from the early nineteenth century.
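To make the geometry explicit (the relation below is a standard textbook formula, not something given in this chapter), triangulation on these scales reduces to a small-angle calculation: the distance is roughly the baseline divided by the tiny angle through which the target appears to shift,
\[
d \;\approx\; \frac{b}{\theta} \quad (\theta \text{ in radians, and small}).
\]
Taking the Earth–Sun distance of about \(1.5 \times 10^{8}\) km as the baseline \(b\), a star showing an annual parallax of one second of arc (\(\theta \approx 4.85 \times 10^{-6}\) radians) sits at \(d \approx 3.1 \times 10^{13}\) km, roughly 3.26 light years; this is the definition of the parsec, and it shows why nothing useful could be measured until both the baseline and the angles could be pinned down very precisely.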
Fig. 12.2 The first known design of a pendulum clock by Galileo Galilei in 1641, although
this drawing was probably by his student, as Galileo was blind by that time and within
a year of his death. Inventions are mixed blessings—undoubtedly, the ability to measure
time was an important prerequisite for many fields of research; however, how many of
us would say that clocks and timetables make life any more pleasant, or haven’t yearned
for a simpler age? If Galileo hadn’t figured out the pendulum, someone else would have
done, which is another truth about research and human history—what we call ‘progress’
will happen regardless as a product of our curiosity, whether it leaves us better or worse
off at the end of it all.
Drawing of a pendulum clock designed by Galileo Galilei (1659). Vincenzo Viviani. Public domain, via
Wikimedia Commons. https://upload.wikimedia.org/wikipedia/commons/8/8c/Galileo_Pendulum_
Clock.jpg
amplitude of the swing or the weight at the end. Of course, there was no stan-
dard for Galileo to compare it against, so he had to use his pulse. He designed
a pendulum clock but never completed it (Figure 12.2), and the first one was
built in 1656 by the great Dutch scientist and inventor Christiaan Huygens,
some 14 years after Galileo’s death.11 For around 250 years, timekeeping
relied on pendulum clocks and their mechanical successors until the development
of quartz clocks in the early twentieth century and atomic clocks a few decades later.
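For anyone wanting the quantitative version of Galileo's observation (my addition, using the modern small-swing approximation rather than anything he wrote down), the period of a simple pendulum depends only on its length and on gravity, not on the amplitude of the swing or the mass of the bob:
\[
T \;\approx\; 2\pi \sqrt{\frac{L}{g}} .
\]
A pendulum roughly 0.994 m long therefore takes almost exactly two seconds per complete swing, one second each way, which is why the 'seconds pendulum' became such a convenient basis for clockmaking.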
11 The spirit of Archimedes was very much alive in people like Huygens and Galileo, and, of course, in
Leonardo da Vinci.
The progression of astronomy and astrophysics is therefore not only the result
of researchers generating and testing hypotheses but is just as much a product
of the technologies available (lenses, navigation, clocks, photography, spec-
troscopy, etc.) as well as cross-application of knowledge from other fields,
mathematics in particular and theoretical physics later on. Similar dependen-
cies underlie the history and progression of other disciplines, of course; in
particular, there are often periods of rapid progress when a technological
development ‘unlocks’ previously closed-off areas of enquiry. Earlier chapters
mentioned the invention and development of vacuum containers as a kick-
start for chemistry, and Volta’s 1799 invention of the battery clearly helped
the study of electricity over the nineteenth century. Similarly, it could be said
that the field of human biology was advanced at an early stage in the mod-
ern era by the legalization of dissection in Western Europe, improving on
Greek models of ‘gross anatomy’ (i.e. what’s visible to the naked eye). Shortly
afterwards, the invention of the microscope opened up ‘microanatomy’. For
example, William Harvey, an English physician,12 had carried out a series of
experiments involving tourniquets in the early seventeenth century, enabling
him to conclude that veins and arteries were somehow connected, and that
blood circulated from one to the other, although he was unable to take
12 Appointed to King James I, so a contemporary of Napier, but more in Bacon’s mould of scientific
enquiry. He was sceptical about witchcraft allegations at the time and is believed to have saved the lives of
at least four women by showing the charges against them to be nonsense.
the theory further. Then microscopes were invented and Marcello Malpighi,
working at Bologna later in the century, was able to identify lung capillaries in
frogs, completing the picture. This was followed soon afterwards by experi-
ments at Oxford University that showed purple-coloured venous blood could
be turned into red-coloured arterial blood by shaking it up with air, allow-
ing conclusions to be drawn about what was going on in the lungs, centuries
before oxygenation and haemoglobin were understood.
Fig. 12.3 A diagram by Jean-François Champollion, one of the key figures involved in
deciphering Egyptian hieroglyphics using the Rosetta Stone. Champollion had consid-
erable struggles in his life, caught up in the fluid politics of post-Napoleonic France and
having to deal with local academic rivalries, not to mention tense relationships with
English scholars engaged in the same race to decipherment. He was ground down by
it in the end and died of a stroke at the early age of 41, although not before he had
established a considerable reputation in the field.
Jean-François Champollion’s table of hieroglyphic and demotic phonetic signs (1822). From Cham-
pollion, J-F., Lettre à Monsieur Dacier. British Museum. Public domain, via Wikimedia Commons.
categories.13 This records a royal decree issued in 196 bce by the priests of the
Egyptian king Ptolemy V covering various matters of taxation, some recent
military successes, and instructions about worship. Because it contains essen-
tially identical texts in Ancient Greek, Egyptian hieroglyphics, and Egyptian
Demotic (the late Egyptian language in script rather than hieroglyphics),
its discovery was a major step forward in deciphering the hieroglyphics
(Figure 12.3)—the first time this writing had been understood since the fall of
the Roman Empire. In turn, this opened up a wealth of new knowledge from
all the previously indecipherable scripts at the time, and the nineteenth and
early twentieth centuries saw rapid progress in translations of other ancient
languages.
13 The ‘discovery’ consisted of it being noticed by French soldiers who were strengthening defences
during Napoleon’s campaign in Egypt at the time. Luckily, they were accompanied by Egyptologists, or it
might have been crushed into the fortifications. The stone was later appropriated by British forces but was
translated by both French and British academics. It remains in London and hasn’t yet been returned to
Egypt.
The invention of the radio has its origins in electromagnetism. Back in the
early 1820s, Hans Christian Ørsted1⁴ at the University of Copenhagen discov-
ers that electrical current generates a magnetic field, a principle that Michael
Faraday at the Royal Institution in London1⁵ uses to create an electric motor.
Electrical current can now be turned on and off, making a needle move miles
away at the other end of the cable, resulting in the invention of the telegraph
in the 1830s. This then leads to the code systems developed by Samuel Morse
and others, clearly with huge commercial potential as European settlers are
spreading across the US and need to keep in touch with each other. In the
meantime, the idea of further-reaching electromagnetic forces is mulled over
and pulled together into a set of theories in the 1870s by James Clerk Maxwell,
working at the time at the Cavendish Laboratory in Cambridge.1⁶ This is one
of those big unification moments in physics, up there with Newton’s work
beforehand and Einstein’s afterwards, but without any practical application
1⁴ Coincidentally a friend of his near-namesake, Hans Christian Andersen, the author of all those bleak
children’s stories.
1⁵ Working for Sir Humphry Davy, the nitrous oxide enthusiast from Chapter 10.
1⁶ A distant legacy from the family fortune that supported Henry Cavendish from Chapter 10.
in mind. It’s also on the complex side and not understood by many at the
time. Luckily one of those who does follow it is Heinrich Rudolf Hertz in
Berlin, who is looking for something to do for his doctorate and ends up
building sets of apparatus in the late 1880s that can both transmit and receive
electromagnetic waves—originally called Hertzian waves but renamed radio
waves around 1912. In the meantime, Kentucky professor of music David
Edward Hughes has invented what becomes the standard means of printing
telegraph messages,1⁷ uses the profits to move to London, and invents the
first microphone in 1878.1⁸ Detection of Hertz’s waves is taken further in
the 1890s by Édouard Branly in Paris and Oliver Lodge in Liverpool, with
Lodge (crucially) also developing ‘syntonic tuning’ that allows for simul-
taneous transmission without interference.1⁹ Around this time, scientists
start to demonstrate that signals can be transmitted at a distance, although
apparently still without much thought of commercial potential, and Lodge
is displaying transmission in London, although more as an experiment for
the general public. Jagadish Chandra Bose (Figure 12.4), working at the Uni-
versity of Calcutta, is also transmitting and receiving radio waves and shows
that these can pass through several walls, using a device like a more modern
antenna. Meanwhile, in New York the great inventor and electrical engineer
Nikola Tesla is interested in using radio waves as a means of remote control,
and Alexander Stepanovich Popov in St. Petersburg is using a radio signal
receiver to detect lightning strikes. Sooner or later, someone has to take the
initiative to commercialize ‘wireless telegraphy’ and Guglielmo Marconi turns
out to be the man of the moment, travelling from Bologna to London in order
to file a British patent for this in 1896. He establishes the Marconi Company
Ltd. and gets going on a series of demonstrations of transmission across ever
increasing distances—over the English Channel in 1899, transatlantic from
around 1901. Further developments come from Karl Ferdinand Braun at the
University of Strasbourg, and he and Marconi share the 1909 Nobel Prize
in Physics.2⁰
If nothing else, this example demonstrates the internationalism involved
in these sorts of developments, with accrued knowledge bouncing around
1⁷ The very same paper tape system beloved of Westerns—where some poor operator in an isolated
telegraph station gets the warning about outlaws on their way just after their train has drawn in and there’s
already a gun pointing at his head.
1⁸ Hughes might also have produced radio waves before Hertz, although seemed happy for Hertz to
take the credit.
1⁹ Lodge viewed electromagnetic radiation as evidence of wider phenomena and was a firm believer in
spiritualism, telepathy, and clairvoyance, which raised scientific eyebrows at the time.
2⁰ Marconi’s invention was credited with saving the lives of those who survived the sinking of the RMS
Titanic in 1912, as it was used to signal for rescue. Marconi was probably more thankful that he wasn’t one
of its passengers in the first place (he took an alternative crossing only three days earlier).
Fig. 12.4 Jagadish Chandra Bose, a scientist with broad interests across biology and
physics, working at the University of Calcutta. Like many others involved in the devel-
opment of the radio, he was uninterested in the commercial element, even though he met
Marconi in 1896 and was well aware of the potential. Instead, he suggested that his
research in the field should be used freely by others, making him an early advocate of open science, at
least in his own domain (although a century later, university employers were taking a
different view of intellectual property). Bose’s work underpins much of the technology
deployed in microwaves and Wi-Fi, and the Bose Institute in Kolkata is one of India’s old-
est research establishments. Having succeeded against the odds, his inaugural address
there in 1917 emphatically declared: ‘It is not for man to complain of circumstances, but
bravely to accept, to confront and to dominate them; and we belong to that race which
has accomplished great things with simple means.’
Jagadish Chandra Bose at the Royal Institution, London, 1897. The Birth Centenary Commit-
tee, printed by P. C. Ray. Public domain, via Wikimedia Commons. https://upload.wikimedia.org/
wikipedia/commons/5/56/J.C.Bose.JPG
measurements or materials you need, or the scope for your research to gen-
erate intellectual property or even patentable products. As mentioned, for
humanities research (but applicable to all fields really), if you want your study
to be novel, you’re either going to have to bring in a new technology or disci-
pline or discovery, or else you’re going to have to make do with what’s already
there and think up a novel way of using it.
more turbulent periods, advances have depended on what has flowed from
one collapsing empire to another emerging one—for example, theories of
mathematics and life science emerging in early Middle Eastern civilizations,
developed in Ancient Greece and Rome, surviving in translation for further
development in Middle Eastern Islamic empires (with or without influences
from South Asia and China along the way), and finally, re-translated or
rediscovered to feed the Western European Renaissance and Enlightenment.
Other examples of more direct transmission might include the East-to-
West flows of knowledge in relation to printing and, possibly, gunpowder.
In the ages before printing, knowledge was fragile and dependent on the sur-
vival of one or two unique hand-written manuscripts. The burning of the
fabled Library of Alexandria (Figure 13.1) is likely to be a myth, covering up
a much more mundane, slow decline through lack of support. However, there
were probably any number of more minor losses, resulting in the rather stop-
start pattern of progress early on. Once printing got going, there was a safety
net in the much larger numbers of copies potentially available, assuming a
decently sized print run to begin with and the idea catching on. This meant
that if one country decided to embark on some book-burning and censor-
ship, there was a reasonable chance that the knowledge had been passed on
for adoption and storage elsewhere. And then, in turn, more modern library
systems emerged to house obscure works until they were ready to be redis-
covered by a PhD student looking for something to do. Whether the digital
age has helped this process is perhaps still up for debate. The vaults of infor-
mation are clearly larger by many orders of magnitude, although that also makes for
a higher chance of work getting lost in the midst of everything. Hence, the
importance of publication and research dissemination, hence this chapter,
and the next . . .
If research isn’t communicated, it’s hard to see it as anything other than
a waste of time, so writing and publication have always been central to
academic life. However, published work has taken many and varied forms
over the years. For example, the three volumes of Newton’s ground-breaking
Philosophiae Naturalis Principia Mathematica would be anathema to a
physicist or mathematician today (even if they could read the original Latin)
and I don’t suppose many biologists would be thinking about producing
original artwork to the level and quality contained in Hooke’s Micrographia
or Maria Sibylla Merian’s Metamorphosis Insectorum Surinamensium. The
expected form of published work also differs widely between research spe-
cialties today—from the terse prose of a multi-author scientific report to the
single-author book that might be used to communicate a particularly impor-
tant piece of research in history or literary studies. The academic publishing
Fig. 13.1 A nineteenth-century illustration of what the Great Library of Alexandria might
have looked like in its heyday. This library was created by some quite assertive purchas-
ing of manuscripts (papyrus scrolls), particularly from markets in Greece, sponsored by
the Egyptian king Ptolemy II and his successors. The scholars followed the manuscripts
and hence a centre of academic excellence was formed—not unlike a government nowa-
days investing in cutting-edge laboratory facilities. However, in Alexandria the focus
was on the ability to communicate and share findings, the limiting factor for research
progression at the time (i.e. before this was democratized through printing and, more
recently, digital media).
The Great Library of Alexandria by O. Von Corven (nineteenth century). Public domain, CC BY-
SA 4.0 <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons. https://upload.
wikimedia.org/wikipedia/commons/6/64/Ancientlibraryalex.jpg
industry itself has changed beyond recognition over the years, particularly
so in the more recent digital era, and it would require a braver author than
I to predict where it is going, even in the next 10–20 years. The nature
of academic communication, therefore, is (and always has been) variable,
but I think it’s worth taking some time to think about writing for publica-
tion as it’s so much a part of life for all of us who are already carrying out
research, and an unavoidable task ahead of you if you’re just starting. Also, I
do feel there are some fairly widely, if not universally, applicable challenges to
overcome.
Getting started
Perhaps it’s worth starting off with the writing process itself. Whatever your
background, it does take some getting used to. Where they can get away
with it (i.e. particularly in the sciences), education systems have tended to
steer away from long-form writing over the last few decades, so increas-
ing numbers of school-leavers are faced with undergraduate dissertations at
university without much preparation. Alternatively, they may have attended
a university course that has given up on this with undergraduates, leav-
ing them to face the same thing at postgraduate level or, ultimately, in
a doctoral programme. I don’t blame the schools and universities—
lengthy prose is time consuming to mark and it’s questionable how well it
discriminates between good and mediocre candidates; it’s probably also more
difficult to mark objectively and defensibly if students are feeling litigious
about their results. However, the fact remains that progressing in a research
career involves writing relatively lengthy prose sooner or later. Therefore, it’s
worth accepting this and figuring out what to do if you think it might be a
challenge.
Having read or listened to a few interviews with novelists over the years,
I’m not convinced there’s much of a difference between the processes of aca-
demic and creative writing. Writer’s block is certainly a common enough
obstacle in academia, and I think the solutions are much the same as those
for a writer of fiction. In particular, the best (maybe only) way to start beat-
ing it is simply by forcing yourself to write something, rather than making
yourself yet another cup of coffee, attending to the shelf that needs dusting,
answering your texts and emails, and all the other myriad distractions. It’s
fine to write a poor-quality first draft—you don’t need to show it to any-
one else at this stage and it’s likely that your thoughts will be a little more
in order by the end of it than they were at the beginning. You can then
start hacking it into shape at the next draft. Unlike a novel, academic writ-
ing tends to split naturally into a series of points to be made, and it’s helpful
if paragraphs can be arranged and ordered so that the reader is taken on
a logical journey through your argument or description. You might find it
helpful to have a subheading (and sub-subheading, etc.) system for all the
paragraphs to help with that organization, and you can try to get over any
writer’s block by dividing up the larger work into bite-sized chunks (as well
as making life easier for your poor reader). If the final product isn’t meant to
have subheadings (e.g. if it isn’t the house style for your target journal) then
you can generally remove them later on without the paragraphs losing their
flow.
Before you even start, however, you do need to have some idea what you’re
trying to communicate.1 This might involve you going back to the begin-
ning of this book and working out what your research is really about, and
where it fits in the scheme of things. However, let’s save time and assume
that you’ve remembered enough about the principles of research and its
flow through history to have an inkling of an idea. Perhaps start out like
Socrates (Chapter 3) and think through what it is that you and the world don’t
know—what was the reason for the piece of research you want to describe,
what question was it trying to answer, or what was the blank space on the
map that needed filling in? Next (and see Chapter 6 if this is getting hazy),
decide whether what you’re doing is marshalling arguments (or equations,
or evidence from historical sources, etc.) in order to take the reader on a
new path you’ve drawn from A to B—the rationalist approach to knowledge
generation,2 or if you’re describing a study that’s taken observations or an
experiment putting a theory to the test—the empiricist’s way. Perhaps it’s
some mixture of the two (in which case, do try to disentangle them—what is
the empirical evidence and what is the rationalist argument?). If it’s empirical
research, is it simply a piece of description, or is it testing a theory? Either is
fine, but be careful to keep to what was decided in advance of the study; in
particular, don’t try to take unexpected findings from a piece of exploratory
observation and pretend it was a theory that you’d always intended to put to
the test—any reader worth having will be able to see through that, however
hard you try. And it’s fine to describe and explore. What you can conclude
may be limited without the experiment to test your findings, but that can
always come later.
Assuming you have an idea of what you need to communicate and that you
have overcome your writer’s block, you now have to get down to the commu-
nication itself. This is when matters start to become discipline-specific, but
a good guiding principle is to think of your reader as objectively as possible,
having pity on anyone who has to wade through academic texts, and making
1 And if you are getting writer’s block, at least consider whether it’s because you’ve got a mess of findings
you don’t know how to communicate because you’re not sure what you were doing in the first place. That’s
fine, and by no means unheard of, but you might need to get some senior advice on sorting it out.
2 Descartes and ‘I think therefore I am’ as a starting point in his argument for God—that sort of thing.
Or the succession of mathematical principles that Newton used to build towards his laws of motion.
sure that you’re writing with them in mind. There’s no harm in imagining
a general rather than expert reader—one of your colleagues, for example,
or someone more junior—just to make sure that you’re putting your ideas
together clearly and compensating for the fact that your prose is unlikely to
be particularly gripping. That’s just basic politeness. However, you also need
to have in mind the reader who’s going to be judging your work for publi-
cation, in relation to everything else that’s competing for space. So, do get
to know what they’re likely to expect—if you’re submitting to an academic
journal, read their instructions for authors and have a good look at what they
publish. Try to find a few pieces of work similar to your own and use them
as a template to work out the approximate length, style, use of subheadings,
and other formatting issues. If there isn’t anything remotely similar to your
planned submission, are you really sure you’re sending your work to the right
place? Ultimately, reviewers and editors will make their judgement on the
quality of the research you’re describing and whether they believe it’s right
for the publication, but it does help if they feel they’re in comfortable, famil-
iar territory when they read what you’ve written—for example, the flow of
the text, the relative length of any subsections, the layout of any tables or
figures.
Different academic publishers have different processes for evaluating sub-
missions, but most involve some sort of peer-review process—occasionally
with the reviewers identified, much more often anonymous. However, most
academic journals are seeing the volume of their submissions outpacing the
pool of reviewers they can draw on, so they have to be quite choosy about
what to send for review (particularly as most reviewing remains unpaid and
essentially a favour). This means that a much more cursory editor’s review is
increasingly the first hurdle to overcome, the judgement here being whether
the submission will definitely interest the journal if the reviewers like it. I sus-
pect that editors vary widely in the depth to which they consider submissions,
but they’re likely to focus heavily on your abstract or other similar summary,
and on any cover letter you’ve been invited to include as part of the package.
With this in mind, you do need to get good at writing short, pithy prose that
summarizes your longer piece of work and presents it in an easily digestible
format. I suggest getting senior advice (and certainly a read-through, if you
can) when pitching for publication, as the style may need tailoring. You do
need to express clearly why the publisher should consider your approach.
Keep in mind, though, that most academic editors and reviewers don’t like
a ‘hard sell’ and will be suspicious if you seem to be exaggerating—it just
doesn’t feel right for research. Also, if your submission’s summary is over-
hyped at the first stage, any reviewers reading the piece in depth later on
will pick this up quickly enough and it’s unlikely to put them in the best of
moods. Quiet, calm assertion is probably the safest style—being clear about
your objectives, the findings you’re communicating, and why you think they
matter—and then leaving it for others to judge.
Disciplines vary as to the balance between the other two sections. In the
typical scientific report, the ‘results’ section lays out what was actually found,
often quite briefly and relying a lot on tables and/or figures. Then, a ‘discus-
sion’ section considers what can be concluded from this. In other fields, for
example in social sciences, the two may be more conflated and the findings
discussed as they are presented, with no need for anything more than a brief
concluding round-up at the end. Similarly, a report of humanities research
or a mathematical paper are unlikely to need to separate their findings from
their discussion. However, there are still advantages in many fields in keep-
ing them distinct, if only in your thinking. You need to clarify what you’re
communicating (your observations), and separate it from the deeper truth
that you’re inferring, using the overlay of your own reasoning or argument.
Inference
3 Although sometimes a good way to infuriate biological scientists working in health research, or at
least the ones who actually care what anyone else thinks.
• Bias: Could your observations have arisen from the way in which you found or allocated your sample(s)? Could they have arisen from the way in which you collected your information?
• Might there be other factors (e.g., differences between comparison groups) that provide an alternative explanation? Can you ‘adjust’ for these using statistical modelling?
• Causality: How strongly do your findings support the causal relationship you’re interested in?
Fig. 13.2 The process of inference, which I think can be made to apply to most empir-
ical research, even if the language comes from epidemiology—after all, it’s common
enough to have source material (whether human participants, cell samples, or a histor-
ical archive), to take measurements, and to consider whether your findings have just
arisen by chance, and whether they represent a cause–effect relationship. The point
with inference is that a good research report will be open and self-critical about it when
you discuss your findings, sharing your objectivity with your reader. However, it’s also
likely to be key to the way you design your studies in the first place (i.e. to forestall any
concerns that a reader might have further down the line).
Author’s own work.
As a starting point, the reader has to accept that the actual reported results
are genuine—for example, that the average blood pressures in Groups 1 and
2 really were what the authors have recorded, or that the quotations under-
pinning a literary thesis are correctly provided. If you’re thinking about
your own findings then there’s plenty you can do to make certain they’re
correct: double-check your calculations, make sure you’ve transcribed your
measurements accurately, go back to your source material and re-read it—
whatever applies to the way you’ve collected your information. This is nothing
more than the principle of careful observation—the attention to detail we
considered in Chapter 4 as a characteristic of any good research project.
If you’re looking at someone else’s findings, then you have to accept them
because what else are you going to do? Of course, they could be incorrect
because the other person or team has made an inadvertent mistake. Alterna-
tively, the other person or team could have made their findings up—research
fraud does happen and there are flare-ups of worry from time to time about
what to do about it. I don’t think there’s an easy answer to either of these
scenarios. An advantage of academic consensus (Chapter 8) is that it tends
to be cautious about findings that come out of nowhere and sound too good
to be true—or at least this ought to be the case if the research discipline is
a healthy one. And the academic community does tend to notice if a group
or individual is consistently putting out over-hyped findings that no one else
can reproduce. The difficulty is that it’s near-impossible to prove most cases of
fraud or sloppy practice, so when they are suspected, the most likely response
will be for findings to be quietly ignored, rather than called out as false.⁴ Pub-
lishers naturally have more to lose, and they periodically anguish over this
matter (nobody likes to be left holding responsibility for falsified informa-
tion), although I have yet to hear of any wholly watertight fraud detection
system. It may become a rarer issue as research communities grow in size
and fewer important studies are carried out by individuals working in isola-
tion. Fraud, like high-level conspiracy, is harder to get away with when there
are a lot of people involved—sooner or later there’ll be a whistle-blower, or at
least rumours will get around. The commercial sector is more of a worry here
because of the tighter controls that can be imposed on whistle-blowing by
current and previous staff, as well as the more obvious incentives for skewed
or even falsified information. However, any institution, public or private,
⁴ Unless you’re lucky enough to witness a proper academic shouting match—insecure, irritable profes-
sors with limited social skills can really go for it when they want to.
protein structure changes, GDP). Also, they don’t only apply to observa-
tional studies. In a randomized controlled trial of a drug treatment, group
differences might be observed if people know (or guess) whether they’re
receiving the new treatment rather than the placebo and if they’re more likely
to report improvement as a result. Or if you’re comparing two cell lines under
different experimental conditions, and if the results involve some level of sub-
jective evaluation, you might unconsciously skew the findings one way or
another if you know which cell line you’re looking at.
The point about bias isn’t that the observations themselves are false. Rather,
it’s that the results in the actual study might not be reflected ‘out there’ in the
situation or population that the study is trying to represent. For example, a
difference we’ve observed between people with and without bowel cancer in
our sample might not really be present if we were to investigate everyone with
and without that condition. Instead, the observations might have come about
because of the way in which the comparison group was selected, or the way
in which the measurements were applied in the study.
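To make the idea concrete, here is a minimal simulation sketch, entirely my own illustration in Python (the book prescribes no software). The case–control status has no real effect in the simulated population, but recruiting the comparison group in a way that depends on age, which does affect the outcome, manufactures an apparent difference:

import numpy as np

rng = np.random.default_rng(42)

# Simulated population: the outcome depends on age, and on nothing else.
n = 100_000
age = rng.normal(60, 10, n)
outcome = 0.5 * age + rng.normal(0, 5, n)

# Unbiased design: cases and controls both sampled at random from the population.
cases = rng.choice(n, 500, replace=False)
controls = rng.choice(n, 500, replace=False)
print(outcome[cases].mean() - outcome[controls].mean())        # close to zero

# Biased design: controls recruited mainly from a younger clinic population.
younger = np.where(age < 55)[0]
biased_controls = rng.choice(younger, 500, replace=False)
print(outcome[cases].mean() - outcome[biased_controls].mean())  # clearly positive: selection bias

Nothing about the measurements is ‘false’ in the second comparison; the sample simply no longer represents the population it is meant to stand for.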
An additional, but simpler, issue is that even a perfectly designed study
doesn’t include everyone or everything in its comparison groups but takes
what it hopes are representative samples. There will therefore always be
some difference observed by chance between selected groups, even if there’s
no actual underlying difference. For studies using numeric (quantitative)
outcomes, helpfully the expected random distributions of values follow
mathematically predictable patterns (e.g. the bell-shaped ‘normal distribu-
tion’). Statistical techniques can therefore be used to quantify this role of
chance—not involving particularly complicated equations, although most
often carried out by computer software nowadays, rather than hand cal-
culation. If your observations aren’t numeric, you may have to use other
approaches. For example, in ‘qualitative’ research, where the output con-
sists of narrative accounts (e.g. transcribed interviews), findings tend to start
repeating themselves after a while and new themes (or at least important
ones) become less and less likely to emerge, referred to as ‘thematic satu-
ration’. So, a decision is usually made to stop at this point, on the assumption
that the findings now adequately reflect the population from which they’ve
been drawn.
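As an equally illustrative sketch of quantifying the role of chance, again in Python with scipy (my choice of tools, not the author's): two samples are drawn from the same distribution, so any difference between their means is sampling noise, and a standard two-sample t-test reports how easily that difference could have arisen by chance.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Two samples from the *same* normal distribution: any difference is chance alone.
group_1 = rng.normal(loc=120, scale=15, size=50)   # e.g. blood pressures in Group 1
group_2 = rng.normal(loc=120, scale=15, size=50)   # e.g. blood pressures in Group 2

t_stat, p_value = stats.ttest_ind(group_1, group_2)
print(f"difference in means = {group_1.mean() - group_2.mean():.2f}")
print(f"p-value = {p_value:.3f}")   # a large p-value is consistent with pure chance

A small p-value would not prove a real difference, and a large one would not prove its absence; it simply quantifies how surprising the observed gap would be if the groups really were drawn from the same population.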
Step 3—causation?
The process of inference is at the heart of good research and so it’s helpful if
you can show evidence of it in your writing—either in the gradual description
of your findings, considering their meaning as you go, or else in a dedicated
discussion section kept separate from the results—whatever the tradition is
for your discipline. However, do try to balance the discussion. One cheap and
easy route is not to think about possible limitations at all and to accept all
your findings at face value.⁵ The other extreme is to list out everything that’s
wrong and leave the reader wondering what’s left to take away. If you do feel
that your study has a fundamental flaw that prevents you from drawing any
conclusion, then it may be best not to attempt to publish—after all, there has
⁶ Rather than the old system where academic journals made their profits from subscribing readers
and/or academic libraries.
⁷ And even the occasional rejection at the end of that long period with ‘failure to find a reviewer’ as the
reason given.
⁸ Which does, unfortunately, happen from time to time, although how these people live with them-
selves, I’ve no idea.
I suspect most researchers end up, out of habit, finding themselves most com-
fortable with the word length and structure of the standard short report for
their field—the typical output of a single study submitted to an academic
journal, for example. However, there are also longer formats used to publish
research findings. Generally, the trick to ‘writing long’ is finding a workable
superstructure to build around; for those of us used to stringing together
paragraphs in short scientific reports of no more than 5000 words or so,
assembling larger volumes of text can feel like embarking on a trackless open
ocean in a small rowing boat. So, we need something to keep the descriptions
or arguments coherent, as do our readers.
I discuss wider communication methods in Chapter 14; however, I think
it’s worth considering the doctoral thesis at this point, as it’s a publication
of sorts and it’s where most researchers come up against proper long-form
writing for the first time. By the end of their thesis, many students will feel
no inclination to revisit the experience, but it’s probably the best training for
lengthier reporting when this does come around. The tradition of requiring
a book’s worth of prose as a rite of passage in academia is a strange one, par-
ticularly as a thesis is often not read by anyone apart from the student, their
supervisors (you would hope), and a couple of examiners. However, it is what
it is. Doctorates and their examinations vary widely between countries. The
British model, for example, is typically a weighty novel-length thesis, which
the candidate defends very privately, usually in a small office with two or three
examiners. European models (those I’ve seen at least) tend towards a shorter
(novella-length) thesis and a public defence in a lecture theatre.⁹ Quite a few
European theses I’ve come across are also produced in a print run and posted
around to other academics for their enjoyment and edification; British the-
ses, when printed versions were still required,1⁰ rarely ran to more than three
or four weighty final copies—for the university library, the student, and the
supervisors, perhaps. However, I assume that at least some, perhaps particu-
larly in humanities fields, end up turned around and reformatted for potential
commercial publication.
I suspect that writing a doctoral thesis comes as most of a shock to science
researchers who never went into academia with the intention of composing
anything that lengthy. Agreeing a chapter plan is certainly important—the
superstructure I mentioned—although you shouldn’t feel that this has to be
set in stone; also, try to make it fit your material rather than feel that it’s some
immutable external imposition. Because university regulations have to cover
doctorates from all disciplines, they tend to be quite vague on the structure
of the thesis and can be interpreted flexibly—it’s your readers (i.e. examiners)
and their expectations that count most of all, and your supervisor should have
a good feel for what’s acceptable.
Having assembled your chapter plan, do also bear in mind that you don’t
have to start writing at the beginning. Completing the middle sections—the
methods and results chapters or equivalent—is often an easier starting point
because they’ll be familiar, and you may have even managed to publish some
of the material already. Once you’ve drafted the middle sections, the intro-
ductory chapters tend to be easier to write because you’ve got something
ahead of you to aim at. I don’t know how widely applicable it is, but my own
⁹ The presence of friends and relatives being rather intimidating for examiners.
1⁰ The requirement for hard-copy theses may turn out to be a permanent casualty of the COVID-19
pandemic, now that everything has had to shift to electronic circulation. If so, I don’t think they’ll be very
much missed—they look impressive on an office shelf, but the printing and binding processes do seem to
create unnecessary stress for students at the already-tense hand-in point.
Starting broad, steadily focusing in on your study: (1) general introduction, setting the scene; (2) more specific review of relevant previous research; (3) gaps in previous research, leading into your own objectives.
Specific to your study (what you did, what you found, and what you’re concluding): (4) your methods; (5) your results; (6) inference, strengths and limitations of your study and its conclusions.
Starting with your own findings, then broadening out: (7) how your findings fit with relevant previous research; (8) wider implications and areas for further research.
Fig. 13.3 A structure that you might find helpful for a PhD thesis. The challenge that
a lot of students have with this sort of long document is not so much the part in the
middle (describing what you did and what you found); hence, it’s sometimes best to
start writing the middle chapters first. The difficulty is in guiding a lengthy review of
previous research so that it leads naturally towards the specific objectives for your own
study. If each section of the introduction is not more relevant to your study than the one
before it, then there might be some rearrangement needed—it will certainly be easier for
an examiner to read if you structure it that way. The opposite is true (i.e. starting specific
and becoming less so) when you discuss your findings in the final chapter(s), although
this tends to be easier as you’re on the home straight by then.
Author’s own work.
Accidental recognition
English rather than Latin, and so were better-read in their country.1 Thomas
Digges grew up in the household of John Dee, the philosopher and magician
at the court of Elizabeth I, and he rose to political power and influence as a
result, transmitting Copernican ideas through his own strangely titled book,
Prognostication Everlasting, published in 1576. As well as this, the ideas found
their way around academic sites in mainland Europe, enough to get to Kepler
and Galileo, who took them on, developed them further, and attracted most
of the hassle from the religious authorities for their pains. So, having the right
readers did help matters back in the sixteenth century as much as it does in
our own age of influencers.
Some researchers can gain a reputation for their work, despite being reluc-
tant to publish, although it does rather stand in the way. Henry Cavendish
(Chapter 10) was a very clever but painfully shy eighteenth-century scientist
who also happened to be phenomenally wealthy, and so was able to build and
staff his own laboratories and support his experiments without having to seek
funding. He was known to his contemporaries for his chemistry, which he did
publish, but he also carried out pioneering work in electromagnetism, which
went unpromoted and unnoticed, ending up re-discovered and attributed to
scientists from later generations. Cavendish was notoriously cautious about
his findings and their dissemination, so was fortunate to gain the influence
and recognition that he did, as there were plenty of other faster workers in
the field. Darwin was another scientist rather prone to sitting nervously on
his theories (evolution by natural selection famously). Caution, scepticism,
and attention to detail are important qualities in researchers; however, an
excess can paralyse productivity and impede output and influence.
The issue here is more about personal attribution than the science
itself. The understanding of chemistry developed by Cavendish would have
emerged sooner or later—it would just have others’ names attached, as
happened with the electromagnetic principles he discovered. And Alfred
Wallace, or someone else, would have probably ended up producing a service-
able theory of natural selection if Darwin hadn’t been spooked into action
when he realized how close Wallace had come, and if he hadn’t attracted
acolytes like Thomas Henry Huxley (‘Darwin’s bulldog’), who assertively
publicized and advocated his work after its publication.2 As we’ve come across
repeatedly, knowledge seems to get there in the end, and at least some fame
1 England at the time was under regimes where religious conflict tended to focus on burning people,
rather than science books. Good for science: less good for people.
2 Huxley’s publicity machine certainly ensured that Darwin was remembered rather than Wallace—
although to be fair, Wallace viewed Darwin’s work as better than his own, and Darwin and Huxley did
help to get Wallace a royal pension when he had fallen on hard times later in life. Wallace was also an
ardent spiritualist, which didn't help his scientific standing; neither did his campaign against the smallpox
vaccine.
Fig. 14.1 Caroline Herschel in her late 90s, still active and engaged with the scien-
tific community. As a young woman, she was desperate to escape a life of domestic
drudgery in Hanover and was rescued by her brothers in England, assisting William with
his music initially and then with his astronomy as this took off. Not long afterwards she
was making her own discoveries and achieving recognition in her own right for these.
She therefore represents one of the last women to chart a career in science without
having to overcome hostile institutional prejudice—largely because eighteenth-century
astronomy was still an amateur field, not requiring university education or society mem-
bership. The professionalization of science was probably inevitable, but it brought with
it all the discriminatory trappings of wider society.
Caroline Herschel, unknown artist, after an engraving by Georg Heinrich Busse, Hanover, 1847.
ETH Library. Public domain, via Wikimedia Commons. https://upload.wikimedia.org/wikipedia/
commons/4/40/ETH-BIB-Herschel%2C_Caroline_%281750-1848%29-Portrait-Portr_11026-092-
SF.jpg
1665, was a rapid best-seller, with its presentation of a new world as seen
through the microscope, helped by the author’s high-quality illustrations.
Over 150 years later, Charles Lyell was able to achieve the same level of
impact with his Principles of Geology (published in three volumes, 1830–
1833) and, indeed, was able to live off the royalties (a rarity nowadays with
published research output). These were times when a researcher from any
background stood at least some chance of recognition, assuming they could
publish, as there was a relatively small community to reach. Hence William
Herschel, trained as a musician and working as an organist in Bath, could
read up on astronomy, design and build a telescope in his back garden, dis-
cover Uranus in 1781, and be elected as a Fellow of the Royal Society the same
year. Similarly, his sister Caroline (Figure 14.1) could learn the same trade,
publish independently, receive a scientist’s salary, and hold a government
position, remaining widely respected and consulted up to her death in 1848 at
the age of 97.
As research in the sciences accelerated, the number of researchers
expanded and started to coalesce around institutions. Historically, healthy
research fields do seem to need communities of like-minded people for
collaboration and exchange of ideas, and the development of lines of com-
munication within and between groupings was an obvious driver of progress.
Also, the cutting edge of science became increasingly dependent on laborato-
ries, equipment, and funding—less easy to achieve at home, even for wealthy
men like Cavendish. While there were clear advantages in the shift from indi-
viduals to institutions (self-evident if you assume that research success has
been governed by its own natural selection processes), this sort of coalescence
does also tend to be accompanied by conservatism and exclusion.
3 Something he probably wouldn’t have welcomed, as a man devoted to simplicity and modesty.
⁴ No one could quite get their head around the implication that apparently solid materials consisted of
a lot of empty space between atoms.
Fig. 14.2 John Dalton (left) and Humphry Davy (right) both came from humble origins—
Dalton a weaver’s son from Cumberland, Davy a woodcarver’s son from Cornwall.
Both achieved prominence and reputation in physics and chemistry through prodi-
gious hard work, although very different in style. Dalton’s progress was steady evolution
as a respected teacher in Manchester and careful but opportunistic accumulation of
research, building towards his atomic theory. Davy clearly excelled at networking and
his charm and charisma made him a good fit for public lectures and experimentation
at the new Royal Institution. Here, he was influential in supporting Dalton and helping
gain him recognition, as well as discovering the young Michael Faraday. However, like
Newton, his later career was more political than scientific, whereas Dalton and Faraday
were more continuously productive. It does bring to mind the fabled race between the
hare and the tortoise.
a: Engraving of a painting of John Dalton (1895). b: Portrait of Sir Humphry Davy, 1st Baronet, FRS (1778–
1829), chemist and physicist. Source a: Frontispiece of John Dalton and the Rise of Modern Chemistry
by Henry Roscoe (William Henry Worthington, engraver, and Joseph Allen, painter). Public domain,
via Wikimedia Commons. https://commons.wikimedia.org/w/index.php?curid=3248482; Source b:
Wellcome Collection. Attribution 4.0 International (CC BY 4.0). https://wellcomecollection.org/works/
f48jat3q
Dalton therefore did manage to build reputation and influence but had
to overcome considerable disadvantages. Although the world of researchers
(and the world population in general) was still small in the early nineteenth
century, it was beginning to create environments that were more difficult
to break into than they had been previously. As research moved from an
amateur to a professional activity, it increasingly orientated itself around uni-
versities and other institutions. And the institutions, as Dalton knew and
took for granted, were already set up with deliberate exclusivity—on grounds
⁵ Or lack of religious affiliation, as the philosopher David Hume (Chapter 5) had found.
⁶ For example, eighteenth- and nineteenth-century Quaker industrialists such as Edward Pease, the
‘father of the railway’, or those who gave their names to companies in confectionery (Cadbury, Terry, Fry,
Rowntree), banking (Lloyds, Barclays), and manufacturing (Bryant & May, Clarks).
⁷ A lifelong interest—he kept a diary of the weather from the age of 21 and continued it till the day
before his death 57 years later.
⁸ The Dalton is the standard unit of atomic mass to this day. Interestingly, ‘Daltonism’ is also sometimes
used as a term for colour blindness, so his early work did bear fruit in the end. His data remained the
standard estimates for the heights of mountains in the Lake District until Ordnance Survey maps started
being produced in the 1860s.
⁹ Not all his acquaintances were scientific. Samuel Taylor Coleridge was an enthusiast for nitrous oxide
inhalation (of course!), and William Wordsworth asked Davy to proofread the Lyrical Ballads that he and
Coleridge had composed—possibly an unwise choice, as that edition ended up notoriously full of errors.
risk of devoting a lot of time and effort to writing about research and never
actually doing any. On the other hand, if well directed, writing of this sort
can help promote a field of enquiry and improve the visibility of discover-
ies as they emerge, as well as help synthesize individual findings into the
communication of more robust truths.
As with all writing, it helps to be clear at the outset what it is you’re
wanting to communicate, and to consider carefully what opportunities there are
for publishing it. In my own field of health research, for example, the aca-
demic journals tend to distinguish between ‘original research’ (i.e. a research
paper describing a single study and its findings; the sort we considered in
Chapter 13), literature reviews, and opinion pieces. Literature reviews bring
together multiple studies to draw conclusions from pooled findings—that
is, a vehicle for the consensus process and Bradford Hill’s ‘verdict of cau-
sation’ (Chapter 8). Those reviews that call themselves ‘systematic’ aim for
as complete as possible a coverage of previous investigations and are bet-
ter considered as original research, in that they have objectives, methods
(the approach taken to identifying and collating relevant studies), results (the
actual collation and synthesis of individual findings), and conclusions. How-
ever, some journals also accommodate what are sometimes called ‘narrative’
reviews, where the author does not make any claim to comprehensiveness or
objectivity in their choice of material. These might, for example, be assem-
bling literature to provide readers with a summary and digest of recent
findings on a particular topic, or of methodological approaches in a field.
They tend to be quite lengthy compared to original research papers and might
have reference lists running to 100 citations or beyond.¹⁰
A narrative review might be published as an article in a journal, following
peer review and editor approval, but the same sort of material might alterna-
tively be contributed as a chapter in a commissioned multi-author book. The
value placed on this format depends on the field of research. In the sciences,
books risk being already out of date before they’re published, so it’s worth
considering at the outset whether this is the right vehicle for dissemination.
However, even the faster-moving fields of research do tend to involve under-
lying truths that emerge more slowly and benefit from the level of consider-
ation that can be devoted to them in a five- to ten-thousand-word chapter,
particularly if the book as a whole is edited so that chapters are complemen-
tary and allow for broader themes and connections to be developed. Ideally,
there will be some discussion between editor(s) and publishers about this in
¹⁰ I guess an analogy in humanities might be the editing of an anthology, or the curating of a gallery
exhibition—both also opportunities to bring together source material in order to convey some broader
principle or underlying thread.
In the other direction entirely, there are important (and probably growing)
opportunities for communication through short-form writing. This is clearly
an area that has been transformed by the Internet, although I don’t think
the principles have changed very much. In my own field, the pre-Internet
publication opportunities lay particularly in ‘editorials’—brief, focused pieces
(usually considerably fewer than 1000 words with limits on references) pro-
viding an opportunity to say something that might be of interest to readers.
In medical journals, where they’re still widely available, these are probably
the articles that stand most chance of being read in their entirety, whereas
research papers tend to be scanned at the level of the abstract (summary)
and only read in depth by those who are actively investigating in that field
or considering application in clinical practice. So, composing editorials or
equivalent opinion pieces is well worth bearing in mind in terms of com-
munication and ‘getting out there’, even though they’re currently given little
value in most university metrics. Sometimes they’re free form in remit, just
saying something of interest; sometimes they’re commissioned to accompany
a research article that the editor wants to be highlighted, and so have to both
consider the source article itself and think through its wider implications.
Finally, at the most extreme end there are opportunities to submit letters
for correspondence sections, which are shorter still (maybe 100–200 words
if you’re lucky).
If a structure is important in keeping a 100,000-word thesis or book
together, it’s just as important at the small scale. In general, there simply isn’t
the space to make more than a few points, or perhaps just a single one, so
sentences and paragraphs have to be tightly organized in order to construct
the argument and take the reader along with you. I’m sure there are different
approaches, but personally I would go for successive drafting with proper rest
Social media and other online platforms have clearly expanded the scope sub-
stantially for wider, more informal communications about research without
the constraints of the publication process (submissions, rejections, revisions,
formatting, proofreading, etc.) and clearly with the advantages of immediacy.
Writing blogs to accompany research output has become routine nowadays,
as has apologizing to university communications colleagues if you don’t hap-
pen to subscribe to this or that social media platform.11 I don’t think that
there’s sufficient perspective yet on the digital age to judge whether partic-
ular communication strategies are helpful in achieving impact for research,
or whether it’s all just a waste of time.12 However, the principles for writ-
ing haven’t really changed very much. If you’re pitching an opinion piece to
a journal or if you’re composing the same thing for a blog, you still need
to have some sense of your readership, some purpose in writing, and some
structure to make sure that your prose is as readable as possible and gets its
point across. Practice is probably the most important factor—the more you
try writing beyond pure research output, the better you are likely to become
at it. In this respect, it probably helps if your writing isn’t confined to self-
published blogs and if you’re actually having to interest and enthuse an editor.
It’s a challenge I would recommend, but each to their own . . .
There’s an overlap here, of course, between research and journalism, and
probably a rather ill-defined distinction between the two. At some point,
publishing articles about research for researchers drifts into publishing arti-
cles about research for a wider readership, and there are of course junior and
senior academics from both the sciences and humanities who have crossed
over entirely into mainstream media. It has probably always been the case.
Dalton began writing for the general public as a journalist of sorts before he
11 A time management issue really; how on earth does anyone get any work done if they’re trying to
keep track of all this stuff flying around?
12 Just as it doesn’t seem clear whether the platforms are really any good for advertising more generally,
despite the money that’s poured in that direction.
managed to get a foot in the academic door. Davy, once he got his break at
the Royal Institution, combined celebrity and showmanship with scientific
enquiry of sorts—not the most rigorous technique, perhaps, but enough to
discover a few new elements, which is more than most of us might dream
of. And the journalistic side is as important as the academic: those who can
communicate their field sensibly to a wider public may have more impact
than any number of pure researchers. However, it’s an uneasy relationship.
On the one hand, researchers ought to make good journalists because of
the practice they have in writing for a less-than-captive audience. On the
other hand, researchers do tend to get used to writing for other researchers.
A good journalist really needs to be able to write fast to order, and the
process of communicating complexities has to involve some simplification,
which the researcher might view as corner cutting. Hence there’s a repu-
tation (often, but not always, deserved) of academic writing as obsessive
and precise to the point of being desiccated and unreadable.13 I suspect a
lot of academics would like to have higher media profiles (we’re all human
and everyone likes an audience) and some of the carping about journalistic
over-simplification probably does arise from jealousy. However, it’s also rea-
sonable to say that some erstwhile academics do drift to journalism because
they’re not particularly good researchers. As I said, it’s not always an easy
relationship.
Visualizing truth
13 For example, see Edward Casaubon in George Eliot’s Middlemarch (Chapter 3).
¹⁴ And, once again, there's nothing new in this—Plato's famous mind-picture of projected shadows on
a cave wall viewed by chained prisoners was just a way to visualize the idea of higher levels of reality.
Fig. 14.3 Dorothy Hodgkin, a leader in X-ray crystallography, looking happy about her
Nobel Prize win in 1964. It’s tempting to link her flair for visualization, underpinning her
pioneering research into molecular structures, to the copies she made of mosaics as a
child assisting her parents during their archaeological work in the Middle East. She pro-
gressed towards her research career by being an academic high-flyer at school, helped
by a supportive, encouraging family and what was, by the 1920s, a mature Oxbridge col-
lege system for women. Despite her politics, which were considerably left of centre, her
portrait hung in the Downing Street office of Margaret Thatcher, whom she had taught
at Oxford in the 1940s.
Dorothy Mary Hodgkin. Unattributed photograph from Les Prix Nobel 1964. http://www.nobelprize.
org/nobel_prizes/chemistry/laureates/1964/
¹⁵ Or, indeed, colleagues from your own institution. A lot of people whose offices are in neighbouring
buildings or on different floors acknowledge that they only properly get to talk to each other at a conference
halfway around the world or waiting in an airport departure lounge.
unavoidable part of academic life along with its other accompanying chal-
lenges (writing for publication, applying for funding, coping with rejections,
all the rest of it) and you’ll need to find a way to deal with difficulties if you
encounter them.
Everyone’s unique and will differ in their weak spots. Personally, I’ve never
found informal small-group sessions easy,¹⁶ but I've never had any particular
problem with large lecture theatres (which surprised me the first time this
was required, many years ago now—I had anticipated a whole lot of stage
fright, which just never happened).¹⁷ You'll discover your comfort and dis-
comfort zones soon enough, and will no doubt orientate your activity in the
comfort direction. However, there are limits and you’re likely to need to deal
with scenarios you find more difficult. For example, I avoid invitations to run
workshops if I can get away with it, and I’ve given up any expectation that I’ll
get over whatever psychological hurdle is causing me the problem, but some-
times these sessions are just inevitable and have to be endured. Likewise, if
you get nervous speaking to large audiences, you’re likely to need to face up
to it somehow.
At a basic level, the solution tends to lie with plenty of practice and repe-
tition. You also need to accept that you’re going to make mistakes along the
way and try to view these in a positive light, as learning opportunities, rather
than getting discouraged. And don’t ever assume that anyone else has had
an easy ride—even the really slick operators will have crashed and burned
at some point. There are many resources available to help improve lecturing
skills, with peer feedback often emphasized as particularly important. This
doesn’t have to be formal; it might just involve asking a supportive colleague
to give you some pointers when they’ve heard you speak. It’s helpful generally
to have a self-critical mindset (i.e. be prepared to accept that improvement is
always possible) although do keep the self-criticism within limits (i.e. be will-
ing to accept that you might, on occasions, have done something well). Also,
you can sometimes get direct feedback from the attention or boredom of the
audience, depending on your venue.¹⁸ Finally, when you find yourself watch-
ing one of these sessions, try to switch to a position of objectivity from time
to time, reflect on the speakers you admire or not, and work out why it is
that their talks are engaging you or boring you or annoying you (and you can
probably learn as much from the bad ones as the good).
The format and style of research presentations varies enormously between
disciplines, as you’ll find out if you ever attend events that are designed to
be cross-disciplinary. So, you definitely need exposure to talks from senior
colleagues and to develop a sense of what works and what doesn't.¹⁹ Science
presentations tend to orientate around slide shows on the standard platforms
available, whereas humanities presentations often involve pre-prepared prose
read out to the audience without visuals. Within slide-based presentations,
the expectation might be for bells-and-whistles visualization (animations,
embedded videos, etc.), or it might be for the simplicity of a few key messages
on a plain background. My colleagues in ophthalmology routinely present on
independent double screens (one for each eye), whereas I’ve never had to do
this myself and wouldn’t want to risk it. For my specialty, the submission
for a conference requires nothing more than an abstract (and research that
contains findings but is not necessarily finalized). However, my colleagues in
computer science will expect to submit a fully prepared research paper for
publication in the conference proceedings, which may rate as equivalent or
superior to an academic journal publication.
Beyond paying attention to the preferred format, so that you come across
as ‘belonging’ to your field, preparing a presentation involves similar princi-
ples to those we’ve discussed for academic writing. Knowing your audience is
obviously key to the pitching element, the language that you’ll use (on slides
as well as verbally), the images that you’ll show, and the way in which you’ll
convey your message. For example, if members of your audience are likely
to come from different research disciplines, you may need to incorporate
some orientation to your own field;²⁰ otherwise, they'll disengage at the first
technicality—and there’s nothing more irritating than listening to a speaker
who clearly doesn’t care who they’re talking to. Structure is just as important,
if not more so, in an oral presentation as in writing, because your audience
needs to be taken through the arguments you’re presenting. And instead of
word count limitations, you need to consider timing very carefully—over-
running is a discourtesy (and generally felt that way) to whoever’s chairing
your session, as well as to fellow speakers (whose allotted time you are rudely
encroaching on), and to the audience (who will start worrying about the
coffee break). If you’re using relatively simple slides without animations or
1⁹ Although it’s probably best to pay closest attention to presentations from colleagues a few years ahead
of you in experience, rather than those from senior professors—who are often following a format that was
fashionable in their youth, but which may now be a little tired.
²⁰ Without coming across as patronizing—that's the challenge.
videos, a good rule of thumb is to allocate one per minute and aim to finish
a few minutes short of your total allowance. Personally, I find slide presenta-
tions more engaging when the speaker isn’t reading directly off the slide, so
favour brevity and limited visuals, but I’m aware that preferences and expec-
tations vary. As a science researcher, I rely quite heavily on getting a narrative
in place in advance through the slideshow; I assume that speakers from other
disciplines who read out research papers verbatim or who talk off the cuff
will place a similar reliance on structure in their prose or lecture notes. With
speaking, as with writing, developing an argument is best done by taking your
audience gently through a coherent series of steps; otherwise, their attention
is liable to drift.
A poster is often an alternative to an oral presentation. Many confer-
ences accommodate both, and for some the poster is actually preferable—for
example, in the aircraft hangar events, a well-positioned poster may attract
more attention and generate more impact and valuable conversations than an
oral presentation in a poorly attended parallel session at some remote corner
of the venue. Poster formats and expectations also vary between disciplines,
so again you should take the opportunity to survey the field and work out
what you think looks best. Obvious considerations for poster presentations
include making absolutely certain in advance whether the display boards are
going to be portrait or landscape in orientation (there’s always someone who
doesn’t get the message) and thinking about the visual appeal of your poster
from about a metre’s distance. You’re looking for a strong visual to attract the
passer-by, a title that’s legible at maybe two or three metres, and a font size for
the rest of your text that doesn’t require someone to bring along a magnifying
glass. Finally, do make sure to turn up to talk about your poster at the time
allotted, and think about taking along some handouts and business cards,
as these are often the times when you get to meet like-minded colleagues—
the networking function that, if anything does, will ultimately resurrect the
physical conference after its recent pause.
Just as writing about research can drift from academia into journalism,
presentations to researchers merge into oral communications to wider audi-
ences. Pitching for interviews with newspapers, radio, and sometimes tele-
vision is increasingly part of the process of maximizing impact for research
output, and most institutions have a communications officer or department
to facilitate this. It’s therefore well worth looking for training opportunities
Research doesn’t come for free; it never has. To begin with, researchers need
to earn a living somehow. Then there are the additional costs of research
projects themselves. If the acceleration in knowledge generation over the past
500 years can partly be attributed to advances in communication, it has sim-
ilarly been accompanied by an expansion in the number of people engaged
in research (above and beyond the expansion in their nations’ populations).
This in turn has been accompanied by an expansion in the professional-
ization of research disciplines and in the funds and structures available to
support them. I think it’s fair to say that there hasn’t been a great deal of
planning in this process. Although particular funding streams and structures
have been set up with strategic objectives in mind, for the most part these
have been short term and subject to variations in leadership and priorities.
The job of the researcher, as well as actually generating and communicating
new knowledge, has therefore always involved a certain amount of scrabbling
around trying to make ends meet—tiresome, but the way life is nowadays,
particularly in the sciences.
In the early days, ‘pure’ research (i.e. knowledge generation for its own sake)
didn’t come with particularly high costs—or, looked at another way, was
constrained to questions and topics that didn’t require high costs. So, you
could feasibly live the academic life if you had at least some independent
income that freed you from working every waking hour to keep food on
the table and enemies away from the door. Pure research (Chapter 2), of
course, built on a background of applied research and discoveries since pre-
history (fire, metallurgy, agriculture, etc.). These, we have to assume, came
from small societies working full-time but incorporating what would now be
called R&D into their activities—that is, experimenting with different metal
alloys or different agricultural practices, looking for ways to improve, and
communicating any knowledge gained to the family or community, if not
more widely. At some point there would have been individual members of
primitive communities who were given time off from normal activities to
increase knowledge in this way, and who were recognized and supported
by the rest of their society as prototype academics or inventors. We can’t
be sure of this until written records begin, although there’s an interesting
theory about advances in ‘civilized behaviour’ coinciding with evidence of a
decline in circulating testosterone levels in our ancestors from around 65,000
years ago. So, it’s possible that reduced aggression and competition allowed
more cooperative societies to emerge and advance in knowledge, presum-
ably involving some level of specialization in tasks. Whatever the process,
it’s difficult to imagine a structure like Stonehenge being assembled without
someone being put in charge of logistics and absolved from the job of actu-
ally hauling the stones from their origin,1 even if this took place over several
centuries.
Early research was therefore made possible (or at least accelerated) by the
division of labour that accompanied the transition from hunter-gatherer to
settled agrarian societies. It's hard to escape the fact that researchers
are only able to make a living when there are other people doing the harder
and less-rewarding jobs, so what we do is probably more reliant on inbuilt
societal inequality than many left-leaning academics might like to acknowl-
edge. Certainly, philosophy and the beginnings of the pure end of research
in ancient Athens might have coincided with the emergence of what we now
call democracy, but definitely relied on a whole lot of disenfranchised slave
labour (and disenfranchised women) to support the few privileged men who
had the leisure time to sit around, think deeply, and discuss their ideas.
1 150 miles or so for the two-tonne stones, 15 miles or so for the 25-tonne stones—both formidable
enough challenges.
revealing where their Queen Helen was hidden.2 The land that he owned,
a little north of Athens, was named after him and became a revered centre
for intellectual and sporting activity. Several centuries later, it seems that the
philosopher Plato inherited property here and founded the Platonic Academy
in the fourth century BCE, presumably influenced by Socrates and wanting to
develop his line of thinking. The school operated for around 300 years until it
was destroyed by the Romans, but its influence continued through a revived
‘Neoplatonic Academy’ until the sixth century CE and in the more dispersed
centres of learning across the Byzantine and other Middle Eastern empires.
Although the Platonic Academy was set up with its founder’s private
income, there wasn’t a stable enough aristocracy in those days to sustain
that funding model, so the successor institutions were primarily dependent
on rulers and their interests. The Library of Alexandria, for example, was
founded and developed under the Ptolemies, the dynasty ruling Egypt in
the last few centuries BCE, who were keen to show off their prestige through
an assertive and well-funded programme of manuscript purchasing.3 The
manuscripts and the library attracted scholars, and Alexandria became an
academic hub. However, the obvious problem with this model was the depen-
dence on rulers to continue the funding stream. This was complicated further
by power structures and the never-helpful blurring of lines between academia
and politics (see Chapter 16). The inevitable happened at Alexandria in 145
BCE when Aristarchus, the sixth head librarian, supported the wrong side of a
dynastic struggle, which resulted in the aggrieved successor sending him and
other scholars into exile. Once the Roman Empire took over and its inter-
est in Egypt (and dependence on Egyptian grain) declined, so did Alexandria,
and scholarship moved on elsewhere.⁴
In what became a long-standing tradition, or at least necessity, research
activity followed the money and appeared again wherever support and suffi-
cient intellectual freedom were available, regardless of the regime supporting
it. For example, an academic community grew up in Baghdad in the ninth
century,⁵ supported and funded by the Caliphs of the Abbasid Empire
until its destruction by a Mongol invasion in 1258. Or there’s the example
2 An abduction this time by King Theseus (of Minotaur fame) before she was later abducted by Paris,
starting the Trojan War, and prefigured by Zeus’s rape of her mother while disguised as a swan. Rather
grim, really, and does rather make you wonder why Ancient Greek culture has so often been held up as a
touchstone for Western European civilization.
3 Particularly trawling Greek marketplaces, resulting in a sizeable export of written work.
⁴ It’s worth noting that academic traditions did take a long time to fade. Hypatia, the pioneering math-
ematician mentioned in Chapter 2, was leading a school at Alexandria and following in its tradition of
pre-eminence until her murder in 415 CE.
⁵ Commonly called the House of Wisdom, although it's unclear whether a specific academy of that
nature existed, and little survived after Baghdad fell.
Fig. 15.1 Sometimes researchers have to follow opportunities wherever they present
themselves. Ernest Everett Just was a marine biologist and embryologist with a grow-
ing reputation, and one of the few non-White researchers to receive a doctoral degree
from a major US university (University of Chicago, 1916). However, his skin colour still
prevented him from getting an adequate post-doctoral appointment. Therefore, at the
same time as the mass exodus of scientists in the other direction, Just began working
in Berlin and Milan, moving to Paris when the Nazis began taking control in Germany.
His work flourished in pre-War Europe, where he felt more encouraged, and one of his
particular contributions was to study cells in their natural setting, anticipating the live
cell imaging possible today. He was evacuated back to the US in 1940 but died from
cancer not long afterwards.
Photograph of Ernest Everett Just by unknown artist (unknown date). Public domain image. https://
commons.wikimedia.org/w/index.php?curid=191183
Later costs—employment
The combination of a secure personal fortune and the right talent
and mindset was clearly an advantage in those early days, as it freed
researchers from the vagaries of funding and patronage. Boyle (Chapter 10)
and Cavendish (Chapters 10 and 14) were both blessed with wealth and
⁶ Amongst other discoveries, Tycho plotted the course of a comet, showing that it passed through the
planetary orbits, disproving the old ideas about crystal spheres surrounding the earth. However, he still
proposed an Earth-centred model of the Universe, albeit one that allowed the outer planets to orbit around
the Sun (which itself orbited the Earth).
intellectual ability, allowing them to carry out their work in chemistry and
physics in the way that they pleased. These and other examples (Darwin in
Britain, Lavoisier in France, Benjamin Franklin in the US, Alessandro Volta
in Italy, Hans Christian Ørsted in Denmark) gave rise to the idea of the
‘gentleman scientist’, although it’s worth bearing in mind that the oppor-
tunities were also open to those women who had independent incomes or
support. Maria Sibylla Merian (Chapter 4) had sufficient freedom within and
after her marriage to produce her seventeenth-century botanical and ento-
mological descriptions, and to seek the training she needed in the graphical
and academic skills (natural history, Latin) for effective communication. She
published under her own name and was widely read and respected, even if
Linnaeus failed to cite her. In France, Émilie du Châtelet (Chapter 5) was for-
tunate to be the daughter of a court administrator to King Louis XIV, a father who was
already well connected with contemporary writers and scientists. As well as
access to this network, she received an extensive education and reached an
understanding with her husband after she had given birth to two surviving
children, allowing her to live separately,⁷ resume studies, and develop her
wide intellectual interests, including the experimental tests she carried out
on Newton’s theories of motion.
⁹ He wasn’t the first, but his predecessor had only lasted a year and was primarily a classics scholar,
whereas Quiller-Couch was a productive novelist and literary critic; he might also have been the model
for the character of Ratty in Kenneth Grahame’s The Wind in the Willows.
Other employers
It’s worth noting at this point that of course there are other entities besides
universities that accommodate researchers and administer funds. We dis-
cussed R&D in Chapter 12 and there are clearly any number of industries
that host and fund research, although generally with applied commercial
goals in mind, rather than knowledge generation for its own sake. Also, the
career structure on offer in the commercial sector is likely to be corporate,
rather than one oriented around research skills development, and there is
little ability or inclination to provide anything equivalent to doctoral
training. So, industries do rely on the university system to some extent for
researcher training, although that’s not to say that some of them won’t decide
to become more autonomous in future. Beyond this, there are public sector
bodies employing and supporting research—government departments and
healthcare providers, for example—as well as larger charities and sometimes
the research funding bodies themselves (e.g. the intramural programmes at
the US National Institutes of Health) or independent endowed foundations
like the Cavendish Laboratory (Figure 15.2). Finally, the university system
itself is an ever-changing landscape with a mixture of private and public pro-
vision between and within countries and a vagueness about what constitutes
a ‘university’ in the first place. A dividing line might be drawn between insti-
tutes able or not able to award a doctoral degree, but how central this will
remain as a feature of a research career is anyone’s guess.
So, like it or not, the life of a researcher tends to be oriented around fund-
ing. At a basic level, a salary and contract are needed, and then there is the
additional requirement of having somewhere to work (usually but not always
accompanying the employment) as well as, of course, the costs of the research
itself. Unless you’re very lucky to land a secure salary and to be working in
a field where you don’t need to buy any equipment or fund any travel, the
likelihood is that you’ll be looking out for money for most of your research
career—whether leading bids yourself or supporting the general effort by
your colleagues and employers to attract resource. I don’t personally see this
Fig. 15.2 Ernest Rutherford’s set up at the Cavendish Laboratory founded in Cambridge
in 1874. This was named after Henry Cavendish (Chapters 10, 14) and supported by a
bequest from a distant and similarly wealthy relative (his first cousin’s grandson, the
7th Duke of Devonshire). James Clerk Maxwell (Chapters 8, 12) was the founder, and
the Laboratory was particularly famous for hosting the atomic physics research group
under Rutherford, who took over as Director in 1919. Thirty Cavendish researchers have
won Nobel Prizes.
Sir Ernest Rutherford’s laboratory (1926). Science Museum London / Science and Society Picture
Library, CC BY-SA 2.0 <https://creativecommons.org/licenses/by-sa/2.0>, via Wikimedia Commons.
https://upload.wikimedia.org/wikipedia/commons/9/97/Sir_Ernest_Rutherfords_laboratory%2C_
early_20th_century._%289660575343%29.jpg
1. The overall idea (what it is that the project will deliver, why this is
important).
2. The plan for delivery (the methodology proposed, any logistical chal-
lenges and how these will be addressed).
3. The proposing team (whether they have sufficient track record and
expertise).
4. The hosting institution(s) (whether the facilities and wider environ-
ment are appropriate for the project).
5. The budget (whether this is realistic to support the study, i.e. not too
low, and demonstrating value for money, i.e. not too high).
The application form will therefore cover these aspects and proposals will be
judged and ranked accordingly. Particular research fields may have additional
requirements in their funding applications. For example, in health research
there are usually sections to complete on ethical issues (the external approval
to be sought, whether the project presents any particularly challenging fea-
tures), public involvement (how patients and the wider public have been
and will be involved in directing the study), communication plans (how the
findings, if relevant for health care, will get to the people/agencies who need
to know), and data management plans (goodness only knows—I’ve never met
anyone who feels confident about this section, but there are generally two or
three pages to be filled somehow).
So, how to write an application for a project grant? Well, clearly there are no
magic answers, or we’d all be putting our feet up. To begin with, do read any
blurb on the funding available and the application process, and if the funder
offers any workshop or symposium about their call, do sign up and pay atten-
tion. Some funding is open to anyone with a good project, but a lot of calls
have particular strategic objectives in mind, wanting to support research in
particular fields, or for particular questions. Also, some funders will favour or
restrict themselves to certain sorts of research and specifically exclude others.
Although you might get lucky with submitting something slightly off target,
your chances are going to be substantially diminished. And grant applications
do take a lot of time and effort to prepare, so it’s worth being quite focused on
opportunities with some chance of success (I don’t think any applications can
be said to have a high chance in terms of strict probability). At this point, it’s
also important to get an idea of the level of funding available. Many funders
will give an indication of the total pot allocated to a given call and how many
applications they’re interested in supporting. If they don’t, and if the funding
has been available before, try to find out about the size of studies that have
been successful. Again, something relatively expensive might stand a chance
if it’s a particularly important study being proposed (and one that clearly can’t
be delivered more cheaply), but you’ll have a much higher chance of success
if you can fit it into a more expected budget. On the other hand, don’t imag-
ine that a study with costs that are much lower than expected will be at a
significant advantage—the chances are that the project will look either under-
resourced or not important enough if it’s seen as unusually cheap, unless there
are good reasons for this (for example, if you’re applying for overseas funds
and exchange rates are in your favour).
Having established the broad type of study the funder is interested in sup-
porting, probably the trickiest element of the whole process is developing
the central idea for your proposal. I think this difficulty is partly because it
feels the wrong way around. All the core principles of research, as discussed
in previous chapters, focus on a question to be answered—the acceptance
of ignorance, the initial observations, the theories derived from these, and
the experiments to test these theories. Therefore, the natural process is for a
researcher to have a question in mind that’s been drawn from their experi-
ence and reading, and then to look around for resources to support a study
that will answer that question. However, many funding calls already have in
mind their broad questions or topics of interest and the job of someone look-
ing for research funding is to think of a study that will fit in with the funding
objective—which may not always be the study they would ideally like to set
up. We discussed the same challenge with fitting ideas to data resources and
‘solutions in search of a problem’ towards the end of Chapter 12 and I think
that split mindset is often required for financially supporting what we do—
focused like good academics on the questions that need answering, while also
keeping an eye out for opportunities like good inventors or entrepreneurs.
Either way, I think this is why the central idea in funding proposals can be a
tough one, particularly for more junior researchers who have had less time to
immerse themselves in their field of interest, but it’s just another one of those
challenges to overcome. As well as experience and developing the right mind-
set, it’s also worth creating and maintaining some flexibility in your ideas as
you progress as a researcher, assembling a few questions and interests to draw
on. My first supervisor talked about it in terms of pots on a stove, keeping a
few on the back burner simmering away so that they’re ready to be brought
forward at an opportune moment.
The central idea is particularly important for the panels that decide on (or
at least formally recommend) funding allocation. Panel members will gen-
erally be senior academics but not necessarily experts in every specific field;
they will be faced with a shortlist of proposals that have all been judged as
excellent by peer reviewers, but only a fraction of which can be supported
with the money available. So, all the projects are methodologically sound and
rated well by specialists, but panel members have to make a further choice
somehow. This final ranking tends to end up hinging on what the studies will
deliver and how important this knowledge generation is perceived to be by
senior non-specialists, as well as possible considerations of ‘strategic fit’ with
the funding call (and there will be funder representatives on the panel advis-
ing on this). Projects that are ‘good quality but a bit dull’ will inevitably be
weeded out at this point, so your proposal does need to convey a strong sense
of what’s known, what’s unknown, what the study will deliver, and why it mat-
ters. It also helps if this is expressed as clearly as possible, because a great idea
buried in impenetrable text stands a sizeable chance of being overlooked.
Beyond formulating this central idea, the rest of the application doesn’t
need to be complex, and the preparation process is more of an endurance
A way of life
Quite a large part of academic life is devoted to trying to apply these principles
in making arguments for funding support. There’s a rather broad (and unsat-
isfactorily gendered) term, ‘grantsmanship’, applied to this process, which
describes the expertise of those involved in putting together applications.
Some of this simply reflects experience and an instinct as to what might be
worth pitching, and some of it reflects the quality of writing applied to the
proposal—it does help if this is clear, readable, precise, and well structured,
just like research communication (Chapters 13, 14). Peer-reviewers and panel
members are only human, and no one is going to be well-disposed towards
dense, turgid prose or a proposal full of typos. However, ‘grantsmanship’
also describes skills in the process of submission, which is clearly going to
vary according to the discipline and setting where you work. The following
list provides a few principles that I think are likely to be common to most
circumstances.
1. Allow plenty of time. There are always occasions for rapid responses
with punishing deadlines, and you might get a personal kick out of
living on the edge in that way. However, you’ve generally got to bring
a co-applicant team along with you on a grant application, most of
whom are likely to prefer a calmer and more considered planning pro-
cess. Cajoling colleagues to help with a tight deadline once or twice
might be OK (particularly if it’s clear that you don’t have a choice
and the funder hasn’t left you very long), but it does start looking
discourteous if you’re doing this repeatedly.
2. Decide on and negotiate the co-applicant team early. The remit of the
funding call or study may determine this (e.g. expectations for multi-
ple institutions, coverage of multiple disciplines). However, you need
to be careful about costs, and it’s easy to swamp a proposal with senior
salary contributions (if you’re expected and allowed to claim these),
which may not leave enough for you to employ the researchers you
need to run the study. And it’s obviously much trickier to explain to
an invited co-applicant that you can no longer afford them than it would
have been not to approach them in the first place.
3. I’m told that there are sometimes dark arts involved in the politics of
applications and the co-applicants you need to have on board, so there
may be some cautious early negotiations to be conducted. I have abso-
lutely nothing to say on this—best have an informal chat with a senior
colleague if you’re in doubt.
4. If you’re in a small research field, you might not want to involve all
your friendly colleagues from other sites in an application or there’ll
be no one to peer review it (apart from the unfriendly ones).
5. Ask for cost calculations as early as possible because these will often
need to be carried out, and ultimately signed off, by finance staff at
your institution. Most of these staff, if you approach them with a last-
minute request, will know full well that you could have given them a
lot more notice if you’d been more personally organized. And sooner
or later they might say no, in which case, you’re not going to be able
to submit your application. As with point 1, it’s just another basic
courtesy issue.
6. You may need to prepare for several rounds of costing and re-costing,
as you try to get your budget to reach the desired amount—another
reason to get started early on this.
7. Unless you’re in a research field with massive equipment needs, the
majority of your costs are likely to come from salaries. Fitting into
a budget limit is therefore most likely to be achieved by altering pay
grade levels or durations of employment, rather than fiddling around
with more trivial categories like travel costs.
8. Do consider the feasibility of any salaries you’re including. Person-
ally, I try to avoid proposing any project with a less than 18-month
researcher salary, because who’s going to apply for a shorter-term post
and how are they going to get up to speed in time? If you’re confi-
dent that you’ll have a good field of applicants when you advertise a
shorter-duration position (or a pool of current salaried staff who can
be rapidly deployed), then by all means ignore this. It’s just that fail-
ing to get a funded study up and running because you can’t recruit
to it is sometimes worse than not getting the funding in the first
place.
9. You generally only need a short summary of the project when you’re
approaching co-applicants and requesting costings. The longer and
more detailed writing can be carried out while you’re waiting for these
things to come together.
10. Do try to submit at least a day before the deadline. Most funders now
operate online application systems which cut out irreversibly at the
published day/time, with no scope for a deadline extension (as
there used to be for paper-based applications). You’re not going to be
popular with your finance colleagues or co-applicant team if, after all
that work, you fail to submit because your computer crashed or your
ISP went down at the last minute.
¹⁰ A process that feels very similar at times to the informal assembling of teams in the school
playground, so don’t be surprised if it conjures up less-than-happy memories.
training, travel, and other incidental expenses (e.g. publication fees), but
often do not include any salary for assistants or substantial research costs.
Many of the principles of grant writing apply, as described in the last few
paragraphs, but there are important differences. Unlike project grants, the
adjudication process for fellowships tends to focus on three elements: the
candidate, the research proposed, and the hosting environment. Costs are
generally requested but don’t tend to be a deciding factor as they’re more or
less the same for everyone. Instead, the ‘package’ being pitched is a promis-
ing candidate who will gain clear benefits from the training and experience
on offer, and who will be supported by one or more institutions with appro-
priate resources for this purpose. The project and its general usefulness are
important, but not quite as central to the proposal as in a grant applica-
tion. What the funder ultimately hopes to get at the end of the process is a
new generation of well-trained researchers who will be leaders in the future.
Applications for these schemes go through the usual peer-review process, but
those shortlisted nearly always end up with an interview, whereas panel inter-
views are only occasionally used for project grant adjudication.11 Sometimes
externally funded fellowships or packages of personal support (e.g. doctoral
training studentships) are awarded to institutions as a block, and obtaining
one is therefore more like a job application.
11 Although this might become more common, now that we’re all used to virtual meetings, which make
interview panels a lot more logistically feasible than the old practice of travelling across the country to sit
in a boardroom for hours at a time and drink far too much coffee.
Wider politics
So, there are a variety of potential mechanisms for funding research, but not
all countries and not all disciplines adopt all mechanisms, and all you can
do is keep an eye out for what’s available (and perhaps a particular eye out
for any novel sources that others might not have noticed). Funding agencies
themselves have an unenviable task of deciding on mechanisms to distribute
and administer their awards, as well as being constrained by higher-level
influences on the funds they have available,12 and redirected by turnover of
senior staff. As a result, and as mentioned earlier, strategies can often change
uncomfortably for the research community—anyone who has been in any
field for ten years or more will probably have witnessed or heard of the nasty
consequences of a funding shortfall. For example, centres of excellence are
sometimes set up and provided with infrastructure support, often brokered
by a productive and politically influential group lead. When that leader retires
(or moves on, or becomes politically less influential for whatever reason),
the highly productive, long-standing research group might possibly remain
intact, but there will be a strong temptation for the funding agency to cur-
tail the support in order to increase resources to other groups, or at least to
generate some flexibility in their portfolio—and new opportunities always
look more attractive than maintaining something that’s been around for a
while. The results at a personal level can be unpleasant and there’s rarely much
thought given to legacy, so it’s probably wise never to assume that anything
will last. If we can be objective and sceptical about research questions, then
we can apply the same to the motivations and pressures of our employers or
funders.
Strategically, there are a few underlying contradictions that are not often
acknowledged by funding agencies, but that are worth bearing in mind at
times when you’re trying to negotiate the system. For example, it’s quite
12 For example, because of changes in government spending priorities when this is the means of support,
or because of investment income fluctuations for independent foundations.
13 Not least because I suspect you can keep more institutions happy and engaged at a lower cost through
a single funding award spread thinly than through multiple individual awards.
around for long enough to evaluate, but it seems likely that the success or not
of a collaborating-while-competing network is going to depend strongly on
the personalities of its constituent leads.
The final point is one that’s perhaps not the wisest to make while I’m still
in employment, although I know I’m not the only one to feel this way. I think
that there is such a thing as too much funding. Large awards do sound attrac-
tive, and any academic allowed a word in the ear of someone with money will
always argue for as much as possible—why wouldn’t they? However, there
are large volumes of work entailed in their administration (all the posts to
appoint and line manage, the extra space and facilities they’ll require, the
time taken in coordinating component work packages, etc.) and the situa-
tion can arise where managing a huge award is more problematic than if the
funding application had never been successful in the first place. In addition,
the success of very large awards can be next to impossible for the funder to
scrutinize, particularly when there's too cosy a relationship between the awarder
and awardees (academic institutions and their representatives, for example,
involved in both the allocation and receipt of funding). Unless there are tight
monitoring processes, failures may only be picked up when it’s too late to
do anything, by which stage no one wants to know. However, tight moni-
toring processes require administrative resources on both sides; otherwise
the groups meant to be delivering research output will be incapacitated with
paperwork and progress reports. I suspect that it’s not dissimilar to govern-
ment procurement and the occasions where money disappears into chaos and
non-delivery. Even the largest research funding allocations pale into insignif-
icance next to the worst government procurement failures, so perhaps it isn’t
something to get unduly worried about. And some large awards are well
administered, have clear vision, and do genuinely deliver value for money—
tight and effective project management is usually involved, ideally closely
integrated with those delivering the research so that everyone is clear on the
objectives and sharing the vision. Perhaps the only solution is to have large
awards administered by wise and disinterested experts—which is a little like
the impossible ideal of benign dictatorship for political leadership.
16
Power and politics
Perhaps these new digitalized and distributed times will witness the renais-
sance of the researcher who works alone and independently, connected
virtually rather than physically with their academic community. Or perhaps
there are already lots of people working in this sort of way and I’m behind
the times. However, I think it’s still reasonable to assume for the moment
that most researchers, or at least those in early stages of their career, are
based in an institution of some sort, most likely a university. And if you’re
employed (or a student) at an institution, then you have a supervisor/line-
manager and a wider group of senior colleagues with influences on your
working life, not to mention all the fellow researchers at your own level, or
more junior, with whom you may have rather complex part-collaborating,
part-competing relationships. Some of these will be colleagues at the same
institution, while others will be researchers in the same field across other sites
nationally and internationally. Within this tangled web of connections are
any number of relationships, and within these relationships there are likely to
be power structures—some formal, others implied; some benign, others less
so. Unless you’re very lucky, negotiating these relationships is going to occupy
a significant part of your life, although, as with research funding (Chapter 15),
there’s probably nothing unusual in this—it would be much the same in any
other line of business.
there are challenges in achieving all of these metrics,1 hard work ahead, and
a certain amount of luck required, but there’s usually plenty of scope for apti-
tude and application to receive the rewards they deserve and simple lines on a
CV to reflect this. In a well-functioning research institution, there also tends
to be the professional space for everyone to work away quietly on their area
of interest without causing trouble for anyone else. Funding is theoretically
a point of competition, but it needn't cause difficulties if adjudications are
perceived to involve fair and independent processes. When conflict does brew up in
relation to awards, it tends to be around wider perceptions of unfair influence,
which ought to be avoidable.
Interestingly, office accommodation does tend to be an enduringly pre-
dictable battleground and the focus for any number of long-running feuds—
or at least that was the case in the pre-pandemic days of physical environ-
ments when space was at a premium and you could attract any level of
funding and still have trouble housing your staff due to limited desks. I guess
it’s sensitive because of being a hard-wired territory protection issue, tapping
into primitive and even pre-human instincts, and I don’t think academics as a
whole are well-endowed with the sort of self-awareness needed to spot this. Also, it's
quite a visible marker of leadership ranking. If you’re in charge of a research
group and someone else successfully claims some of your desk space, all your
junior staff are going to know this, whether you accept it with good grace or
not—not easy if leadership doesn’t come naturally and you’re already feeling
insecure. However, this sort of thing does tend to blow over quickly and is
usually an internal matter that no one’s going to want to go public about. The
academic conflicts that are potentially more damaging and do often end up
publicized, or at least widely known in the community, tend to be rooted in
quite nebulous issues of reputation or self-worth, particularly as reflected by
recognition and perceived influence.
1 And don’t ever fall into the trap of assuming that the metric of importance at the moment will be the
same in a year’s time; it’s best to cover your bases—for example, number of publications generally, number
of lead-author publications, impact of publications, research funding, other measures of esteem.
2 For example, obtaining an Oxford University chorister’s position during a period when choirs weren’t
allowed to perform under Cromwell’s puritan regime, allowing him time to hang out with, and earn some
extra money from, the scientists there.
out of history for centuries to come,3 it’s hard not to conclude that the two of
them were as bad (or damaged) as each other. Both came from humble back-
grounds, both were very much self-made men and extremely intellectually
gifted, but neither seems to have had much of an ability to see another per-
son’s perspective. In Hooke this came out in fairly tactless, unfiltered, prickly
irritability, although he was sociable and had a number of close friends.
Newton, on the other hand, seems to have been solitary, sullen, and (given
the 34 years between his ‘resolution’ with Hooke and the ‘disappearance’ of
Hooke’s portrait) capable of bearing a grudge over a very long period indeed.
And Hooke wasn’t the only recipient. Newton had another long-running dis-
pute with the German mathematician Leibniz over who invented calculus,
although they probably both devised it at the same time, just as Hooke was
coming close to proposing the inverse square law of gravity and had corre-
sponded with Newton on the subject four years before Newton proposed it
publicly in 1684.
The problem with ‘reputation’ in research is that advances are to some extent
inevitable—as mentioned previously, it seems that sooner or later knowledge
will accrue, regardless of who carries it forward. Newton was revolutionary
but his theories would have come to light through someone else if he had
never existed (through Hooke, for instance, who had very nearly got there,
or perhaps through Halley, who understood such things). So, researchers are
not unique originators in the same sense as Shakespeare or Van Gogh, whose
works definitely wouldn’t exist without them. Instead, we’re really just riding a
wave of human discovery that’s taking us wherever it is we’re going. However,
most researchers don’t see themselves that way, and the way in which human
discovery is written about (including here, I guess) does tend to focus on
the discoverers, the individuals, as if they really were Shakespeares or Van
Goghs. Everyone has to get their self-esteem from somewhere, so reputation
and influence do make a difference.
If it’s common for discoveries to occur simultaneously (because sufficient
knowledge has ‘matured’ to make this possible), there can therefore be an
unedifying race to first publication, and unedifying acrimony if it’s not clear
3 Although one of the things that didn’t help Hooke’s posterity was the fact that his notebooks were
considered unpublishable throughout the nineteenth century because of all the information on liaisons
with maidservants in his employment; also, his mistresses included his teenage niece. Newton, on the
other hand, was helpfully celibate—or at least heterosexually celibate and very discreet otherwise.
who got there first. We’ve encountered this in the other particularly famous
pairing of Darwin and Wallace (Chapter 14)—by all accounts a lot friendlier
than Newton and Hooke, but still with a clear winner and loser. Other exam-
ples include Priestley and Scheele discovering oxygen (Chapter 10), Gauss
and Bolyai coming up with non-Euclidean geometry (Chapter 7), or the race
to decipher hieroglyphic script after the French discovery and British seizure
of the Egyptian Rosetta Stone (Chapter 12). In theory, arguments about the
applications of research ought to be a less emotional affair because there’s
some means to weigh up the outcome. However, there wasn’t much evi-
dence of this in the late nineteenth-century (acrimonious and often dirty)
battle between Thomas Edison and George Westinghouse over whether DC
or AC electricity provision should be adopted in the US. Although a lot of
money was at stake, reputations and personalities were dominant factors and
the conflict only began to subside once the intransigent Edison had lost his
majority shareholding in the company that would become General Electric,
and thus had been eased out of the picture.⁴
Early theories and discoveries in thermodynamics were another case in
point. William Thomson (later Lord Kelvin) in Scotland and Hermann von
Helmholtz in Germany fought it out, with their rival supporters, in the mid-
nineteenth century as the first (conservation of energy) and second (heat
flowing from hot to cold) laws of thermodynamics were conceptualized and
applied to tasks such as working out the possible ages of the Earth and Sun.
The German physician Julius Robert von Mayer had actually got there before
both Thomson and Helmholtz, although his publications in the 1840s on the
subject went unnoticed by physicists at the time. Earlier still, the French mil-
itary engineer Nicolas Léonard Sadi Carnot had also laid important ground-
work, but his death in the 1832 cholera epidemic (Chapter 1) meant that his
belongings, including paperwork, had to be destroyed to avoid contagion, so
it’s not possible to know how far he progressed. At least Carnot was beyond
caring about his reputation as the field took off and names started to be
made. Mayer, on the other hand, became profoundly depressed at the lack of
recognition, attempted suicide in 1850, and was confined in a mental health
institution. Luckily, in the end his work was acknowledged and appreciated
with several prestigious prizes, and he recovered his health in his later years.
John Waterston was less fortunate. Taking early retirement in Edinburgh after
teaching civil engineering in India, Waterston had devised and self-published
⁴ One of the more underhand strategies in that dispute was when the Edison company (supporting DC
current) successfully lobbied for the first electric chair execution to be carried out using AC current, thus
trying to imply that it was more dangerous. Westinghouse in turn had to hire lawyers to try to defend the
murderer sentenced to be executed in that way. He lost that battle (the murderer was executed, albeit very
messily) but won the war.
theories about the kinetic energy of gases that received little recognition when
circulated in the 1840s. He was left embittered and reclusive by the experi-
ence, which didn’t help matters. His theories were independently developed
by James Clerk Maxwell (Chapter 12) among others, and it wasn’t until 1892
that someone noticed and publicized Waterston’s original work. However,
this was too late for Waterston himself—nine years earlier he had walked out
of his house and was discovered drowned in a local canal.
The early history of thermodynamics doesn’t appear to have been a par-
ticularly happy one. As well as the problem of simultaneous discovery,
recognition clearly depended to some extent on where you worked and who
you knew, as well as on the state of the scientific community at the time.
Mayer and Waterston, the losers, were simply not understood sufficiently
by their contemporaries, and the community (conservative, like any com-
munity) needed to have advanced a little in order to accommodate the new
ideas. There were more mundane issues as well. For example, it didn’t help
that Waterston’s key theories were contained in a book called Thoughts on the
Mental Functions—hardly an informative title. I suppose it also didn’t help
matters that the field of thermodynamics was becoming so important⁵ at a
time when research was still in the process of shifting from people working
alone to those in universities or other institutions—that is, in a structurally
rather vague period between the gentleman scientists of the eighteenth cen-
tury (Chapter 15) and the employed research groups of the twentieth. Feeling
unclear where you stand in the scheme of things, or having a sense of being
ignored and misunderstood, are not conducive to contentment. This was
particularly the case as progress remained very personalized with prizes
and society memberships, as success fostered more success through influ-
ence over funding and other resources, and as research became increasingly
expressed in the stories of ‘great men’ of the past. And the big portraits in the
institution entrance halls were always male. And white.
⁵ Along with electromagnetism, a bridge of sorts in physics between Newton and Einstein.
men she helped over the years because of her exclusion from the scientific
community of the time (as a non-Anglican as well as a woman; Chapter 4),
or the biologist Ernest Everett Just having to pursue his research in 1930s
fascist Europe because his ancestry excluded him from appointments at suit-
able American universities (Chapter 15). There’s also the disturbing example
of Alice Ball (Figure 16.1), a highly gifted African-American chemistry
researcher at the University of Hawaii who, in 1915, devised and developed,
at the extraordinarily young age of 23, what would become the standard
treatment for leprosy. Tragically, she died a year later,⁶ and her work was
appropriated wholesale by a senior male (white) colleague and taken forward
but passed off as his own with no credit or citation. It took a considerable
Fig. 16.1 Alice Ball, a chemistry graduate, discovered what would become the stan-
dard treatment for leprosy before she died at the age of 24. I guess more than one
story can be told. On the one hand, there’s the injustice of the senior colleague who
took her work after she died and essentially passed it off as his own. On the other hand,
at least her contribution was belatedly acknowledged by another senior colleague and
sufficient records were kept, allowing her achievements to be unearthed by later histo-
rians. And there’s now an ‘Alice Ball Day’ every four years in Hawaii, where she made her
discoveries.
Photograph of Alice Augusta Ball, photographer unknown (1915). Public domain, via Wikimedia
Commons. https://commons.wikimedia.org/wiki/File:Alice_Augusta_Ball.jpg
⁶ Tuberculosis was listed as the cause of death, although there are suspicions about chemical exposure
related to her occupation and an altered death certificate.
Fig. 16.2 Rosalind Franklin at work. Like Alice Ball's, Franklin's story can be told as one of
injustice, missing out on credit and recognition in the race to discover DNA because she
didn’t get on with Maurice Wilkins, her colleague at King’s College London, who may or
may not have passed on her work without her knowledge. On the other hand, why on
earth does the whole thing have to be seen as a race anyway and isn’t that just per-
petuating an alpha male cliché about research and the myth of ‘discovery’? Besides, it
sounds as if she gave as good as she got. I quite like the description of Franklin unnerv-
ing colleagues with her intense eye contact and concise, impatient, and direct manner,
whereas Wilkins was shy and avoided eye contact—it’s easy to imagine the dynamic in
the laboratory.
Photograph of Rosalind Franklin, MRC Laboratory of Molecular Biology (1955). From the personal col-
lection of Jenifer Glynn. CC-SA 4.0; <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia
Commons. https://commons.wikimedia.org/wiki/File:Rosalind_Franklin.jpg
and Crick, possibly without her knowledge via her senior colleague Mau-
rice Wilkins, although there is some dispute about this. Whatever went on
with the X-ray, Franklin’s work was undoubtedly fundamental to the double
helix theory put forward in 1953, but it was Watson, Crick, and Wilkins who
shared the Nobel Prize in 1962. There are Nobel rules to justify this—Franklin
had died in 1958,⁷ the prizes aren’t awarded posthumously, and aren’t shared
between more than three people; however, it does leave a bad taste. It doesn’t
help matters that the laureates were the three men, although perhaps the fault
is more in the idea of awarding prizes in the first place. There was some jus-
tice when King’s College London opened its ‘Franklin Wilkins Building’ in
2000 (i.e. with the names that way around); it’s just a shame the acknowl-
edgement couldn’t have been associated with something architecturally more
attractive.
Not victims
Fig. 16.3 Katherine Johnson, mathematician and NASA scientist, pictured at work in
1966—possibly about to put an early computer through its paces, making sure that it
could keep up with her calculations. Her work on trajectory modelling was key to the
Apollo 11 Moon landing and the safe return of the Apollo 13 crew when they had to abort.
Widely and appropriately lauded, Johnson was well aware of the potential challenges
she faced in her career, but just stared them down and trusted that her abilities would
be recognized. Which they were. And are.
Photograph of Katherine Johnson, NASA (1966). Public domain, via Wikimedia Commons. http://
www.nasa.gov/sites/default/files/thumbnails/image/1966-l-06717.jpeg
against you. However, it’s also possible to go a little over the top on this one,
and paranoia to the point of delusion is not unheard of if stress levels are
high enough. A few things are therefore worth bearing in mind, if you can
consider them at a moment when you’re feeling objective and rational:
As a final point, it’s worth bearing in mind that bad ideas get stolen as well
as good ones. I can think of at least two occasions when, in retrospect, I was
very glad that someone had stolen (and taken full ownership of) an idea that
I’d originally proposed—each time, the idea was flawed, the results weren’t
good, but it wasn’t me who had to carry the can. So, there is an upside.
1⁰ The others were of James Clerk Maxwell and the philosopher Arthur Schopenhauer.
11 Although supported in quite different ways—encouraged but not funded in London, funded but with
obligations to the Crown in Paris—resulting in differences in ethos which may or may not have a bearing
on the Industrial Revolution arriving earlier in Britain than in France. One for the historians.
In the modern era, the rise of institutions around both researchers and politi-
cians has tended to act as a cushion, keeping the two apart, so there’s a lower
likelihood of a researcher exerting direct political influence. I suspect this
is generally a good thing—I’m not sure that research and politics make for
a very good mix. For a start, there are horror stories like that of Lysenko
(Chapter 11), where a misguided researcher is given free rein with govern-
ment policy and 30 million people die in the ensuing experiment. Researchers
may have strong views about the implications of their work, but they’re not
really trained to implement and own those implications—and they ought
to be viewing them as experimental and subject to falsification anyway, if
they have any integrity. Politicians, on the other hand, are there to make and
accept responsibility for decisions, most of which will be based on incomplete
evidence at the time. This accounts for the slightly tense atmosphere that
tends to be present on the occasions when politicians and academics are in
the same room together. The politician is likely to be looking for an answer,
the academic is likely to hedge their opinion with uncertainty and therefore
won’t answer the question in the way that the politician wants. Alternatively,
the academic will answer the question more directly but have to bury their
own uncertainty and feel uncomfortable as a result because this isn’t really
what research is about. An academic in regular contact with politicians may
become an expert in this split thinking and as a result may be able to bury
uncertainty without even noticing it or feeling guilty, which does sound a
little like losing your soul.
Economics is a field where research and politics are particularly likely
to meet, for obvious reasons. Indeed, political parties or factions are fre-
quently associated with, or almost defined by, a particular school of thought
in economics—for example, from left-wing Marxism to less-left-wing Key-
nesianism (Chapter 8), to right-wing monetarism. The political adoption
of economic theories can be viewed as real-world experiments of sorts
(Chapter 11), although the actual relationships between the theory and the
policy are often quite hard to delineate if you’re evaluating the outcomes of
these experiments. I may be wrong, but I imagine that economists in govern-
ment are quite practised in asserting the theory they espouse and don’t tend
to adopt Socratic positions of ignorance very often. If so, this may be better
classed as propaganda than research and better considered in another book.
Public health is another point of contact between research and politics, and
one that is rather prominent in everyone’s minds at the time of writing. This
will be considered in more detail in Chapter 17.
Of course, it’s all very well to say that research and politics are best kept
at arm’s length from each other, but from time to time there is bound to
be a need to speak truth to power, whether power wants to hear it or not.
As mentioned, academic institutions traditionally thrive when kept free of
undue influence, but funding that freedom is a risk for any ruler or government,
and governments have a tendency to view their nation’s universities with dis-
trust. In the UK, although universities have a reputation of being hotbeds of
left-wing thinking, I haven’t noticed much difference in levels of wariness
from left-wing compared to right-wing governments—it might therefore be
more a concern about the principle of clever people (and/or truculent stu-
dents) gathering in groups, rather than what they’re actually talking about.
However, most countries accept that universities serve a purpose of some
sort and can generate prestige, so tend to tolerate the grumbling from that
quarter. Whether governments expect (or want) significant research out-
put from their universities is another matter, and the internationalism of
research nowadays means that the talent in that respect will gravitate towards
settings that are pleasant to work in, well-funded, and free from interference.
12 Or worse, such as the fate of those disagreeing with Lysenko’s agricultural science theories in Soviet
Russia.
13 These arguments used to include suggestions that Shakespeare fabricated Richard’s hunched back to
make him more villainous, but then Richard’s body was found in 2012 under a car park in Leicester and
the skeleton did indeed show severe spinal deformity.
wide relevance for behaviours and lifestyles. Chapter 8 considered the process
of consensus and how this has been important for consolidating knowledge
in fields where demonstrative experimental evidence can’t be obtained—that
is, where a theory can’t be proved and is difficult to disprove as well. Anthro-
pogenic climate change is an example of this process and clearly there are
elements of that debate that continue to a varying extent,1⁴ much to the
annoyance of the scientific consensus. However, the problem is that it was
always a gradual process, as any consensus involving so many different
strands of evidence should be. At some point climate change didn't exist at
all because there were insufficient greenhouse gas emissions; at some point
later, it was detectable but very few scientists were taking it seriously, or at
least very few were accepting the anthropogenic element. But then at some
point the consensus began to emerge. And, at some point in that consensus
process, the debate was (or could have been) a healthy, friendly one where
scepticism was just good scientific rigour. But then at some point, the scepti-
cism was interpreted as damaging to the consensus, felt to be more misplaced
or vindictive, and/or coming with trappings of dodgy funding and conflicts
of interest. And then the scepticism is called ‘denial’, and the sceptics are
placed in a category with other people labelled as ‘deniers’—of course, they
get angry about that, and the time for calm discussion has long since passed.
And of course, it doesn’t help that the outcome matters so much and has pro-
found implications—not only on the longer-term future of the planet and the
potentially catastrophic consequences for the next generations (who are now
quite reasonably questioning why they’ve been left with this legacy), but also
on the shorter-term costs and loss of livelihoods that are likely to occur as a
consequence of the measures put in place to avert the catastrophe. If there
weren’t the shorter-term costs, there wouldn’t be any controversy in the first
place, or at least it would be a lot less public. I remember living through CFCs
being banned from refrigerators and aerosols, and lead being removed from
petrol—both of which were accompanied by grumbling from the industries
affected. However, they happened, nonetheless. Interventions to limit global
warming have much wider public consequences, so it’s not surprising that the
battle is difficult, even without social media echo chambers and commercial
interests lurking in the background.
This illustrates why research doesn’t mix well with power and politics.
Knowledge tends to accumulate gradually, but no one outside the research
1⁴ That is, more around what’s to be done about it than whether temperature levels are genuinely rising.
The Eye was right to keep asking questions on behalf of parents. There have been
plenty of medical scandals exposed by investigative journalism, and plenty more
to expose. This could have been one, but it wasn’t. By the time of the second
Cochrane review,¹⁵ the Eye should have conceded the argument.¹⁶
1⁵ A systematic review of collated evidence; the one referred to in this piece was published in 2005.
1⁶ http://www.drphilhammond.com/blog/2010/02/18/private-eye/dr-phil%E2%80%99s-private-eye-
column-issue-1256-february-17-2010/
So, the process of expert consensus, while it’s occurring, can be hard to dis-
tinguish from a conservative establishment rounding on a helpless individual
trying to fight their corner. And it doesn’t help matters that research com-
munities have failed to recognize innovation time and time again in the past,
and that medical research has had its darker moments (e.g. obviously the US
Tuskegee Syphilis Study discussed in Chapter 11, but plenty of others); also, it
isn’t generally hard to find or imagine commercial vested interests on the side
of the establishment. On the other hand, there are equally vested interests
in being anti-establishment: careers made, books sold, support from like-
minded sceptics. These are not the sort of things people typically include in
declarations when they publish1⁷ so tend not to get much further than a vague
reputation among colleagues in the field. As highlighted in Chapter 3, always
be cautious of an academic who’s never changed their mind. Perhaps it’s the
sort of thing of which investigative journalism needs to be more aware, and
one more element of academic ‘power’ that does tend to complicate research.
The MMR controversy resulted in a noticeable drop in vaccination rates and
therefore, undoubtedly, gave rise to otherwise-avoidable illnesses and
deaths. What's more, social media was getting well underway dur-
ing the course of the arguments, adding fuel to an anti-vaccination movement
that hasn’t looked back.
In conclusion
This feels like a chapter in search of an answer and I’m not sure I’m able or
qualified to provide one. Academic power games have occurred ever since
research output came with a desirable reputation (rather than just hassle from
the Church) and ever since the myth of discoveries by individuals began to be
perpetuated. Infighting and toxic rivalries are tedious but do still grumble on,
flaring up from time to time. The shift in many fields towards institutions and
research groups, rather than individuals, may help matters, and the encour-
agement by funders of multi-site and/or multidisciplinary networks might
improve things still further. The increased visibility that research groups pro-
vide, and the increased co-dependency of practitioners, may also reduce the
risk of problematic power dynamics and abusive or exploitative relationships.
The lurid stories that any senior researcher will have heard from a generation
or two ago are hopefully fading into rarity, now that problematic behaviour
is more widely witnessed when it occurs, and now that employers are more
1⁷ That is, not just ‘I’ve received research funding from X and Y pharmaceutical companies’, but also
‘I’ve been going on about this for years and might be disinclined to change my viewpoint as a result’.
willing (or at least legally obliged) to take disciplinary action. So, I suspect
that academic environments may shift towards becoming quieter and more
supportive—possibly more boring than they used to be, but who really needs
obnoxious professors to gossip about anyway? And have the obnoxious ones
ever been particularly productive?
I certainly don’t claim to have any answer to the challenge of communi-
cating consensus opinion or of defining who it is that should be involved
in the consensus in the first place. As someone whose research has at least
some roots in public health, I find it difficult to suggest a complete with-
drawal from political and public engagement. John Snow put forward his view
that cholera might be prevented if water supplies were cleaned up, and we
shouldn’t be afraid of passing on equivalent messages today. However, the
spirit of Socrates is still buzzing around somewhere, and it does feel com-
promising to be moving away from that point of objective scepticism we
started with in Chapter 3. I think for most researchers, this is sufficient. Dis-
covery brings ample rewards, and power, influence, and recognition are not
really what makes a research career worthwhile, although you may have to
face up to them at times.1⁸ Power ultimately tends to skew priorities and
breed conflicts of interest, as well as making otherwise intelligent and tal-
ented individuals behave rather badly towards each other. Which is never a
good thing.
1⁸ The ‘imposters’ (triumph and disaster) in Rudyard Kipling’s If poem come to mind.
17
How to be a Researcher
Some Conclusions
As discussed throughout this book, research isn’t really anything more than a
by-product of our species’ curiosity. Our inquisitiveness about the world, and
willingness to learn by experiment, were there at the beginning and, we have to
assume, are hard-wired into what we are. There have been clear points of accel-
eration, such as when we shifted to agriculture-based communities, allowing
us (if we weren’t doing this already) to specialize and allocate individuals
to research or invention amongst other duties. Also, advances in written
communication were important, allowing us to store knowledge, rather than
have to rediscover it every time a community failed to pass it on. Printing
was a key factor underlying the last 500 years of advances, not only because
of the more rapid and wider communication on offer, but also because it
resulted in a higher probability of effective storage and archiving of knowl-
edge accrued—important for research because, as discussed, knowledge is
sometimes accrued ahead of its time and needs to be rediscovered when
the conditions are right. Whether the digital revolution will have a similar
influence remains to be seen—it might accelerate the process still further,
or it might have the opposite effect by swamping the world with volumes
of information that it can’t handle, prioritize, or sift from disinformation.
In the course of the expansion in knowledge over the last few centuries,
there have of course been technologies developed that have enabled further
progress (whether developed for that purpose or not), the research com-
munity itself has expanded beyond recognition, and institutions and career
structures have evolved to hold that community together. The number of aca-
demic specialties carrying out what can be called ‘research’ has also grown to
near-comprehensive coverage across the humanities and sciences.
Unchanging fundamentals
To begin with, it's important to recognize that leading researchers have come
in all shades of personality—humble and devout (Faraday, Dalton), para-
noid and vindictive (Hooke, Newton), profoundly introverted (Cavendish,
Darwin), social and flamboyant (Davy, Leclerc), prickly and combative (Ros-
alind Franklin, with justification), arrogant (Einstein, with justification), and
all the rest. As discussed in Chapter 16, power dynamics have sometimes
can find junior researchers to do it for them. However, this situation is rare,
and it seems a strange career to be choosing on that basis. It doesn’t mean
that you have to be naturally gifted or a good writer at the start, but it is
something that you’ll have to work on because practice (with feedback) is
really the only way to achieve this. An ability to give clear verbal and/or
audio-visual presentations is also helpful but probably less important—or at
least there are certainly plenty of successful professors who don’t seem to
have picked up presentation skills along the way. And the need to attract
funding rather depends on what discipline and setting you’re working in—
some institutions and research fields expect this more than others. On the
subject of funding, do be careful that obtaining grants doesn’t become too
all-consuming an activity—it’s quite possible to become very successful at
this but to lose all sense of advancing knowledge, moving from grant to grant
without really achieving much by the end of it all. As discussed in Chapter 16,
there’s something about attracting funding that’s opportunistic and that can
distract from central research questions that need answering, so a balance is
needed somewhere along the line.
Dealing with rejection is certainly something you’ll have to get used to in
a research career, as it never really goes away. It’s often said that if you aren’t
having your research papers rejected very often then you aren’t aiming high
enough—whether that’s true across all disciplines, I’m not sure, but it sounds
as if it ought to be. It’s never easy because the rejection either comes with little
more than a brief note from the editorial office saying something bland about
just not having enough space, or else it’s accompanied by reviewer comments
picking holes in your meticulous submission.1 I don’t think there’s a universal
way to deal with this sort of disappointment and probably everyone in the
business has their own personal strategy. The sadness, annoyance, and dented
self-esteem usually fade quite quickly, so one approach is just to switch off
work and do something enjoyable that evening to distract yourself. Definitely
try not to get angry and paranoid, and don’t slip into thinking that you can
guess the names of your anonymous reviewers (people have investigated this
and found that the guesses are nearly always wrong). With research papers
it’s best to start thinking as soon as possible about the next place to try, so
that you can refocus on something more positive and regain the affection
that you originally had for your submission. Also, any decent-quality research
should get accepted somewhere in the end, so it’s just a matter of picking
publishers who might be interested and sending it around. The rejections that
1 Or, worst of all, reviewer comments that say it’s excellent, but the editor rejects it all the same—if they
were always going to turn it down, why waste everyone’s time by bothering reviewers about it in the first
place?
are generally harder to deal with are those for funding applications, where
there is sometimes no obvious alternative option and where you may have to
go back to the drawing board.2 In my field, these arrive more often than not
late on a Friday afternoon, so you just have to accept that you’ll have a gloomy
weekend and try your best not to take it out on your friends and relatives. By
Monday, things are usually a bit brighter.
With persistence, a thick skin will develop, although it takes a while and,
in the end, no one’s completely invulnerable. It’s also never pleasant having
to do your duty and inform your co-authors or co-applicants about the lack
of success. However, you’ll get sympathy from anyone who’s been in the field
for a while, as they’ll have gone through the same experience countless times.
Finally, if you do find yourself feeling persistently damaged by rejections, you
might need to have a think about the career choice, as it very much goes with
the territory—although, again, it’s no different really to many other jobs that
involve touting around and pitching for business.
2 Although if the funder is interested enough in your rejected proposal to offer a follow-up conversation,
do take them up on it. There are plenty of stories of people being allowed to resubmit and being successful
on a second attempt following one of these discussions.
bit more advice and support on the matter than there was when I began my
career, regardless of what you enter as your search term.
The relatively recent terms ‘neurodiversity’ and ‘differently abled’ are help-
ful, I think, and particularly apt for this cluster of characteristics or expe-
riences that might be called autism or autism spectrum. What I mean by
this is that the cluster includes skills that can be very valuable for research
work as well as disabilities that present challenges for other elements of the
career. An eighteenth-century scientist like Cavendish could be extremely
shy without too much impact on what he wanted to achieve, because he was
wealthy enough to work alone at home with his employed assistants (with
whom he preferred to communicate via written notes)—this helped him
build working relationships with fellow researchers at the Royal Society until
his formidable intelligence became more widely appreciated and he could be
accepted for who he was. However, he clearly found the collegiate researcher
lifestyle a strain, as he was sometimes observed hovering outside the door of
scientific meetings, summoning up the courage to go in.3 A lot of research
nowadays is straightforwardly meritocratic and dependent on carrying out
studies and communicating output, neither of which presents any particular
challenge. On the contrary, they may be well suited to those who are able
to focus on work that might frustrate others. Building up networks needn’t
be problematic either because they tend to be based around concrete com-
mon interests like research questions and resources, rather than social skills.
However, there’s still a certain element of socializing inherent in the institu-
tions and communities that accommodate research nowadays, and this can
be more challenging. If you find that uncomfortable, as many do, it’s worth
anticipating the issue and figuring out a way through, as there should be no
need for it to affect your career—and it would be a shame if it did, bearing in
mind the other advantages you may have.
Having lived with this cluster, whatever you want to call it, since before
it was properly recognized or before there were support resources, I wish
I had received useful advice. However, the problem of growing up with an
unnamed condition is that you become an accretion of all the coping mech-
anisms you’ve built up over the years without being aware of them, not all of
which are necessarily helpful. And then you come to a rather late realization
and feel that it would have been nice to have known earlier and to have had
that chance to anticipate, rather than react to, difficulties. Personally, I would
suggest viewing social events in much the same way as rejection letters—
just a necessary component of the career that’s challenging (for some) but
3 That, and running away to avoid encounters with women he didn’t know.
that has to be faced, nonetheless. Others coming to the field more recently
and with more self-knowledge are likely to be better placed to advise, and
(thankfully) there does seem to be more practical support than there used to
be, so hopefully you’ll have an easier time of things. I similarly have nothing
much to say about how on earth the cluster fits in with delivering supervision
and leadership, when these become part of your role, except that I think that
workable day-to-day leadership might be less about the clichés of charisma
and social ease, and more about providing consistency and support/advice
that’s as altruistic as possible. That, and trying not to take things too
seriously.
where you can ignore the people you don’t get on with.⁴ However, your
networks will tend to generate informal expectations and extra work, so it’s
worth being prepared. I remember hearing a professional author talking once
about a ‘favour bank’ in his work, which I think often underpins research
communities. The idea is that if you do help someone out with something,
it’s reasonable to expect some help from them in the future, so you might
well go the extra mile in order to have that favour ‘banked’. Similarly, if you
ask a favour of someone and they kindly help, you've placed yourself under
an informal obligation to them that they can draw on at a future date.
It’s nothing more complicated than that, although perhaps a bit more transac-
tional than an informal friendship. You’ll find, of course, along the way that
there are people who don’t play fair—that is, who are known for not help-
ing out much and yet still expect people to collaborate with them. The main
thing is just to know this and what to expect. Thankfully there aren’t too many
non-players, and most colleagues are pleasant and helpful when they can be
(everyone’s busy). I think this helpfulness is likely to become more common
with time, as collaborative networks are an increasingly important feature of
research, and the natural selection process is likely to favour those who are
known to play fair.
Another common issue that can cause friction in academic communities
is attitudes to deadlines. Someone older and wiser once shared their theory
with me that attitude to timekeeping was the only absolute compatibility fac-
tor in romantic relationships—that the parties either had to be both inclined
to lateness, or both punctual; any other personality trait could differ and still
be manageable, but not that particular one. I suspect that’s an exaggeration,
but there’s some truth in it, and it applies to academic life as well—although
you have less choice over who you work with. Research careers do tend
to involve collaboration and collaborations do tend to involve deadlines,
whether these are for funding applications, submission of material for publi-
cation, preparation of slides in advance for presentations, or simply turning
up for meetings. The issue of last-minute, tight-deadline funding applica-
tions was discussed in Chapter 16 as a courtesy issue, but some people do
seem to operate that way and if you’re more of a plan-ahead type of person
you might find this something of a strain. Alternatively, you might be some-
one who works better to quite a short deadline and find yourself frustrated
with gradual, longer-term planning activities. Ultimately, however, you’re
unlikely to change a colleague’s attitudes or behaviour on this—either you
⁴ If not, have a think about whether you’re in the right place; it needn’t be this way.
work around the difference, modifying your expectations, or else you avoid
each other—I’m not sure there’s a middle way.
have to be written but nothing gets circulated because the researcher never
feels it’s quite good enough, or there’s always some more checking to do. This
can be paralysing for a career and very difficult for all concerned, as well
as being something that may not show up for a while. There’s sometimes a
misconception that obsession is a desirable trait in a researcher. As described
earlier, it’s more the ability to tolerate and enjoy repetitive, detailed tasks that’s
advantageous—not a dependence on routine or perfectionism. The reason
all of this is relevant here is that any publication on your CV shows an abil-
ity not only to write, but also to let go of and submit, a manuscript (which
may not be perfect but is 'good enough'), as well as to see it through the
process, responding to reviewers' comments, and resubmitting if needed. There-
fore, having one or two first-author publications is quite a good sign to an
employer that you not only have an aptitude for carrying out a research
project, but also show potential for progression in the career that fol-
lows. Importantly, if you do find yourself struggling with a perfectionism or
prevarication that hinders your productivity, then you need to seek help and
advice.
Beyond identifying a supervisor and trying to find work in your area
of interest, I think the remaining steps are likely to be particular to your
chosen research field. For some fields, for example, it may be helpful
to study for a Master’s degree or other qualification to gain additional
discipline-specific training. For some, it might be advisable to gain expe-
rience as an employed researcher prior to seeking a doctoral studentship;
for others, early doctoral training may be the norm. Given the time and
effort (and funding) required for a doctorate, it’s probably a step to take
when you’re fairly confident that you want to pursue a research career,
although the degree needn’t constrain you if you change your mind by
the end of it, as many do—there are generally plenty of other job oppor-
tunities. For example, some researchers enjoy working in their field but
prefer to move into more management-oriented positions, rather than
the traditional academic posts with all the attendant pressures of univer-
sity expectations and metrics. And research projects nowadays, particu-
larly the large ones, often need effective management considerably more
than they need senior academics. Alternatively, some people find them-
selves preferring teaching to research and may modify their post-doctoral
career accordingly. I suspect that the opportunities will broaden further as
time goes on and it’s possible that university lecturers and professors may
become antiquated posts in the future as research careers become focused
on skills and experience rather than qualifications and titles. Time will
tell . . .
Conclusions from books that have attempted very broad historical sweeps
are varied, but interesting as contexts for the way research has evolved. Yuval
Noah Harari, in Sapiens: A Brief History of Humankind (see Chapter 3), out-
lines a series of thoughts about the history of our species; these include a
suggestion that what we like to call our ‘progress’ has not necessarily been
accompanied by greater species well-being, and has certainly come at con-
siderable cost to other species, and to the world in general. Bill Bryson, in
A Short History of Nearly Everything, draws similar judgements about our
impact as a species, illustrating these with the near simultaneity of the start
of the Enlightenment in London (the ideas being put together for New-
ton’s Principia) and the extinction of the dodo in Mauritius. Andrew Marr
in his History of the World, on the other hand, contrasts the enormous
advances and transformations arising from research with the fact that we
still haven’t figured out a political system that works. John Gribbin, in Sci-
ence: a History, discusses research shaping our view of ourselves, starting
with the time 500 years ago when the Earth was believed to be the cen-
tre of the known Universe, and Man a divinely-chosen species, and ending
today with the Sun as an ordinary star in a suburb of an ordinary galaxy,
and Homo sapiens as a species containing the same elements that are most
common across the Universe as a whole (with no major differences chem-
ically from any other life form). And the excellent 20-minute history of the
entire world, I guess,⁵ having taken us up to the present, with the Earth in its
troubled state of pollution and inequality, decides to finish off with a com-
ment on runaway technology that seems to have lost any sense of purpose:
‘“Let’s invent a thing inventor”, said the thing inventor inventor, after being
invented by a thing inventor; that’s pretty cool; by the way, where the hell
are we?’
In preparing examples to illustrate research principles and practice for this
book, I deliberately chose to look back quite far in history. There were various
reasons for this:
⁵ https://www.youtube.com/watch?v=xuCn8ux2gbs
4. As any good historian will tell you, it’s risky to write about anything too
recent because of the lack of perspective.
Having said that, I think a concluding chapter does allow some licence for
reflection on more recent events and how they reflect or change some of the
principles we’ve been discussing.
To give some context, at the time this book is being finished it’s the middle of
2022 and London, my city, has emerged from a sizeable wave of omicron-
variant COVID-19 infections. Hospital admissions with COVID-19 have
been high but not as severe as the delta-variant ‘second wave’ that we saw
in early 2021 when few people were vaccinated, and large numbers were very
ill or dying. The problem recently has been more the level of staff absence
because of infection or contact with infection, in healthcare as well as other
sectors, so it’s still not a good time to need medical attention and many people
are continuing to lie low and self-isolate. We’ve been living now with the dif-
ferent waves of the COVID-19 pandemic for over two years, although there’s
a cautious hope that we might have passed the worst of it. This is exactly why
it’s not a good idea to write about recent history⁶ because it could all look very
foolish and naïve by the time this is published, let alone by the time you’re
reading some battered old copy in a second-hand bookshop.⁷ All the same,
from this particular perspective, the experiences of the COVID-19 pandemic
illustrate a few of the principles we have discussed.
On the scientific side, the boundaries of our knowledge are clear and, with
all the world’s expertise at our disposal, there are still important challenges
that may, for all we know, be unsurmountable. At a basic level, I think we
still don’t fully understand the route of transmission. When the pandemic
began, there were concerns about both airborne and contact transmission, so
we were advised both to wear masks and wash hands. Two years on, airborne
transmission is more of a focus, although I haven’t heard of anyone disproving
the contact route and I know people who still wash supermarket vegetables
and wipe down other items. Not that there’s any problem with this; it’s good, if
perhaps excessive, general hygiene and it might reduce the risk of all the other
infections knocking around. Also, it’s quite difficult to imagine how actual
⁶ And I’m only going to write about the COVID-19 pandemic as an issue here; I’m not going to attempt
to cover current conflicts and the world economy, which are very much still in flux.
⁷ Or whatever the digital equivalent is.
this was well meaning: many researchers found themselves unable to carry
on their usual work in the initial lockdown periods in the spring/summer
of 2020 (e.g. because of laboratory closures) and yet felt that they ought to
contribute. Unfortunately, a lot of people suddenly rediscovered themselves
as epidemiologists and public health experts⁸ and there was consequently a
proliferation of rather badly designed online surveys doing the rounds in
2020, most of which I suspect will remain unpublished. Journals were also
swamped with COVID-19 papers of sub-optimal quality, which made life
hard for those trying to submit more robust data. However, as mentioned,
they were (at least) well-meaning attempts, and it does seem to be dying down
now as everyone gets back to their day job.
‘So-Called Experts’
⁸ Much to the bemusement of those of us who had worked in a discipline that had always felt a little out
of the limelight. Although, incidentally, if epidemiologists are feeling aggrieved that other self-appointed
experts have been taking the limelight, since when did it become appropriate to use social media as a
vehicle for ad hoc scientific peer review? And how is it in any way supportive for the next generation of
researchers to have their papers savaged publicly on social media platforms by epidemiologists in senior
and respected positions? Some decency please! You know who you are . . .
⁹ For example, Kahn-Harris, K. (2018). Denial: the unspeakable truth. Notting Hill Editions Ltd., Kendal.
1⁰ Hence the term ‘denialism’ itself has become pejorative and a sensitive issue with those, for example,
who call themselves ‘climate change sceptics’ and would get very angry at being called deniers.
Returning to more concrete matters, the pandemic may well have important
consequences for the way research is carried out and therefore for the life
of the researcher. In March 2020, the group I work with relocated overnight
from university offices to everyone’s homes. Luckily, because we work with
data that can be remotely accessed, the research could continue, unlike a
lot of colleagues' studies that had to be paused. We therefore transitioned
to remote working and were back in that position again in early 2022, hav-
ing had only about six months of part-time office-based activity since the
previous summer. This is obviously similar to the experiences of many
people in other lines of work, and questions at the moment revolve around
whether we’ll ever return to the way things were in 2019. If new opportu-
nities are embraced, then it’s possible that the nature of a research group
could change fundamentally—no longer confined to a set of university build-
ings but a wider virtual network. And once wider virtual networks become
normalized then the nature of research-hosting universities (or any insti-
tutions) may need to be re-thought. For example, the idea of a highly
distributed national or international entity hosting research and employing
multi-national researchers is a lot more feasible now than it was in the days
when face-to-face meetings were the norm and when any virtual meeting
was felt to be decidedly suboptimal (and nearly always subject to computer
crashes, audio-visual problems, and such-like). As discussed in Chapter 14,
the age of the ‘virtual conference’ has also definitely begun and there will
need to be some thought put into what benefits were conferred by physical
conferences as these re-emerge, possibly in hybrid virtual-physical formats.
As a broader conclusion, perhaps the most important lesson to be drawn
from the COVID-19 pandemic is life’s unpredictability. In 2019 it would have
been inconceivable that so much research would have to be paused, that so
many other research communities would have been enabled to work virtually
so rapidly, or that epidemiology, public health, virology, and immunology
would have suddenly become such important topics. However, an ever-sceptical
research community shouldn't really be surprised by these things
because, in our clearer moments, we should be remembering that all assump-
tions are best kept at arm’s length and open to question. The context in which
research has been supported and carried out has changed immeasurably,
so there’s no reason to suppose that anything we take for granted now will
be static—universities as hosting institutions, current funding mechanisms,
peer-review as a means of vetting publications, the doctoral degree as a rite
of passage for an academic career, or the very idea of research groups being physically co-located.
Although this book is about research, I hope I have made it clear that it is not an academic text
itself, and that it has never had any aspirations to be so. For the historical elements, I have not
done what every good historian should do and sought out original source material. Instead,
I have taken what I believe and hope to be reasonably uncontested (and often well-known)
historical facts and biographical material and have assembled them to suit my purposes—that
is, to illustrate the particular points I am making. It is also clearly not intended to be a
comprehensive history of science or any other type of research, as you will find plenty of examples
of important figures who are mentioned only in passing, or not at all.
Where I happened to pick up controversy or uncertainty around a particular historical or
biographical fact, I have tried my best to imply this in the language used. If there are any errors
or over-simplifications, I do apologize, although I don’t believe that any of my arguments rely
heavily on any one particular fact, so I hope that the conclusions still stand. However, do let
me know if you spot anything demonstrably incorrect in what I claim—I’m very happy to be
instructed by those with more expertise. Likewise, I have tried to indicate as clearly as possible
where I am stating a personal opinion or impression, and again, while I hope you can follow
the line of thinking, you’re very welcome to disagree.
In conceiving this book, I was strongly influenced by a number of other authors
who have taken relatively broad, light-touch sweeps across history or other fields and who have
therefore drawn their conclusions (as I have) from overviews rather than detail. Of particular
help when I was preparing this book were Science: A History by John Gribbin (Penguin Books,
London, 2002), Philosophy of Science: A Very Short Introduction by Samir Okasha (Oxford
University Press, Oxford, 2002), and A Concise History of Mathematics, 4th edn. by Dirk J. Struik
(Dover Publications Inc., New York, 1987). I believe the idea for the book germinated a long
time ago when reading A Short History of Nearly Everything by Bill Bryson (Transworld
Publishers, London, 2003) and cross-disciplinary commonalities were informed by In Search of
England by Michael Wood (Penguin Books, London, 2000), A History of the World by Andrew
Marr (Macmillan, London, 2012), The Story of Art by E. H. Gombrich (Phaidon Press, London,
1950), and Europe: A History by Norman Davies (Oxford University Press, Oxford, 1996), as
well as by many In Our Time podcasts (BBC Radio 4). Contextual thinking on the
development of our species in relation to research principles owes rather eccentric debts to Sapiens:
A Brief History of Humankind by Yuval Noah Harari (Vintage, London, 2011), Psychology
and Alchemy by C. G. Jung (Routledge, London, 1953), and The Golden Bough: A Study in
Magic and Religion by J. G. Frazer (Macmillan Press Ltd., London, 1922), among others I’ve
forgotten.
I have been fortunate to have had the opportunity to think about broad research theory
through teaching duties over the years and through contributing as an author and co-editor
to Practical Psychiatric Epidemiology (Oxford University Press, Oxford, 2003, 2020). It was a
long time ago, but I believe my first teaching materials on research theory drew strongly on
the overview in Modern Epidemiology, 2nd edn. by Kenneth J. Rothman and Sander Greenland
(Lippincott Williams & Wilkins, Philadelphia, 1998).
Index
Climate change theories 45, 86f, 93–8, 107, 132, 154, 170, 232, 250
Cochrane collaboration 93, 233
Cohort studies 133–5
Colden, Jane 44
Computer science 96, 193, 203
Continental drift theory 34, 153
Copernicus, Nicolaus 28f, 29–30, 70, 88, 146, 177–8, 200, 218, 229
Correns, Karl 179
COVID-19 pandemic 3 n.3, 99, 172, 174 n.10, 190, 247–9, 251
Crick, Francis 223–4
Curie, Marie 81, 202, 224, 238
Curie, Pierre 238
Dalton, John 181–4, 187–8, 201, 227, 237
Dante 27, 28f, 31 n.7
Darwin, Charles 16, 31, 41, 65 n.5, 72–3, 88, 98, 153, 178, 201, 220, 227, 237
Davy, Sir Humphry 123, 154 n.15, 181–4, 188, 227, 237
Denialism 98, 232, 250
Descartes, René 30, 52 n.4, 68–9, 74, 151, 162 n.2
Dickens, Charles 4, 5, 103–5
Digges, Leonard 177–8
Digges, Thomas 177–8
DNA discovery 84, 131 n.3, 223–4
Doll, Richard 134
Du Bois, William Edward Burghardt 110–1
Dunning School theory 110
Earth - ageing theories 22, 31, 72, 94, 220
Economics 10, 91–2, 129, 142, 168, 200, 230
Edison, Thomas 220
Einstein, Albert 15, 31, 58–9, 60, 65, 73 n.10, 102, 146, 199, 221 n.5, 227, 237
Electromagnetism 55, 122, 154–5, 178, 188, 221 n.5, 227
Eliot, George 24, 188 n.13
Empiricism 3, 9, 37f, 38, 51f, 68–71, 75, 102–6, 162, 164, 166, 203
Engels, Friedrich 109
Epidemiology 3–12, 78–85, 107, 133–9, 165–70, 249
Ethics in research 48, 123, 128, 133, 139–41, 206
Euclid 18, 22, 146 n.7
Euler, Leonhard 64
Faraday, Michael 33, 154, 182f, 184, 227
Farr, William 6, 10, 12
Fermat, Pierre de 66
Fermat’s last theorem 34, 66, 103
Fibonacci See Leonardo of Pisa
Fisher, Ronald 125 n.9
Fleming, Alexander 2
Franklin, Benjamin 89f, 100, 201, 228
Franklin, Rosalind 223–4, 237
Frazer, James 24–6, 71–2
Fresnel, Augustin 59, 88
Galileo Galilei 28f, 29, 88, 98, 144, 148–9, 178, 228, 229
Gauss, Carl Friedrich 64–6, 146, 220
Geology 22, 31, 44–6, 75, 93–5, 96, 107, 153, 180
Germain, Marie-Sophie 66, 202
Gibbon, Edward 62–3, 108, 109
Gilbert, William 55
Glaser, Barney 107, 108
Gordon, George Stuart 203
Grahame, Kenneth 203 n.9
Gribbin, John 246
Gutenberg, Johannes 144 n.3
Halley, Edmund 56, 147, 148, 219
Hamilton, Richard 67
Harari, Yuval Noah 32, 35, 36, 246
Harvey, William 150
Haworth, Walter Norman 124 n.8
Helmholtz, Hermann von 220
Herschel, Caroline 180f, 181, 202, 238
Herschel, William 146, 180–1, 238
Hertz, Heinrich Rudolf 155
Heyerdahl, Thor 15, 35
Hieroglyphics 109f, 110, 152f, 153, 220
Hilbert, David 65, 66–7, 191
Hipparchus 147, 148
Hippocrates 6, 12, 31
History/historiography 1, 10, 17 n.2, 22–3, 32, 35, 50, 61–3, 75, 108–12, 128–9, 151–3, 159, 162, 166, 168, 170, 203, 223, 228 n.11, 231, 237, 240, 246–7, 249
Hittite empire 17, 109–10
MMR vaccine controversy 233–4
Monetarism theory 92, 129, 230
Montagu, Lady Mary Wortley 49
Morse, Samuel 154
Muhammed ibn Musa-al Khwarizmi 19
Napier, John 145–6, 150 n.12
Natural selection theory for evolution 16, 17, 41, 73, 88, 130, 153, 158, 178, 227, 237
Neurodiversity 240–2
Neutrino theory 34
Newton, Isaac 16, 31, 42f, 52 n.4, 53, 56–9, 62, 64, 65 n.5, 71, 72, 76, 88, 98, 145, 146, 154, 159, 162 n.2, 182f, 200, 201, 218–19, 220, 221 n.5, 224, 228, 237, 246
Nightingale, Florence 6
Occam’s razor (see also William of Ockham) 69–70, 71
Ogunniyi, Adesola 84
Omar Khayyam 19
Ørsted, Hans Christian 154, 201
Osuntokun, Oluwakayode 84
Oxygen, discovery 16, 118–19, 220
Palaeontology 23, 45–7, 94, 96, 152, 221
Paradigm theory 90–2
Peer review 98–9, 163–4, 171–3, 185, 208–10, 212, 214, 251
Perelman, Grigori 67
Periodic table of elements 34, 120
Perraudin, Jean-Pierre 94
Phlogiston theory 30, 120 n.5
Physics 22, 41, 55, 57–8, 71, 76–7, 88, 115–17, 122, 143–4, 150, 153, 154–6, 179, 181–4, 188, 200–1, 202, 205f, 221 n.5, 227
Plato 2, 18, 32, 39, 54, 68, 188 n.14, 198, 203
Politics and political science 2, 10–12, 20, 21f, 36, 37, 44, 58, 64, 70f, 72, 89f, 90, 91–2, 108–9, 110, 121, 128–32, 139 n.9, 144, 152f, 178, 182f, 189f, 198, 203, 215, 227, 228–35, 246, 249
Popov, Alexander Stepanovich 155
Popper, Karl 50–2, 53, 54, 59, 73, 76, 79, 88, 90, 91, 99, 105, 170
Price, Richard 89
Priestley, Joseph 16, 118–19, 121f, 123, 220, 229
Ptolemy, Claudius 27–8, 31, 70, 148
Pythagoras 17, 18
Qin Jiushao 20
Quantum theory 59, 71, 73 n.10, 76, 150, 188
Quiller-Couch, Arthur 203
Randomised controlled trials 122–6, 132, 133, 135, 169
Rationalism 3, 68–9, 74, 75, 92 n.3, 102–5, 112, 151, 162, 164, 203
Ray, John 227
Reconstruction, post-Civil War 110, 111f
Relativity theory, general 58 n.7, 59, 65, 71, 73 n.10, 146, 150
Relativity theory, special 31, 58, 59, 60, 76
Reviews, systematic/narrative 92–3, 103, 107, 175, 184–6, 189, 233
Richter, Jean 147
Riemann, Bernhard 65, 66–7, 146
Rutherford, Ernest 205f
Sayce, Archibald 109–10
Scheele, Carl 16, 118–19, 220
Scurvy treatment 2, 123–5, 126
Shakespeare, William 75, 203, 219, 231, 252
Shelley, Mary 10
Shelley, Percy Bysshe 75
Shen Kuo 44, 45, 94 n.5, 228
Smallpox vaccination development 47, 48–50, 123, 124, 178 n.2
Smith, Adam 200
Smith, William 44–5
Smoking and cancer 79–84, 96–7, 99, 125 n.9, 132, 133–5
Snow, John 3–12, 48, 53, 78, 107, 138f, 188, 235, 248
Social Science 24–5, 75, 91, 107–8, 110, 129 n.1, 165, 169, 240
Socrates 18, 32, 35, 54, 60, 69, 99 n.7, 102, 105, 117, 162, 165, 170, 198, 227, 230, 235, 248
Strauss, Anselm 107–8
Sun- vs. Earth-centred planetary system 22, 27–30, 55, 70–1, 200 n.6, 218, 246
Surveys 96, 106–7, 249
Szent-Györgyi, Albert 124 n.8
Tesla, Nikola 155, 184
Thermodynamics 220–1
Thomson, George Paget 238
Thomson, Joseph John 238
Thomson, William (Lord Kelvin) 220
Tolstoy, Leo 61, 170, 237
Tschermak, Erich von 179
Tuskegee Syphilis Study 139–41, 234
Universe origins 31, 71, 73, 74, 150, 246
Vainio, Edvard August 44
Venetz, Ignaz 94
Verdict of causation 90, 100, 105, 125, 185
Volta, Alessandro 120, 150, 201
Voltaire 58, 62, 108, 201 n.7
Vostok station, Antarctica 85, 86f, 96, 107
Vries, Hugo de 179
Wallace, Alfred 16, 41, 65 n.5, 88, 178, 220, 227
Water, discovery of its molecular structure 120
Water, its boiling point 53, 116, 117f
Waterston, John 220–1
Watson, James 223–4
Watt, James 120, 200
Westinghouse, George 220
Wiles, Andrew 66
Wilkins, Maurice 223–4
William of Malmesbury 61
William of Ockham 69–70, 73
Willughby, Francis 227
Winckler, Hugo 110
Zeno 17–18
Zoology 22, 39–44, 46, 47, 61, 75, 107
Zu Chongzhi 20