
Tuesday, 8 August 2017

Reviewing peer review.

Fleming's discovery of penicillin couldn't get published today. That's a huge problem

Updated by Julia Belluz on December 14, 2015, 7:00 a.m. ET


After toiling away for months on revisions for a single academic paper, Columbia University economist Chris Blattman started wondering about the direction of his work.



He had submitted the paper in question to one of the top economics journals earlier this year. In return, he had gotten back nearly 30 pages of single-spaced comments from peer reviewers (experts in the field who provide feedback on a scientific manuscript). Addressing them all had taken two or three days a week over three months.



So Blattman asked himself some simple but profound questions: Was all this work on a single study really worth it? Was it best to spend months revising one study — or could that time have been better spent on publishing multiple smaller studies? He wrote about the conundrum on his blog:



Some days my field feels like an arms race to make each experiment more thorough and technically impressive, with more and more attention to formal theories, structural models, pre-analysis plans, and (most recently) multiple hypothesis testing. The list goes on. In part we push because we want to do better work. Plus, how else to get published in the best places and earn the respect of your peers?



It seems to me that all of this is pushing social scientists to produce better quality experiments and more accurate answers. But it’s also raising the size and cost and time of any one experiment.



Over the phone, Blattman explained to me that in the age of "big data," high-quality scientific journals are increasingly pushing for large-scale, comprehensive studies, usually involving hundreds or thousands of participants. And he's now questioning whether a course correction is needed.



Though he can't prove it yet, he suspects social science has made a trade-off: Big, time-consuming studies are coming at the cost of smaller and cheaper studies that, taken together, may be just as valuable and perhaps more applicable (or what researchers call "generalizable") to more people and places.



Do we need more "small" science?



Over in Switzerland, Alzheimer's researcher Lawrence Rajendran has been asking himself a similar question: Should science be smaller again? Rajendran, who heads a laboratory at the University of Zurich, recently founded a journal called Matters. Set to launch in early 2016, the journal aims to publish "the true unit of science" — the observation.



Rajendran notes that Alexander Fleming’s simple observation that penicillin mold seemed to kill off bacteria in his petri dish could never be published today, even though it led to the discovery of lifesaving antibiotics. That's because today's journals want lots of data and positive results that fit into an overarching narrative (what Rajendran calls "storytelling") before they'll publish a given study.



"You would have to solve the structure of penicillin or find the mechanism of action," he added.



But research is complex, and scientific findings may not fit into a neat story — at least not right away. So Rajendran and the staff at Matters hope scientists will be able to share insights in this journal that they may not have been able to publish otherwise. He also thinks that if researchers have a place to explore preliminary observations, they may not feel as much pressure to exaggerate their findings in order to add all-important publications to their CVs.



Smaller isn't always better



Science has many structural problems to grapple with right now: The peer review system doesn't function all that well, many studies are poorly designed so their answers are unreliable, and replications of experiments are difficult to execute and very often fail. Researchers have estimated that about $200 billion — or about 85 percent of global spending on research — is routinely wasted on poorly designed and redundant studies.



A big part of the reason science funders started emphasizing large-scale studies is that they were trying to avoid common problems with smaller studies: sample sizes can be too small to be representative, and the results often fail to reach statistical significance.


It's not clear that emphasizing smaller-scale studies and observations will solve these problems. In fact, publishing more observations may just add to the noise. But as Rajendran says, it's very possible that important insights are being lost in the push toward large-scale science. "Science can be small, big, cure diseases," he said. "It can just be curiosity-driven. Academic journals shouldn't block the communication of small scientific observations."

Why no 'brave new world'

The truth about technology’s greatest myth




Many optimists believe that technology can transform society, whether it’s the internet or the latest phone. But as Tom Chatfield argues in his final column for BBC Future, the truth about our relationship with technology is far more interesting.

Lecturing in late 1968, the American sociologist Harvey Sacks addressed one of the central failures of technocratic dreams. We have always hoped, Sacks argued, that “if only we introduced some fantastic new communication machine the world will be transformed.” Instead, though, even our best and brightest devices must be accommodated within existing practices and assumptions in a “world that has whatever organisation it already has.”
As an example, Sacks considered the telephone. Introduced into American homes during the last quarter of the 19th Century, instantaneous conversation across hundreds or even thousands of miles seemed close to a miracle. For Scientific American, editorializing in 1880, this heralded “nothing less than a new organization of society – a state of things in which every individual, however secluded, will have at call every other individual in the community, to the saving of no end of social and business complications…”
Yet the story that unfolded was not so much “a new organization of society” as the pouring of existing human behaviour into fresh moulds: our goodness, hope and charity; our greed, pride and lust. New technology didn’t bring an overnight revolution. Instead, there was strenuous effort to fit novelty into existing norms.
The most ferocious early debates around the telephone, for example, concerned not social revolution, but decency and deception. What did access to unseen interlocutors imply for the sanctity of the home – or for gullible or corruptible members of the household, such as women or servants? Was it disgraceful to chat while improperly dressed? Such were the daily concerns of 19th-century telephonics, matched by phone companies’ attempts to assure subscribers of their propriety.
As Sacks also put it, each new object is above all “the occasion for seeing again what we can see anywhere” – and perhaps the best aim for any writing about technology is to treat novelty not as an end, but as an opportunity to re-scrutinize ourselves.
I’ve been writing this fortnightly column since the start of 2012, and in the last two years have watched new devices and services become part of similar negotiations. By any measure, ours is an age preoccupied with novelty. Too often, though, it offers a road not to insight, but to a startling blindness about our own norms and assumptions.
Take the litany of numbers within which every commentary on modern tech is couched. Come the end of 2014, there will be more mobile phones in the world than people. We have moved from the launch of modern tablet computing in mid-2011 to tablets likely accounting for over half the global market in personal computers in 2014. Ninety per cent of the world’s data was created in the last two years. Today’s phones are more powerful than yesterday’s supercomputers. Today’s software is better than us at everything from chess to quiz shows. And so on.
Singularity myth
It’s a story in which both machines and their capabilities increase for ever, dragging us along for the exponential ride. Perhaps the defining geek myth of our age, The Singularity, anticipates a future in which machines cross an event horizon beyond which their intellects exceed our own. And while most people remain untouched by such faith, the apocalyptic eagerness it embodies is all too familiar. Surely it’s only a matter of time – the theory goes – before we finally escape, augment or otherwise overcome our natures and emerge into some new phase of the human story.
Or not. Because – while technological and scientific progress is indeed an astonishing thing – its relationship with human progress is more aspiration than established fact. Whether we like it or not, acceleration cannot continue indefinitely. We may long to escape flesh and history, but the selves we are busy reinventing come equipped with the same old gamut of beauties, perversities and all-too-human failings. In time, our dreams of technology departing mere actuality – and taking us along for the ride – will come to seem as quaint as Victorian gentlemen donning evening dress to make a phone call.
This is one reason why, over the last two years, I’ve devoted a fair share of columns to the friction between the stories we tell about tech and its actual unfolding in our lives. From the surreptitious erosion of digital history to the dumbness of “smart” tech, via email’s dirty secrets and the importance of forgetfulness, I love exploring the tensions between digital tools and analogue selves – not because technology is to be dismissed or deplored, but because it remains as mired in history, politics and human frailty as everything else we touch.
This will be the last regular Life:Connected column I write for BBC Future. Instead, I’ll be writing a book about one of my obsessions: attention, and how its quantification and sale have become a battleground for 21st Century selves. I will, however, continue examining technology’s impact here and elsewhere – and asking what it means to watch ancient preoccupations poured into fresh, astounding moulds.
On which note: what do you think is most ripe for abandonment around technology today? Which habit will come to be seen by future generations as quaint – our equivalent of putting on bow ties for telephones? If you want to stay in touch, tweet me at @TomChatfield and let me know what you think.