
Tuesday, 8 August 2017

Reviewing peer review.

Fleming's discovery of penicillin couldn't get published today. That's a huge problem

Updated by Julia Belluz on December 14, 2015, 7:00 a.m. ET


After toiling away for months on revisions for a single academic paper, Columbia University economist Chris Blattman started wondering about the direction of his work.



He had submitted the paper in question to one of the top economics journals earlier this year. In return, he had gotten back nearly 30 pages of single-spaced comments from peer reviewers (experts in the field who provide feedback on a scientific manuscript). It had taken two or three days a week over three months to address them all.



So Blattman asked himself some simple but profound questions: Was all this work on a single study really worth it? Was it best to spend months revising one study — or could that time have been better spent on publishing multiple smaller studies? He wrote about the conundrum on his blog:



Some days my field feels like an arms race to make each experiment more thorough and technically impressive, with more and more attention to formal theories, structural models, pre-analysis plans, and (most recently) multiple hypothesis testing. The list goes on. In part we push because we want to do better work. Plus, how else to get published in the best places and earn the respect of your peers?



It seems to me that all of this is pushing social scientists to produce better quality experiments and more accurate answers. But it’s also raising the size and cost and time of any one experiment.



Over the phone, Blattman explained to me that in the age of "big data," high-quality scientific journals are increasingly pushing for large-scale, comprehensive studies, usually involving hundreds or thousands of participants. And he's now questioning whether a course correction is needed.



Though he can't prove it yet, he suspects social science has made a trade-off: Big, time-consuming studies are coming at the cost of smaller and cheaper studies that, taken together, may be just as valuable and perhaps more applicable (or what researchers call "generalizable") to more people and places.



Do we need more "small" science?



Over in Switzerland, Alzheimer's researcher Lawrence Rajendran has been asking himself a similar question: Should science be smaller again? Rajendran, who heads a laboratory at the University of Zurich, recently founded a journal called Matters. Set to launch in early 2016, the journal aims to publish "the true unit of science" — the observation.



Rajendran notes that Alexander Fleming’s simple observation that penicillin mold seemed to kill off bacteria in his petri dish could never be published today, even though it led to the discovery of lifesaving antibiotics. That's because today's journals want lots of data and positive results that fit into an overarching narrative (what Rajendran calls "storytelling") before they'll publish a given study.



"You would have to solve the structure of penicillin or find the mechanism of action," he added.



But research is complex, and scientific findings may not fit into a neat story — at least not right away. So Rajendran and the staff at Matters hope scientists will be able to share insights in this journal that they may not have been able to publish otherwise. He also thinks that if researchers have a place to explore preliminary observations, they may not feel as much pressure to exaggerate their findings in order to add all-important publications to their CVs.



Smaller isn't always better



Science has many structural problems to grapple with right now: The peer review system doesn't function all that well, many studies are poorly designed so their answers are unreliable, and replications of experiments are difficult to execute and very often fail. Researchers have estimated that about $200 billion — or about 85 percent of global spending on research — is routinely wasted on poorly designed and redundant studies.



A big part of the reason science funders started emphasizing large-scale studies is that they were trying to avoid common problems with smaller studies: results that fail to reach statistical significance, and sample sizes too small to be representative.
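The statistical worry here is concrete: the uncertainty of an estimate shrinks only with the square root of the sample size, so small studies carry wide error bars. A minimal sketch in Python (illustrative numbers only, not figures from any study cited here):

```python
import math

def standard_error(p, n):
    """Standard error of a sample proportion p measured on n participants."""
    return math.sqrt(p * (1 - p) / n)

# Margin of error (roughly 2 standard errors) for a 50/50 outcome
# at several study sizes.
for n in (25, 100, 2500):
    print(f"n={n:>4}: margin of error about ±{2 * standard_error(0.5, n):.1%}")
```

Quadrupling the sample only halves the margin of error, which is one reason funders keep pushing toward ever-larger studies.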


It's not clear that emphasizing smaller-scale studies and observations will solve these problems. In fact, publishing more observations may just add to the noise. But as Rajendran says, it's very possible that important insights are being lost in the push toward large-scale science. "Science can be small, big, cure diseases," he said. "It can just be curiosity-driven. Academic journals shouldn't block the communication of small scientific observations."

Why no 'brave new world'

The truth about technology’s greatest myth




Many optimists believe that technology can transform society, whether it’s the internet or the latest phone. But as Tom Chatfield argues in his final column for BBC Future, the truth about our relationship with technology is far more interesting.

Lecturing in late 1968, the American sociologist Harvey Sacks addressed one of the central failures of technocratic dreams. We have always hoped, Sacks argued, that “if only we introduced some fantastic new communication machine the world will be transformed.” Instead, though, even our best and brightest devices must be accommodated within existing practices and assumptions in a “world that has whatever organisation it already has.”
As an example, Sacks considered the telephone. Introduced into American homes during the last quarter of the 19th Century, instantaneous conversation across hundreds or even thousands of miles seemed close to a miracle. For Scientific American, editorializing in 1880, this heralded “nothing less than a new organization of society – a state of things in which every individual, however secluded, will have at call every other individual in the community, to the saving of no end of social and business complications…”
Yet the story that unfolded was not so much “a new organization of society” as the pouring of existing human behaviour into fresh moulds: our goodness, hope and charity; our greed, pride and lust. New technology didn’t bring an overnight revolution. Instead, there was strenuous effort to fit novelty into existing norms.
The most ferocious early debates around the telephone, for example, concerned not social revolution, but decency and deception. What did access to unseen interlocutors imply for the sanctity of the home – or for gullible or corruptible members of the household, such as women or servants? Was it disgraceful to chat while improperly dressed? Such were the daily concerns of 19th-century telephonics, matched by phone companies’ attempts to assure subscribers of their propriety.
As Sacks also put it, each new object is above all “the occasion for seeing again what we can see anywhere” – and perhaps the best aim for any writing about technology is to treat novelty as not as an end, but as an opportunity to re-scrutinize ourselves.
I’ve been writing this fortnightly column since the start of 2012, and in the last two years have watched new devices and services become part of similar negotiations. By any measure, ours is an age preoccupied with novelty. Too often, though, it offers a road not to insight, but to a startling blindness about our own norms and assumptions.
Take the litany of numbers within which every commentary on modern tech is couched. Come the end of 2014, there will be more mobile phones in the world than people. We have moved from the launch of modern tablet computing in mid-2011 to tablets likely accounting for over half the global market in personal computers in 2014. Ninety per cent of the world’s data was created in the last two years. Today’s phones are more powerful than yesterday’s supercomputers. Today’s software is better than us at everything from chess to quiz shows. And so on.
Singularity myth
It’s a story in which both machines and their capabilities increase for ever, dragging us along for the exponential ride. Perhaps the defining geek myth of our age, The Singularity, anticipates a future in which machines cross an event horizon beyond which their intellects exceed our own. And while most people remain untouched by such faith, the apocalyptic eagerness it embodies is all too familiar. Surely it’s only a matter of time – the theory goes – before we finally escape, augment or otherwise overcome our natures and emerge into some new phase of the human story.
Or not. Because – while technological and scientific progress is indeed an astonishing thing – its relationship with human progress is more aspiration than established fact. Whether we like it or not, acceleration cannot continue indefinitely. We may long to escape flesh and history, but the selves we are busy reinventing come equipped with the same old gamut of beauties, perversities and all-too-human failings. In time, our dreams of technology departing mere actuality – and taking us along for the ride – will come to seem as quaint as Victorian gentlemen donning evening dress to make a phonecall.
This is one reason why, over the last two years, I’ve devoted a fair share of columns to the friction between the stories we tell about tech and its actual unfolding in our lives. From the surreptitious erosion of digital history to the dumbness of “smart” tech, via email’s dirty secrets and the importance of forgetfulness, I love exploring the tensions between digital tools and analogue selves – not because technology is to be dismissed or deplored, but because it remains as mired in history, politics and human frailty as everything else we touch.
This will be the last regular Life:Connected column I write for BBC Future. Instead, I’ll be writing a book about one of my obsessions: attention, and how its quantification and sale have become a battleground for 21st Century selves. I will, however, continue examining technology’s impact here and elsewhere – and asking what it means to watch ancient preoccupations poured into fresh, astounding moulds.
On which note: what do you think is most ripe for abandonment around technology today? Which habit will come to be seen by future generations as quaint – our equivalent of putting on bow ties for telephones? If you want to stay in touch, tweet me at @TomChatfield and let me know what you think.

Monday, 7 August 2017

How loudly should money be allowed to talk?

The U.S. military: midwife to I.S.I.L.? Pros and cons.

Were the ancients right (in a sense) re: the centrality of our planet?

Solar Eclipses Still Inspire Science
Evolution News @DiscoveryCSC

The film and book The Privileged Planet introduced a class of phenomena about the earth that show a curious linkage between the requirements for habitability and opportunities for scientific discovery. The first example involved total solar eclipses. The close match between the sun’s and moon’s apparent diameters that permits total eclipses has also allowed scientists to discover helium, learn the chemical composition of the sun, and confirm Einstein’s theory of relativity.
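The closeness of that match is easy to verify with elementary trigonometry. A quick sketch using approximate published mean radii and distances (round values, not figures from the article):

```python
import math

def angular_diameter_deg(radius_km, distance_km):
    """Apparent angular diameter, in degrees, of a sphere seen from afar."""
    return math.degrees(2 * math.atan(radius_km / distance_km))

# Approximate mean radii and distances.
sun = angular_diameter_deg(696_000, 149_600_000)
moon = angular_diameter_deg(1_737, 384_400)

print(f"Sun:  {sun:.3f} deg")
print(f"Moon: {moon:.3f} deg")
```

Both come out near half a degree, which is why the moon can just barely cover the solar disk while leaving the far fainter corona visible.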

Materialists must believe this linkage is mere coincidence. For example, Tom Metcalfe titles his Live Science article, “Why Total Eclipses Are Total Coincidences.” Nowhere does Metcalfe specifically dismiss the Privileged Planet hypothesis, but he seems to work overtime to pre-empt design by repetition, using the word coincidence nine times, occasionally with strong adjectives for emphasis: sheer coincidence, total coincidence, celestial coincidence. If we add accident of geometry, that’s ten.

“It’s a beautiful coincidence — life has been on Earth for about 400 million years, and we’re living in this little window of time where this is happening, which is pretty amazing,” [Mark] Gallaway told Live Science. 

One of Metcalfe’s arguments for sheer dumb luck is that scientific discoveries made during eclipses are old news. Calling on Mark Gallaway, a U.K. astronomer, for support, he says:

Although some solar eclipses have played an important role in science, such as the 1919 eclipse that helped verify Einstein’s theory of general relativity, these celestial events don’t always hold much scientific interest today, he said.

“Eclipses are one of the most well-examined things in science. We know how they work, and to be honest, we’re just going out there because we like to see eclipses,” Gallaway said.

Metcalfe allows for a couple of little mysteries that remain to be studied, but relegates the big discoveries to long-past historical anecdotes. Is this correct? Are today’s total eclipses just lucky breaks for our entertainment? Is the Privileged Planet argument outdated? The news about the upcoming August 21 eclipse shows otherwise.

An indication of the ongoing scientific value of eclipses can be seen in NASA’s attempt to recruit thousands of "citizen scientists" for the event. The Great American Eclipse will likely be the most-studied total solar eclipse in history. Some 12 million people live within the path of totality, and over half the U.S. population lives within 400 miles of the path, according to GreatAmericanEclipse.org. Having so many observers makes this eclipse a bonanza for scientific observation, and NASA is taking advantage of it with a special website giving people instructions for how they can get involved. Here are just three of the six research projects planned:

GLOBE Observer: What happens in the atmosphere and on Earth’s surface when the Sun’s light is blocked, even temporarily?
Ham/Sci: This project by Virginia Tech and New Jersey Institute of Technology will employ amateur radio enthusiasts to study the ionosphere during the eclipse.
Life Responds (California Academy of Sciences): Many have reported unusual changes in animal behavior during eclipses. This project “will make scientifically-valuable observations of many aspects of this behavior.”
One project involving the public is NASA’s Eclipse Ballooning Project. An infographic shows how students at universities and high schools, from Oregon to South Carolina, will participate in launching 57 high-altitude balloons that will rise to 100,000 feet before, during, and after the eclipse. The balloons, monitored via the Iridium and GPS satellites for location, are equipped to collect multi-spectrum data and transmit it to earth, where it will be live-streamed to scientists and to anyone with Internet access.

Farther up, astronauts on the International Space Station will be able to witness the eclipse three times from orbit. NASA’s eclipse website shows the orbital path. The astronauts will beam down what they see from their high platform. Their vantage point also allows them to monitor the shadow of the moon on the ground.

The NASA eclipse site also lists numerous research projects it is undertaking in “Science from the Ground.” Research teams will take advantage of the eclipse to study the solar corona, the earth’s atmosphere, earth’s outgoing radiation, and more. Here’s a taste of the valuable science that can only be studied during an eclipse:

During the eclipse, a team of scientists led by Paul Bryans at the National Center for Atmospheric Research will sit inside a trailer in Camp Wyoba atop Casper Mountain in Wyoming, and point a specialized instrument at the sun. The instrument is a spectrometer, which collects light from the sun and separates each wavelength of light, measuring their intensity. This particular spectrometer, called the NCAR Airborne Interferometer, will for the first time survey infrared light emitted by the sun’s atmosphere, or corona. Such an experiment can only be conducted from the ground during an eclipse, when the sun’s bright face is blocked, revealing the much fainter corona.

This novel data will help scientists characterize the corona’s complex magnetic field — crucial information for understanding and eventually helping forecast space weather events. The scientists will augment their study by analyzing their results alongside corresponding space-based observations from other instruments aboard NASA’s Solar Dynamics Observatory and the joint NASA/JAXA Hinode.

NASA lists nine smartphone apps the public can download to learn about the eclipse. Eclipse2017.org created another app of its own. Search for “eclipse” in your iPhone or Android app store and you will get dozens of hits.

In addition to NASA, universities are planning eclipse research projects, some recruiting citizen scientists. Here’s an interesting one at the National Solar Observatory, learning something brand new for 2017:

Citizen/CATE (National Solar Observatory): The Citizen Continental-America Telescopic Eclipse (CATE) Experiment will use more than 60 identical telescopes equipped with digital cameras positioned from Oregon to South Carolina to image the solar corona. The project will then splice these images together to show the corona during a 90-minute period, revealing for the first time the plasma dynamics of the inner solar corona.

See also this article from the Seattle Times about CATE. Sandi Doughton features some of the participants in the project, beginning with a story of a father-and-son team from Corvallis stationed atop a peak in the coast range, describing how pumped they are to do well.

A group of scientists will board two WB-57F jets during the eclipse, specially outfitted with high-tech telescopes, to image the corona at much higher resolution than possible from the ground, according to Space.com. During the observations, they also plan to learn about the soil of the planet Mercury, because that planet is difficult to observe except during an eclipse. Here is another research opportunity made possible only during a solar eclipse:

The researchers could also potentially search for vulcanoids — a family of hypothetical asteroids that may lie between Mercury and the sun. The total solar eclipse also provides the perfect opportunity to search for vulcanoids, which are believed to be remnants of the early solar system. Vulcanoids have likely evaded detection due to their small size and the unforgiving glare of the sun. During the eclipse, however, the sun’s bright light will disappear, allowing scientists to look for these elusive objects.

A team in Boulder, Colorado, will use a special radiometer to learn more about earth’s energy system, to provide better data for climate models (Phys.org). This article lists a variety of other research projects taking advantage of the eclipse.

The American Astronomical Society’s Eclipse Task Force is going to use the occasion to figure out how big the sun is. That’s right; the size of our own star is not known as precisely as that of the earth and moon, Sarah Lewin reports in Live Science. “The 2017 Solar Eclipse May Prove the Sun Is Bigger Than We Think,” her surprising headline announces.

In summary, the National Science Foundation says that the 2017 eclipse “offers unique research opportunities” — emphasis on unique. Let this quote respond to Metcalfe’s dismissive claim that scientific research during total eclipses is old news:

“This total solar eclipse across the United States is a unique opportunity in modern times, enabling the entire country to be engaged through modern technology and social media,” said Carrie Black, a program director in NSF’s Division of Atmospheric and Geospace Sciences. “Images and data from as many as millions of people will be collected and analyzed by scientists for years to come.”

“This is a generational event,” agreed Madhulika Guhathakurta, NASA lead scientist for the 2017 Eclipse. “This is going to be the most documented, the most appreciated, eclipse ever.”
We’ve just seen a few of the research opportunities in stellar physics, planetary geophysics, atmospheric science, geomagnetic science, climate science, plasma physics, ecology, animal behavior, space weather, and more — all made possible by the unique “coincidence” of total solar eclipses. The geometry of a total eclipse is also tightly linked to the requirements for habitability, as The Privileged Planet argues, because we have to orbit the right kind of star, at the right distance from the star, with a moon as large as our moon, to exist.

Because these requirements are met here, earth is habitable, and simultaneously meets the requirements for solar eclipses. And since earth is inhabited by sentient beings (not necessarily a logical consequence of habitability alone), we can appreciate solar eclipses and use them to study the nature of everything from plants and animals to the far reaches of the cosmos. “The same narrow circumstances that allow us to exist,” according to the Privileged Planet hypothesis, “also provide us with the best overall setting for making scientific discoveries.”

If eclipses provided the only linkage between habitability and scientific observation, one might allow for the conclusion that they are coincidental. But the authors amass an impressive list of other coincidences, from the solar system to our galaxy to the properties of physics, that all point in the same direction, suggesting “conspiracy” rather than coincidence. That is why co-author Jay Richards begged to differ with the “coincidence” view of all these fortuitous linkages. In The Privileged Planet film, he concludes:

Our argument suggests something completely different. It suggests that the universe was intended, that the universe exists for a purpose, and that purpose isn’t simply for beings like us to exist, but for us to extend ourselves beyond our small and parochial home: to view the universe at large, to discover the universe, and to consider whether, perhaps, that universe points beyond itself.

Where the slippery slope ends and the slippery cliff begins?

Why Does This Evolutionary Biologist Want to Euthanize Handicapped Babies?
Michael Egnor

Evolutionary biologist Jerry Coyne has written a controversial series of posts in which he advocates medical killing for severely handicapped babies. We have replied (here, here, here, here, here, here). Why would anyone advocate such a thing? What would justify deliberately killing a baby — actually using hospitals and doctors and nurses and medical science to kill children?

Coyne gives his rationale:

If you are allowed to abort a fetus that has a severe genetic defect, microcephaly, spina bifida, or so on, then why aren’t you able to euthanize that same fetus just after it’s born?

Of course, the ethics of aborting handicapped babies in the womb is a matter of considerable controversy, and there is by no means a consensus on it. Furthermore, one of the arguments used to support the pro-life position is that abortion, in addition to being intrinsically immoral, devalues all human life, and endangers handicapped children after birth as well. Coyne’s rationale for the medical killing of babies, which is that we allow abortion of these same children in the womb, gives credence to the pro-life argument. Coyne shows very clearly that there is a slippery slope.

Coyne offers another rationale:

After all, newborn babies aren’t aware of death, aren’t nearly as sentient as an older child or adult, and have no rational faculties to make judgments (and if there’s severe mental disability, would never develop such faculties).

Coyne argues, astonishingly, that the vulnerability of handicapped children justifies killing them. He isn’t (yet) advocating killing handicapped adults. His criterion (for now) for killing severely handicapped people is that they are unaware and can’t make decisions for themselves. In Coyne’s moral world, people who lack “rational faculties to make judgements” have less right to life than rational people do. You have a right to life, unless you are handicapped and don’t know what is happening to you. I have to respect Coyne’s candor, if nothing else.

Coyne continues:

It makes little sense to keep alive a suffering child who is doomed to die or suffer life in a vegetative or horribly painful state. After all, doctors and parents face no legal penalty for simply withdrawing care from such newborns, like turning off a respirator, but… we should be allowed, with the parents’ and doctors’ consent, to painlessly end their life with an injection.

Coyne doesn’t understand what “vegetative” means. Vegetative means that the child is unable to experience anything. A “vegetative” child can’t “suffer life in a vegetative or horribly painful state.” The child can’t “suffer” anything.

Furthermore, pain (for people who aren’t “vegetative”) is a common medical situation: the treatment for it is to treat the pain, not to kill the child. The fact is that handicapped children don’t ordinarily suffer intractable pain. Handicaps such as spina bifida, anencephaly, cerebral palsy, etc., are not intrinsically painful (such children often have an inability to feel pain in parts of their body). Coyne makes no mention whatsoever of medically treating the pain of the babies he proposes to kill. There are many highly effective methods of treating pain — thousands of different medications, devices, and operations that are used every day in hospitals and clinics and in homes around the world to alleviate pain. Much of medical practice is devoted to alleviating pain and suffering. Yet Coyne makes no mention of medically treating the (occasional) pain and suffering of handicapped children. His solution is to kill them.

Coyne sees the trend toward killing patients who suffer, rather than toward alleviating their pain, as a moral advance:

This change in views about euthanasia and assisted suicide are the result of a tide of increasing morality in our world…

Killing handicapped babies is not a moral advance. Devoting extra effort to their medical care, alleviating the (occasional) pain they do suffer, providing them and their families with medical and social and financial help to make their lives as happy and fulfilled as possible would be a moral advance. Respecting the lives of handicapped people is a moral advance. Killing them is moral regress, of a particularly horrendous sort.

Coyne explains the rationale of the euthanasia movement with shocking candor:

It’s time to add to the discussion the euthanasia of newborns, who have no ability or faculties to decide whether to end their lives. Although discussing the topic seems verboten now, I believe some day the practice will be widespread, and it will be for the better. After all, we euthanize our dogs and cats when to prolong their lives would be torture, so why not extend that to humans? Dogs and cats, like newborns, can’t make such a decision, and so their caregivers take the responsibility. (I have done this myself to a pet, as have many of you, and firmly believe it’s the right thing to do. Our pain at making such a decision is lessened knowing that dogs and cats, like newborns, don’t know about death and thus don’t fear it.)

The clarity is bracing. Coyne admits — he seems to celebrate it — that the slippery slope is real. Now that we have normalized abortion and assisted suicide, it’s time to normalize killing of newborns who don’t meet our definition of “fitness.” Let’s treat them, Coyne argues, like we treat our dogs. Love our babies when they’re healthy. Kill them when they are handicapped or a burden. And our babies’ vulnerability — the fact that they don’t understand — is, in the moral universe of euthanasia advocates, all the more reason to kill them. Life, it seems, is a right for the strong and the rational, but expendable for the weak and unaware.

What is particularly chilling about Coyne’s advocacy of infant euthanasia is not merely that he proposes killing handicapped babies. It is chilling that he makes no endorsement of the proper medical care of these children — where is his advocacy for the medical treatment of their (occasional) pain or of their handicap? Furthermore, it is chilling that he uses their vulnerability — the fact that as babies they are unaware and defenseless — as a reason, not to protect them, but to kill them.

So, why does Jerry Coyne want to kill handicapped babies? He has lots of reasons. But they all seem to boil down to one reason: He wants to kill them because they’re handicapped babies. Such honesty is rare from an advocate of euthanasia.


Euthanasia, fundamentally, is about killing vulnerable people. It should be resisted with every bit of our strength.

Saturday, 5 August 2017

On the battle for academic freedom.

In Science Education, Academic Freedom Makes Progress Across America
David Klinghoffer | @d_klinghoffer

In a new ID the Future podcast, Sarah Chaffee surveys progress across the United States in enacting academic freedom (AF) legislation. Despite energetic disinformation campaigns by Darwin-only propagandists, the truth about the value of teaching critical thinking in science class is appreciated by more and more legislators, educators, and activists. Download the episode here, or listen to it here.

Miss Chaffee spoke to AF proponents in Alabama, Oklahoma, and Texas. Her interviewees stress the importance of “refraining from prohibiting teachers” from challenging their students with “more science not less,” of protecting educators from frivolous lawsuits and other career penalties, because “students have a right to know that there are a lot of deep questions here.”

Biologist Ray Bohlin, on the ground in Texas, makes a great point. Everyone always repeats the mantra about how “We need more scientists, We need more scientists, We need more scientists…” And that is true. But what about those students who can’t shake the intuition that life exceeds what Darwinian orthodoxy can explain – as, in fact, many professional scientists are coming to think?

Rather than make fools of those young people and tell them such doubts have no basis in objective science, why not admit the truth – that their insight is being borne out by research, including in mainstream evolutionary biology itself? Admit to them that the question of origins is complex: evolutionary theory has strengths, but also weaknesses.


Surely, in partly confirming what they already sense to be the case, that will have the effect of exciting their curiosity and encouraging them to consider science as a career in their adult lives. And that’s what we all want, says Dr. Bohlin, right?

On testing I.D.

Yes, Intelligent Design Is Testable Science – A Resource Roundup
Evolution News @DiscoveryCSC

After perusing a recent article here (“Desperately Seeking Evolutionary Innovation by Chance”), a reader offers a classic challenge:

You post lots of criticism of evolutionary biology. Have you made any advancement in formulation of your own theory? What predictive power has it shown, if any?
The query, which is really three ways of putting the same question, is a classic because it has been asked so many times in various forms – What predictions does ID make? Is it exclusively a negative case against Darwinian theory? Is it really science? etc. It just so happens that an excellent new ID the Future podcast features Center for Science & Culture Fellow Jonathan Witt discussing exactly this set of issues relating to design theory.

Dr. Witt explains, contrary to the objections of critics, how and why intelligent design is testable. He discusses predictions from biology and astrobiology, and points listeners to an extended list of testable ID predictions available online.  Listen to it here, or download it here.

As to the future of ID, without prematurely giving anything away, that is set to include research into aspects of the genetic code, investigations into genomic elements presumed non-functional under evolutionary theory but predicted to be functional under ID, and much more.

That said, the reader asks valid questions. Intelligent design as a theory of design detection has made many scientific advances over the past few decades. In fact, while not always going explicitly by the name “intelligent design,” ID has made so many advances — often reported in peer-reviewed scientific papers — that it’s impossible to give a thorough answer to the reader’s questions in this brief format. But because we’ve discussed ID’s scientific status and predictive power many times over in the past, that’s not necessary.

The following links are of special interest and relevance:


And that’s just for starters. Dear reader, if you’ll study these links, you will find the answers to your questions and more. Enjoy.

Friday, 4 August 2017

Yet more primeval tech v. Darwin.

Ribosomes Optimized for Speed, Flexibility
Evolution News @DiscoveryCSC

Ribosomes, the machines that translate messenger RNA into protein, show unexpected complexity, forcing molecular biologists to revise what they thought they knew. In particular, ribosomes appear optimized for speed of self-duplication and modularized for flexibility.

Last September, we evaluated a fascinating paper about ribosomes showing that this molecular machine, which translates messenger RNA into protein, “requires the orchestrated function of hundreds of proteins” — and that’s just to get to the “pre-ribosome” stage! Ribosomes are marvels of organization and function. Since then, more discoveries have shown additional design features of ribosomes.

A cell doesn’t have all day to build and operate these machines. In July, a paper in Science Advances revised the half-life of RNAs significantly downward. Instead of 5-20 minutes to float around and get translated, most messenger RNAs (mRNAs) last only about 2 minutes before being degraded by complex recycling pathways (see this from the University of Basel). The production rate and decay rate are important factors in gene regulation. So if you think of “orchestrated function” again, the sheet music won’t do any good if the stage isn’t already set up and the players aren’t in their seats.
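To see how sharply the revised half-life shortens a transcript’s useful life, here is a small numerical sketch. It uses the standard first-order decay model; the 2-minute and 10-minute half-lives are the revised and older ballpark figures mentioned above, not values computed in the paper itself.

```python
def fraction_remaining(t_min, half_life_min):
    """First-order decay: N(t)/N0 = 2^(-t / half_life)."""
    return 2 ** (-t_min / half_life_min)

# After 10 minutes, a transcript with the older ~10-minute half-life
# is still half there; with the revised ~2-minute half-life it is
# almost entirely degraded.
print(fraction_remaining(10, 10))  # 0.5     (old estimate)
print(fraction_remaining(10, 2))   # 0.03125 (revised estimate)
```

With only about 3 percent of an mRNA surviving ten minutes, translation really does have to happen while the message is fresh, which is the point of the orchestra analogy.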

The ribosome is composed of large RNAs and proteins. The paper doesn’t state the half-life of the ribosomal RNAs, which make up the bulk of the ribosome, but it’s safe to assume the lifetime of each RNA is finite — probably a matter of minutes. An extra reason for assuming this is the rapid doubling of ribosomes during cell division. Before the cell can divide, all the proteins needed by the two daughter cells must be translated. This requirement effectively doubles the work for these machines.

How does the cell prepare for this increased workload? Rather than speed up translation, the ribosomes first duplicate themselves, effectively doubling the production capacity. This means that they have to prepare and assemble all their own RNAs and proteins first. Without efficient ways to accomplish this prerequisite, cell division could be seriously delayed.

An interesting model, published in Nature by Johan Paulsson’s team at Harvard, suggests that “Ribosomes are optimized for autocatalytic production.” They knew that ribosomes are already optimized in three ways. Now, they add a fourth:

Many fine-scale features of ribosomes have been explained in terms of function, revealing a molecular machine that is optimized for error-correction, speed and control. Here we demonstrate mathematically that many less well understood, larger-scale features of ribosomes — such as why a few ribosomal RNA molecules dominate the mass and why the ribosomal protein content is divided into 55–80 small, similarly sized segments — speed up their autocatalytic production. 

The authors, as evolutionists, will assume that Darwinian processes achieved this optimization. In their own words, however, we sense their astonishment at what these machines accomplish.

Ribosomes translate sequences of nucleic acids into sequences of amino acids. Their features are therefore typically explained in terms of how they affect translation. However, in recent years it has also become clear that ribosomes are exceptional as products of the ribosomal machinery. Not only do ribosomal proteins (r-proteins) make up a large fraction of the total protein content in many cells, but the autocatalytic nature of ribosome production introduces additional constraints. Specifically, the ribosome doubling time places a hard bound on the cell doubling time, because for every additional ribosome to share the translation burden there is also one more to make. Even for the smallest and fastest ribosomes, it takes at least 6 min, and typically much longer, for one ribosome to make a new set of r-proteins (Supplementary Information); and this estimate does not account for the substantial time that is invested in the synthesis of ternary complexes. This bound seems to explain the observed limits on bacterial growth, because ribosomes must also spend much of their time making other proteins, and shows that ribosomes are under very strong selective pressure to minimize the time they spend reproducing.
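The quoted “at least 6 min” bound can be sanity-checked with back-of-envelope numbers. The figures below are typical textbook values for fast-growing E. coli (roughly 7,300 amino acids across the ~55 ribosomal proteins, translated at about 20 amino acids per second); they are my assumptions for illustration, not numbers taken from the paper.

```python
# Rough check of the "at least 6 min" floor on ribosome self-duplication.
# Assumed figures (typical for fast-growing E. coli; not from the paper):
R_PROTEIN_RESIDUES = 7300  # ~total amino acids across the ~55 r-proteins
ELONGATION_RATE = 20       # amino acids translated per second per ribosome

# A single ribosome working alone needs this long to translate one full
# set of r-proteins -- a hard lower bound on ribosome doubling time.
seconds = R_PROTEIN_RESIDUES / ELONGATION_RATE
print(f"{seconds / 60:.1f} min")  # ~6.1 min
```

That simple ratio lands right on the paper’s stated bound, which is why ribosome doubling time puts a floor under how fast the whole cell can divide.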

Whether “selective pressure” is the mother of invention is debatable to those of us who are Darwin skeptics, but the authors point out something important. The “orchestrated function of hundreds of proteins” has time limits. The conductor is pounding his foot and tapping his baton on the podium, rushing the orchestra to get in place. Imagine how much harder if each player, instrument, chair, and music stand has to make a copy of itself first for a show across town!

Based on observed facts about ribosomal RNAs and proteins, and how quickly they duplicate, the team created a mathematical model based on the assumption that “selective pressure” forces cells to optimize their ribosomes’ doubling time. Although the model worked for fast-reproducing bacteria, they presume the same time pressure constrains eukaryotic cells:

Similar principles might also apply to some eukaryotes, because the ribosomes of eukaryotes are larger and slower. In fact, even organisms in which cell doubling times are not limited by ribosome doubling times would benefit from faster ribosome production, allowing ribosomes to spend more of their time producing the rest of the proteome. This efficiency constraint was recently shown to have broad physiological consequences for cells, and here we demonstrate mathematically that it might also explain many broader features of the ribosome 

In the figure, they show that ribosomes are dominated by a few large RNAs and lots of small proteins, about 55 to 80 of them of similar size. The reason for this arrangement has long puzzled molecular biologists. According to the new model, ribosomes can reproduce their parts quicker when the proteins are relatively short, and there are lots of them. The existing ribosomes can crank out smaller building blocks faster, and the construction workers can assemble them faster, than if they had to wait for long, complex pieces to arrive.
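The parallelism argument above can be sketched as a toy model. The numbers and the idealization of perfectly parallel synthesis are my assumptions, not the authors’ model: splitting the same protein mass into many short segments lets many existing ribosomes translate at once, so the longest single job, not the total, sets the finish time.

```python
# Toy model: time to synthesize a fixed r-protein mass when it is split
# into n equal segments, each translated in parallel by its own ribosome.
# (An idealization -- real assembly adds coordination overheads.)
TOTAL_RESIDUES = 7300  # assumed total amino acids in the r-protein set
RATE = 20              # assumed amino acids per second per ribosome

def synthesis_time_min(n_segments):
    longest_segment = TOTAL_RESIDUES / n_segments
    return longest_segment / RATE / 60

print(synthesis_time_min(1))   # one giant protein: ~6.1 min
print(synthesis_time_min(60))  # 60 small proteins in parallel: ~0.1 min
```

The toy model overstates the speedup, since real ribosome assembly cannot be perfectly parallel, but it captures why dividing the protein content into 55-80 similar small pieces shortens the duplication bottleneck.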

It’s not necessary to get into the weeds to see the elegance of the solution. Ribosomes assemble faster with more, smaller proteins, reducing the time to duplicate themselves, so that they can get on with their main job of translating all the other proteins the cell needs before dividing. The faster you double the translating machinery, the faster you can double everything else in the cell.

The model also needs to explain why ribosomes include a few large RNAs. Evolutionists have typically invoked the “RNA World” story to suggest that ribosomal RNAs represent transitional forms or vestiges from the origin of life before cells happened upon ways to make proteins. Paulsson’s model suggests a different reason — a functional reason. RNAs only need to be transcribed, not translated. RNA enzymatic activity is not as efficient as protein, but RNA is quicker to make. The cell, therefore, is better off using it when time is of the essence.

The above analysis suggests a great efficiency advantage of using rRNA [ribosomal RNA] over protein, whenever chemically possible, and so could explain why ribosomes defy the general rule that enzymes are made mostly of protein (Fig. 1). This finding does not mean that the role of rRNA is merely to ensure appropriate overall dimensions of the ribosome; however, it does provide a fundamental reason for why proteins must be used sparingly in the ribosome, for example, to increase accuracy or speed up translation, whereas rRNA should be used wherever possible without compromising function. If even one-quarter of the rRNA mass were replaced with r-protein without increasing translation rates, many bacteria would not be able to double as quickly as they do.

Do you see optimization (a form of intelligent design) at work? The authors go into more detail about why rRNAs must be large. Their model shows that small rRNAs, unlike the small ribosomal proteins, would actually slow down duplication. Suffice it to say that the observed ratio of rRNA to ribosomal protein increases the efficiency by two orders of magnitude. Here’s a pithy analogy from a layman’s summary of the paper at Science Daily:

“An analogy for our findings would be to think of ribosomes not as a group of carpenters who merely build a lot of houses, but as carpenters who also build other carpenters,” Paulsson said. “There is then an incentive to divide the job into many small pieces that can be done in parallel to more quickly assemble another complete carpenter to help in the process.”

One other mystery about ribosomes might be solved by looking at it as an optimization problem: why do ribosomes vary? Mitochondrial ribosomes differ from those in the cytosol. Eukaryotic ribosomes differ from those of bacteria. If they perform the same function, why aren’t they all the same? Here’s a paper in PLOS ONE from last November that opens a window on a possible reason: ribosome structure is modularized. In “The Modular Adaptive Ribosome,” a team from India says this:

The ribosome is an ancient machine, performing the same function across organisms. Although functionally unitary, recent experiments suggest specialized roles for some ribosomal proteins. Our central thesis is that ribosomal proteins function in a modular fashion to decode genetic information in a context dependent manner.

Interested readers can delve further into this open-access paper to see why ribosomes vary in different cell types or different environments. “A clear example is nervous tissue that uses a ribosomal protein module distinct from the rest of the tissues in both mice and humans,” they say. “Our results suggest a novel stratification of ribosomal proteins that could have played a role in adaptation, presumably to optimize translation for adaptation to diverse ecological niches and tissue microenvironments.”

When it comes to ribosomes, it appears to be a case of optimization all the way down.

Let’s give the last word to the Science Daily article.

Rather than being mere relics of an evolutionary past, the unusual features of ribosomes thus seem to reflect an additional layer of functional optimization acting on collective properties of its parts, the team writes.

“While this study is basic science, we are addressing something that is shared by all life,” Paulsson said. “It is important that we understand where the constraints on structure and function come from, because like much of basic science, it is unpredictable what the consequences of new knowledge can unlock in the future.”

Notice how that downplays evolution’s role, in spite of the authors’ Darwinian views. It also, perhaps without intending to, supports a design perspective, while showing how such a focus leads to productive science.