
Monday, 30 January 2017

A triumph for academic freedom?

Academic Freedom Legislation: Understanding South Dakota's SB 55
Sarah Chaffee

South Dakota's SB 55, sponsored by Sen. Jeff Monroe, just passed the Senate.

The text of SB 55 reads:

No teacher may be prohibited from helping students understand, analyze, critique, or review in an objective scientific manner the strengths and weaknesses of scientific information presented in courses being taught which are aligned with the content standards established pursuant to § 13-3-48.

The Argus-Leader has been covering the bill's journey -- in a fairly evenhanded way. Several of the Argus-Leader's sources, however, appear to be misinformed about the scope of the bill. No, SB 55 doesn't give educators permission "to teach anything they please."

In an article written after the passage of the bill by the Senate Education Committee, the Argus-Leader wrote:

Meanwhile, opponents said the bill, if approved, could have the effect of allowing teachers to weave desired information into a curriculum where it might not be accepted as part of the state standard.

"What this is saying is you can bypass what your local school board is saying," Sen. Troy Heinert, D-Mission, said. "A vote for this is a vote against your local school board."

In an article after the chamber passed SB 55:

"With the passage of this bill, that teacher hired by the school district to teach the curriculum that the school board deems appropriate could go off on other tangents and there would be no way that the principal who does the evaluation by law would have to either reprimand them or bring them in," Sen. Troy Heinert, D-Mission, said.

Glenn Branch, deputy director for the National Center for Science Education, Inc., said the "unclear and flabby" language of the 36-word proposal could open up state science standards to alternate theories like Creationism, climate change denial and white supremacy.

"They'll be able to teach anything they please," Branch said.

The bill could also create serious legal problems for school districts as they could face lawsuits from teachers who are disciplined for not being able to speak to weaknesses of scientific theories or from parents who feel teachers explaining alternate theories under the law violate their children's rights under the Establishment Clause, he said.

"This is a recipe for legal disaster for those school boards," Branch said,

Heinert and Branch are incorrect. The bill would not allow teachers to write their own curriculum. The freedoms it gives teachers are in line with the state science standards.

First, the bill explicitly limits authorization to "scientific information presented in courses being taught which are aligned with the content standards established pursuant to § 13-3-48." (This is the section of South Dakota law that governs the standards revision process.)

Second, this law only authorizes such instruction "in an objective scientific manner." This doesn't sound like permission to bring in off-topic or biased information.

Third, this law is in line with the intent of the South Dakota science standards. In the introduction the standards state,

The concepts and content in the science standards represent the most current research in science and science education. All theories are presented in a way that allow teachers to structure an experience around multiple pieces of scientific evidence and competing ideas to allow students to engage in an objective discussion. The theories are presented because they have a large body of scientific evidence that supports them. These 6 standards were developed in such a manner to encourage students to analyze all forms of scientific evidence and draw their own conclusions.

Through the public hearing process related to adoption of the South Dakota Science Standards, it is evident that there is particular sensitivity to two issues: climate change and evolution. The South Dakota Board of Education recognizes that parents are their children's first teachers, and that parents play a critical role in their children's formal education. The South Dakota Board of Education also recognizes that not all viewpoints can be covered in the science classroom. Therefore, the board recommends that parents engage their children in discussions regarding these important issues, in order that South Dakota students are able to analyze all forms of evidence and argument and draw their own conclusions.

This legislation allows for the discussion of scientific information on a variety of scientific theories. It does not authorize (much less require) presentation of "all viewpoints" on sensitive issues -- which the Board of Education correctly notes can't necessarily be covered in science class. But proponents of the bill brought up evolutionary theory as an area where relevant scientific information isn't discussed. This would seem to contradict the statement that "All theories are presented in a way that allow teachers to structure an experience around multiple pieces of scientific evidence and competing ideas to allow students to engage in an objective discussion."

Fourth, and most importantly, Branch errs by saying that this legislation will allow creationism to be taught. Creationism has been determined to be religion by the Supreme Court (Edwards v. Aguillard), and therefore unconstitutional to teach in public schools. This supersedes any individual state's legislation. If SB 55 were to become law and a teacher were to teach creationism and be sued, the teacher would receive no protection in court from this legislation.

SB 55 gives teachers who want to teach the scientific strengths and weaknesses of topics in the curriculum the freedom to do so without risking their jobs.

Unfortunately, another Argus Leader article, recounting various bills filed by Sen. Monroe, ends on an ominous note about SB 55:

The full text is only one sentence, but those 40 words are enough to stir fear among science teachers, administrators, public educators and parents.


The truth would seem to be the opposite. South Dakota science teachers, and therefore the students and families they serve, will likely find this legislation heartening.

Queen of heaven: The Watchtower Society's commentary.

QUEEN OF THE HEAVENS

The title of a goddess worshiped by apostate Israelites in the days of Jeremiah.—Jer 44:17-19.

Although the women were primarily involved, apparently the entire family participated in some way in worshiping the “queen of the heavens.” The women baked sacrificial cakes, the sons collected the firewood, and the fathers lit the fires. (Jer 7:18) That the worship of this goddess had a strong hold on the Jews is reflected by the fact that those who had fled down to Egypt after the murder of Governor Gedaliah attributed their calamity to their neglecting to make sacrificial smoke and drink offerings to the “queen of the heavens.” The prophet Jeremiah, though, forcefully pointed out the wrongness of their view.—Jer 44:15-30.

The Scriptures do not specifically identify the “queen of the heavens.” It has been suggested that this goddess is to be identified with the Sumerian fertility goddess Inanna, Babylonian Ishtar. The name Inanna literally means “Queen of Heaven.” The corresponding Babylonian goddess Ishtar was qualified in the Akkadian texts by the epithets “queen of the heavens” and “queen of the heavens and of the stars.”

It appears that Ishtar worship spread to other countries. In one of the Amarna Tablets, Tushratta, writing to Amenophis III, mentions “Ishtar, mistress of heaven.” In Egypt, an inscription of King Horemheb, believed to have reigned in the 14th century B.C.E., mentions “Astarte [Ishtar] lady of heaven.” A fragment of a stele found at Memphis from the reign of Merneptah, Egyptian king believed to have reigned in the 13th century B.C.E., represents Astarte with the inscription: “Astarte, lady of heaven.” In the Persian period, at Syene (modern Aswan), Astarte was surnamed “the queen of the heavens.”


The worship of the “queen of the heavens” was practiced as late as the fourth century C.E. In about 375 C.E., in his treatise Panarion (79, 1, 7), Epiphanius states: “Some women decorate a sort of chariot or a four-cornered bench and, after stretching over it a piece of linen, on a certain feast day of the year they place in front of it a loaf for some days and offer it up in the name of Mary. Then all the women partake of this loaf.” Epiphanius (79, 8, 1, 2) connected these practices with the worship of the “queen of the heavens” presented in Jeremiah and quotes Jeremiah 7:18 and 44:25.—Epiphanius, edited by Karl Holl, Leipzig, 1933, Vol. 3, pp. 476, 482, 483.

Saturday, 28 January 2017

Trump means the end of the republic? Pros and cons.

Would you buy a remodeled nineteenth-century theory from this man?

Francis Collins and the Overselling of Evolution
Casey Luskin 

In two recent posts I discussed the continuing misrepresentations of intelligent design by Francis Collins, whose confirmation as head of the National Institutes of Health in the Obama administration was announced on August 7.

Today I would like to shift the focus to Dr. Collins' misrepresentation of evolutionary biology--or more precisely, to his misrepresentation of the scientific usefulness of evolution to biology. Collins has every right to endorse neo-Darwinian evolution if he wishes, but his view of evolution's value to scientific research is pretty much over-the-top. In a recent interview, he claimed:

Trying to do biology without evolution would be like trying to do physics without mathematics.

There is no doubt that modern neo-Darwinian theory has had an important influence on biology, but Collins' grandiose claim says more about the political nature of Darwin-advocacy than it does about evolution itself.

A number of leading scientists feel very differently from Collins. As National Academy of Sciences member Philip Skell has written, the hyping of neo-Darwinism's importance to science goes well beyond reality:

I recently asked more than 70 eminent researchers if they would have done their work differently if they had thought Darwin's theory was wrong. The responses were all the same: No. ... Darwinian evolution -- whatever its other virtues -- does not provide a fruitful heuristic in experimental biology. ... the claim that it is the cornerstone of modern experimental biology will be met with quiet skepticism from a growing number of scientists in fields where theories actually do serve as cornerstones for tangible breakthroughs.
(Philip Skell, "Why Do We Invoke Darwin? Evolutionary theory contributes little to experimental biology," The Scientist (August 29, 2005).)

In another essay, Dr. Skell added that he had

queried biologists working in areas where one might have thought the Darwinian paradigm could guide research, such as the emergence of resistance to antibiotics and pesticides. Here, as elsewhere, I learned that the theory had provided no discernible guidance in choosing the experimental designs but was brought in, after the breakthrough discoveries, as an interesting narrative gloss.
(Philip Skell, Politics and the Life Sciences, Vol. 27(2):47-49 (October 9, 2008).)

Evolutionary biologist Jerry Coyne likewise admitted in Nature that "if truth be told, evolution hasn't yielded many practical or commercial benefits. Yes, bacteria evolve drug resistance, and yes, we must take countermeasures, but beyond that there is not much to say."

When testifying before the Texas State Board of Education this past March, Dr. Ray Bohlin was asked about the utility of evolution for biological research. He answered:

I'd be willing to say that virtually 90, 95% of all molecular and cell biology, which is where my Ph.D. is in, does not require evolution whatsoever.

Similarly, Don Ewert, who holds a Ph.D. in microbiology and has been a biology researcher for over 30 years (including 20 years at the Wistar Institute), was asked to "address the notion that very little in biology is testable except for in the light of evolution." Ewert answered:

If you look at scientific textbooks and ask the question, if the theory of evolution were not in that textbook, what material would not make sense? And I would say that very little, if any, would not make sense. In fact, I think that anybody who learned the material apart from Darwin in those textbooks could go on to be successful scientists, veterinarians, and medical doctors. ... I would say that there is very little that you cannot fully understand apart from the theory of evolution.

Clearly evolution is important to some research, but Collins' claim that "[t]rying to do biology without evolution would be like trying to do physics without mathematics" says more about Collins' hardline devotion to neo-Darwinism than it says about modern evolutionary biology itself. Fortunately, there remain highly credible scientists who do not feel the need to uphold Darwinism as the alpha and omega of biology.

The designer counters Haldane's fossil rabbit gambit.

Sea Anemone Is a Proverbial "Precambrian Rabbit"
Cornelius Hunter 

When asked what evidence would disprove evolution, the 20th-century evolutionist J.B.S. Haldane is famously said to have responded, "a fossil rabbit in the Precambrian." In other words, a fossil rabbit would have to be found in strata dating to long before rabbits, or mammals for that matter, are normally found.

And by "long," we mean somewhere between roughly one-half a billion years to several billion years. It was an exercise in what philosophers refer to as theory protectionism -- erecting insurmountable protective barriers around a theory. The fossil record was sufficiently understood in Haldane's day to know that such as finding was highly unlikely. And it was also known that much less astounding, and more feasible, fossil findings would (or at least should) pose serious problems for evolutionary theory.

In fact there are many such contradictions in the rocks, but if a rabbit in the Precambrian is the evidential standard, then evolution is comfortably safe. Haldane's Precambrian rabbit response was also an exercise in naïve falsificationism -- the thinking that a single finding is going to take down a theory so deeply embedded in our thinking, and so confidently held to be true. In fact, evolutionary theory has survived myriad contradictory evidences of at least as much severity as a Precambrian rabbit without so much as skipping a beat.

Consider, for example, the genome of the starlet sea anemone, Nematostella vectensis. Here is how one report summarized it:

The genome of the sea anemone, one of the oldest living animal species on Earth, shares a surprising degree of similarity with the genome of vertebrates, researchers report in this week's Science. The study also found that these similarities were absent from fruit fly and nematode genomes, contradicting the widely held belief that organisms become more complex through evolution. The findings suggest that the ancestral animal genome was quite complex, and fly and worm genomes lost some of that intricacy as they evolved.

In other words, it was the genomic equivalent of Haldane's Precambrian rabbit -- a Precambrian genome had, err, all the complexity of species to come hundreds of millions of years later. In other cases it has more complexity than species such as worms and flies, which, according to evolution, must have lost enormous amounts of genetic complexity.

The lead author of the sea anemone study explained, "We have this basic toolkit now for the whole animal kingdom." Of course the idea of foresight is contradictory to evolutionary theory. As one evolutionist admitted, it is surprising to find such a "high level of genomic complexity in a supposedly primitive animal such as the sea anemone." It implies that the ancestral animal "was already extremely highly complex, at least in terms of its genomic organization and regulatory and signal transduction circuits, if not necessarily morphologically."

Or as another evolutionist put it:

It is commonly believed that complex organisms arose from simple ones. Yet analyses of genomes and of their transcribed genes in various organisms reveal that, as far as protein-coding genes are concerned, the repertoire of a sea anemone -- a rather simple, evolutionarily basal animal -- is almost as complex as that of a human.

None of this makes any sense in the light of evolutionary theory. Of course it is "commonly believed" by evolutionists "that complex organisms arose from simple ones." That would be rather fundamental to the theory. Yet we repeatedly find early complexity. This is another example of how resistant evolution is to testing and falsification.

Paul Nelson on academic freedom and the I.D. movement.

Event: In Billings, MT, Paul Nelson Will Speak on Intelligent Design and Scientific Freedom
Evolution News & Views

This weekend, Discovery Institute Senior Fellow Paul Nelson travels to Billings, Montana, to speak on "Intelligent Design, Evolution, and the Future of Free and Open Science." His venue is the Big Sky Worldview Forum, to be held Friday and Saturday, January 27-28.

Dr. Nelson has subdivided his theme into four parts:

Design as the Only Reasonable Explanation for Biology

The Metamorphosis Paradox and the Unsolved Problem of Macroevolution

Minimal Complexity as the Key Clue to the Origin of Life

Design Triangulation as a Scientific Method


All talks will be held in the Missouri Room at the Red Lion Hotel and Convention Center, 1223 Mullowney Lane, Billings, MT. A schedule of speakers is here. Enter via the North Convention Center doors. For additional details, please contact the event coordinator, Dick Pence, at 406-672-9207, or via email at rapence45@gmail.com.

Friday, 27 January 2017

Why a finite universe remains a problem for atheism

Cosmology Is Having Its Own Darwinian Crisis
Rob Sheldon 

Editor's Note: Denyse O'Leary writes in our current cover story about how "Many in cosmology have never made any secret of their dislike of the Big Bang," since on its evidence the universe appears "suddenly created" and "finely tuned." We asked another new contributor, physicist Rob Sheldon, for his take on an interesting 2010 arXiv paper by Roger Penrose and V.G. Gurzadyan, "Concentric circles in WMAP data may provide evidence of violent pre-Big-Bang activity," that tries to solve the problem of the Big Bang by substituting an "eternal, cyclic cosmos."


Dr. Sheldon received his PhD from the University of Maryland, College Park. After appointments at the University of Bern in Switzerland, Boston University, and the University of Alabama in Huntsville, he is currently consulting with NASA's Marshall Space Flight Center.



As you know by now, the finiteness of the universe is extremely disturbing to materialists, who want an infinite universe to avoid ever having to discuss a creator. It's a gambit pioneered by Democritus and Epicurus, ridiculed by Aristotle, and promoted by Lucretius and then the 17th-century materialists. The usual counter to materialism was biology, beginning with Aristotle, because of the inescapable evidence of purpose, of teleology. This is what made Darwin so very, very popular. He provided a materialist answer to the evidence of teleology in biology. 
But the success was short-lived, because some sixty years later, around 1915-1919, Einstein developed his "General Theory of Relativity," demonstrating that the universe had a beginning. This is documented by the astronomer Robert Jastrow in his 1979 book God and the Astronomers. Stanley Jaki expands the critique in his important book God and the Cosmologists. Both of them point out that the discovery of the beginning of the universe undermines materialism. (Jaki's critique is, of course, the more scathing.)

Sir Roger Penrose is a member of the Humanist Society, which is the polite version of "New Atheists." So he has an interest in eliminating the appearance of a creation event. One of the early attempts at this was to posit a "bouncing" universe that would alternately expand and contract and expand again. Stephen Hawking teamed up with Penrose to demonstrate that this was impossible, because the contraction would lead to a black hole, from which nothing could bounce.

Recent suggestions coming from "quantum loop gravity" posit an incompressible "stringy" physics below the size scale of the proton that can cause the universe to bounce out of a black hole. My objection to most of those theories is that the forces they invoke are unobservable right now, so it is akin to adding a "tooth fairy" to the theory. One rule-of-thumb in physics is that every theory can invoke one tooth fairy, but never two. All these theories have a second tooth fairy that makes the first one vanish.

But the real demise of the "bouncing Big Bang" was the discovery that there wasn't enough matter in the universe to slow down the expansion of the Big Bang, so there will never be a "Big Crunch." Instead, the galaxies will fly further and further apart as the stars burn out into cold cinders and the black hole at the center of every galaxy will slowly consume every cinder until untold eons later the black holes evaporate via "Hawking radiation" into a vast emptiness of lonely photons.

Penrose, however, has lost neither his hope nor his imagination. He suggests that when the last black hole vanishes, the universe will have no measuring sticks, no matter in it. At this point it is ruled completely by the laws of electromagnetics and therefore will spontaneously shrink 50 orders of magnitude until it generates matter again, at which point it will look exactly the same as the Big Bang looked at 10^-34 seconds -- hot and seething with energy and creative potential. And you thought the Phoenix was a silly Greek myth?

Presumably, the signature of this shrinking will be a gravity wave set up in the fabric of space-time, such that the resulting Big Bang is the second event of creation. Thus we can look at the distribution of the Cosmic Microwave Background and see an echo of the first event. Since Penrose is a theorist, he hired an experimentalist to do the data mining in the CMB data set, and the arXiv paper supposedly finds a ring of brighter CMB that Penrose attributes to this effect. So, is this a classic "hypothesis -- prediction -- validation" paper?

I doubt it, for the following reasons:

Penrose's theory is so vague in particulars that it can be used to fit any set of data.

The ring that is observed looks too "perfect," which suggests it is an artifact of the data processing.

The processing of the CMB data also involves a "ring" type of comparison to remove the "noise" in the detector. Basically the CMB signal is about 2 orders of magnitude below the noise of stars, nebulae, dust, etc., and it takes a huge amount of data processing to extract it. So I think this paper simply magnifies some of the deficiencies of the data collection.

I really hate to say this, but the paper never made it out of the arXiv server and into the peer-reviewed literature. So I would imagine that my criticisms were also made of the paper, and the authors either couldn't respond to them, or the effect went away when they did.

Insofar as Sir Roger's theory is specific, it makes certain predictions about reality that don't seem to work too well in the present. This "evaporation" of matter into photons, for example, recalls a theory about the instability of the proton that was common for thirty years. Sir Fred Hoyle wanted protons to spontaneously appear, which means they also spontaneously disappear. So if you can collect some 10^32 protons in one place and look for 10^8 seconds, you can put a rather strict upper limit on this "evaporation" likelihood. This was done in a detector in Japan, and no protons were ever seen to decay. This means we need to invoke a second, "cloaking tooth fairy" to cover the first, and the theory starts to look more and more like the pathology of Darwinism.
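The arithmetic behind that "strict upper limit" is worth making explicit. Assuming zero observed decays, a standard Poisson bound (about 2.3 expected events excluded at 90% confidence -- a conventional statistical figure, not one given in the text) turns the round numbers above into a lower bound on the proton lifetime:

```python
# Back-of-envelope lifetime bound from a null proton-decay search.
# N and T are the round figures quoted in the text; the 2.3 factor
# is the standard 90%-confidence Poisson bound when zero events are seen.

N = 1e32          # protons under observation
T = 1e8           # observation time in seconds (roughly three years)
poisson_90 = 2.3  # expected decays excluded at 90% confidence

tau_seconds = N * T / poisson_90   # lower bound on the proton lifetime
tau_years = tau_seconds / 3.15e7   # about 3.15e7 seconds per year

print(f"tau > {tau_years:.1e} years")
```

With 10^32 protons watched for 10^8 seconds, the bound works out to more than 10^32 years -- vastly longer than the age of the universe, which is why the non-observation is so constraining.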


Which, in fact, it is.

Thursday, 26 January 2017

Complex specified information: It's everywhere.

The Spike Code: Another Information-Rich Signaling System in Neurons
Evolution News & Views

It's time for another paradigm change. "These findings suggest that a fundamental assumption of current theories of motor coding requires revision," as the Abstract of a new paper in the Proceedings of the National Academy of Sciences puts it. Neuroscientists from Emory University have uncovered another coded signaling system, this time in nerves and muscles. The paper's categories include "Computational Neuroscience" and "Information Theory."

Neurons and muscles have a strong relationship. To get a bicep to flex, or a diaphragm to bend for breathing, the muscles involved need to be triggered. The triggers come from nerves connected to the muscle fibers. Until this paper came along, most neuroscientists figured that the brain just sped up the "spike rate" of pulses to the muscle to get them to respond. The emerging view is much richer in implications for intelligent design. It's not just the rate; it's the timing.

A crucial problem in neuroscience is understanding how neural activity (sequences of action potentials or "spikes") controls muscles, and hence motor behaviors. Traditional theories of brain function assume that information from the nervous system to the muscles is conveyed by the total number of spikes fired within a particular time interval. Here, we combine physiological, behavioral, and computational techniques to show that, at least in one relatively simple behavior--respiration in songbirds--the precise timing of spikes, rather than just their number, plays a crucial role in predicting and causally controlling behavior. These findings suggest that basic assumptions about neural motor control require revision and may have significant implications for designing neural prosthetics and brain-machine interfaces. [Emphasis added.]

Working with six male Bengalese finches that were anesthetized, the researchers monitored their breathing while recording neural spikes to the lungs. They were able to stimulate the motor neurons arbitrarily in vivo and watch what happens. This is delicate work; they had to work at 250 micro-amp levels. To locally block certain nerve-muscle junctions, they applied curare -- the compound Brazilian hunters use on poison darts -- but not enough to paralyze the poor birds! (How do you say that in scientese? "Applying too much curare and fully paralyzing EXP [expiratory muscle group] would endanger the wellbeing of the animal.")

Next, they analyzed triplets of spikes where the middle spike was variable. They wanted to test whether a "neural code" exists in the train of spikes. To do this, they had to measure interspike intervals (ISIs) at millisecond resolution. If the brain controls these intervals, and the muscles respond accordingly (for instance, with changes in air pressure), it would signify the presence of a neural code.
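To make the notion of interspike intervals concrete, here is a minimal sketch. The function name and the example timestamps are invented for illustration; they are not data from the study:

```python
# Hypothetical sketch: computing interspike intervals (ISIs) from
# spike timestamps recorded in milliseconds. The timestamps below
# are invented examples, not measurements from the PNAS paper.

def interspike_intervals(spike_times_ms):
    """Return the intervals between consecutive spikes, in ms."""
    return [b - a for a, b in zip(spike_times_ms, spike_times_ms[1:])]

# Two spike triplets whose middle spike differs by 2 ms -- the kind
# of millisecond-scale difference the study found to matter:
triplet_a = [0.0, 12.0, 24.0]
triplet_b = [0.0, 10.0, 24.0]

print(interspike_intervals(triplet_a))  # [12.0, 12.0]
print(interspike_intervals(triplet_b))  # [10.0, 14.0]
```

The point of the experiment was that shifting a single middle spike like this, with the total spike count unchanged, measurably changed the behavior.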

With these techniques they were able to isolate properties of the neuromuscular response for a variety of experimental tests. In particular, they were looking for the effects of different signal patterns. "Therefore, we believe that our muscle stimulation experiments were only activating the axons of motor neurons and were not activating muscle fibers directly," they say. "This finding allowed us to make insightful comparisons between the results of our spike pattern and stimulation analyses." After gathering large data sets and crunching them with software, they came to the conclusion they had found a code -- not just in songbirds, but all animals:

Overall, we have shown that respiratory motor unit activity is controlled on millisecond timescales, that precise timing of spikes in multispike patterns is correlated with behavior (air sac pressure), and that muscle force output and the behavior itself are causally affected by spike timing (all on similar temporal scales) (Figs. 2D, 3C, and 4C). These findings provide crucial evidence that precise spike timing codes casually [sic, causally] modulate vertebrate behavior. Additionally, they shift the focus from coding by individual spikes (1, 14, 19) to coding by multispike patterns and from using spike timing to represent time during a behavioral sequence (20, 21) to coding its structural features. Put another way, although it is clear that earlier activation of neurons would lead to earlier activation of muscles, this relationship only accounts for encoding when a behavior happens (10, 22). Here, we show that changing the timing of a single spike within a burst by ∼1 ms can also affect what the animal will do, not just when it will do it. Furthermore, we showed that the effect of moving a single spike is stable across animals (Fig. 2). We believe that this precise spike timing code reflects and exploits muscle nonlinearities: spikes less than ∼20 ms apart generate force supralinearly (SI Appendix, Fig. S12), with stronger nonlinearities for shorter ISIs [interspike intervals]. Thus, changing the first ISI from 12 to 10 ms significantly alters the effect of the spike pattern on air pressure (Fig. 2B). Such nonlinearities in force production as a function of spike timing have been observed in a number of species (23⇓-25), highlighting the necessity of examining the role of spike timing codes in the motor systems of other animals. 
Importantly, our findings show that the nervous system uses millisecond-timescale changes in spike timing to control behavior by exploiting these muscle nonlinearities, even though the muscles develop force on a significantly longer timescale (tens of milliseconds as shown in Fig. 3B).

They speak of the "surprising power of spike timing to predict behavior," indicating that patterns of spikes coming down the nerves are the determining factor in behavior, not just how fast they come.

Is this really a code? Well, count the number of times they refer to coding directly, besides the suggestion in the title, "Motor control by precisely timed spike patterns." Result: 29 times. "Information," a related concept in coding, gets 51 mentions. "Precision" and related terms, important for conveying information, get 14 mentions. "Evolution" gets zero mentions.
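A tally like this is easy to reproduce on any paper's text with a few lines of code. A minimal sketch, with an invented sample string standing in for the paper:

```python
import re

# Hypothetical sketch: tallying term families in a paper's text.
# The sample string is invented for illustration; running this on
# the actual paper's text would reproduce the counts cited above.

def count_terms(text, stems):
    """Case-insensitive count of words beginning with each stem."""
    words = re.findall(r"[a-z]+", text.lower())
    return {stem: sum(w.startswith(stem) for w in words) for stem in stems}

sample = ("Precise spike timing codes convey information. "
          "Coding by multispike patterns carries information precisely.")

print(count_terms(sample, ["cod", "inform", "precis"]))
# {'cod': 2, 'inform': 2, 'precis': 2}
```

Counting by word stem ("cod" matches both "codes" and "coding") is what groups the related terms the way the tally above does.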

Take a moment to watch this video of a nightingale singing on YouTube and prepare to say Wow!

How much information does the forebrain have to send to the vocal muscles to achieve that kind of performance? The authors note in their concluding discussion, "Because respiration is critical to vocalization in songbirds, it will be of special interest to record respiratory timing patterns during singing...." Indeed!

Think of the possibilities this discovery opens for further research. A multitude of questions come to mind: how does the brain know what pattern to send to a distant muscle to get it to act in a certain way? Are the codes inherited or learned? How reproducible are the patterns from one animal to another? Can a spike code from one bird sent to the nerves of another bird make it sing the same song? How does a human mind interact with the brain to turn a choice into an action? What translates the thought "I must run" into a spike timing pattern that makes you run? How rich, do you think, is the spike timing code in a performance of Chopin's Fantaisie-Impromptu? (See the video at the top.)


Being a new discovery, this "spike timing code" will undoubtedly prompt much more research on more animals in more settings. Since Darwinian theory provided no help to these researchers (how does chance produce a code, anyway?), a design approach is well placed to advance understanding in this area quickly and in significant ways. Why? ID already knows a lot about codes.

Wednesday, 25 January 2017

On Darwinism creating Darwinism.

Evolution as Carpenter: Scientist Concludes Repetitive Elements "Are an Important Toolkit"

Cornelius Hunter


I'm not an expert carpenter, but if I know what needs to be built I'll eventually get there. It may not be beautiful, but given a blueprint I can build a structure.

What if I didn't have that blueprint, though? What if I had no idea what needed to be built -- no notion of where the task was headed? Furthermore, what if I had no knowledge of structures in general? Just randomly cutting wood and pounding nails probably would not end well. This is the elephant in the room for evolution, for according to evolutionary theory, random actions are precisely what built the world.

It is what the Epicureans claimed two thousand years ago, and this random-creation hypothesis fares no better today than it did then. In fact, with the findings of modern science we now know far more about the details than did the Epicureans, and it has just gotten worse for their hypothesis.

This is why evolutionists habitually appeal to teleological language. Regulatory genes "were reused to produce different functions"; dinosaurs "were experimenting" with flight; and the genome was "designed by evolution to sense and respond." Such Aristotelianism, which casts evolution as an intelligent process working toward a goal, makes the story more palatable; after all, evolution had a blueprint in mind.

All of this makes for a glaring internal contradiction: on the one hand evolution has goals; yet on the other hand evolution is a mindless, mechanical process driven by random, chance events. As University College London molecular neuroscientist Jernej Ule explains:

We're all here because of mutations. Random changes in genes are what creates variety in a species, and this is what allows it to adapt to new environments and eventually evolve into completely new species.
This makes evolution, rather inconveniently, dependent on random events (no, natural selection doesn't change this -- it cannot coax the right mutations to occur), which, by definition, do not work towards a goal -- they do not build anything:

This ambiguity creates a great challenge. On the one hand, mutations are needed for biological innovation, and on the other hand they cause diseases.
Indeed. This is not looking good. As Washington State University biologist Michael Skinner recently wrote:

[T]he rate of random DNA sequence mutation turns out to be too slow to explain many of the changes observed. Scientists, well aware of the issue, have proposed a variety of genetic mechanisms to compensate: genetic drift, in which small groups of individuals undergo dramatic genetic change; or epistasis, in which one set of genes suppress another, to name just two. Yet even with such mechanisms in play, genetic mutation rates for complex organisms such as humans are dramatically lower than the frequency of change [between species if evolution is true] for a host of traits, from adjustments in metabolism to resistance to disease.
Whereas Skinner appeals to epigenetics to save the theory, Ule appeals to repetitive elements. Evidence has shown that far from being "junk DNA," repetitive elements play a genetic regulatory role. As a result, evolutionists such as Ule have concluded repetitive elements "are an important toolkit for evolution."

Like any good carpenter, evolution has a toolkit.

Ule and his co-workers are now elaborating on the details of how the repetitive-element toolkit might work. It goes like this: (i) random mutations gradually modify repetitive elements; (ii) these repetitive elements are sometimes incorporated as part of the blueprint instructions for making a protein; (iii) several complicated molecular machines either repress or allow such incorporation of these repetitive elements into the blueprint.

According to Ule, this complicated process, including these two opposing machines that are "tightly coupled," allows evolution to experiment and successfully evolve more complicated species, such as humans:

We've known for decades that evolution needs to tinker with genetic elements so they can accumulate mutations while minimising disruption to the fitness of a species. ... This [process we have discovered] allows the Alu elements to remain in a harmless state in our DNA over long evolutionary periods, during which they accumulate a lot of change via mutations. As a result, they become less harmful and gradually start escaping the repressive force. Eventually, some of them take on an important function and became indispensable pieces of human genes. To put it another way, the balanced forces buy the time needed for mutations to make beneficial changes, rather than disruptive ones, to a species. And this is why evolution proceeds in such small steps - it only works if the two forces remain balanced by complementary mutations, which takes time. Eventually, important new molecular functions can emerge from randomness.
These suggestions from Skinner and Ule are the latest in a long, long line of ideas evolutionists have come up with, in an attempt to make sense of their random-creation hypothesis. In modern evolutionary thought, the first such idea was natural selection.

The reason there is a long, long line of ideas is that none of them works. They are becoming ever more complicated, ever more unlikely, and equally useless in solving the basic problem of random events constructing the world.

But Ule's latest attempt highlights yet another problem: serendipity. All of the solutions, from natural selection on up to epigenetics and repetitive elements, rely on serendipity, and this reliance is increasing. Ule's solution is serendipity on steroids, for the idea holds that evolution just happened to create (i) repetitive elements, and (ii) the complicated, finely tuned, opposing molecular machines that repress or allow those repetitive elements into the protein instructions.

This isn't going to work, but the point here is that even if it did somehow work, it amounts to evolution creating evolution. In order for evolution to have created so many of the species, it first must have lucked into creating these incredible mechanisms, which then in turn allowed evolution to occur. And all of this must have occurred with no foresight.

Imagine a car factory that uses highly complex machines, such as drill presses and lathes, to build the cars. Now imagine the factory first creating those machines by random chance, so that then the cars could be built by yet more random chance events. This violates the very basics of science. It is just silly.

Zygote v. Darwin.

From Genome to Body Plan: A Mystery
Evolution News & Views 

Decoding genomes has been one of the most important advances of the last sixty years, but it's really just the start of a far larger mystery: the mystery of development. You can appreciate the magnitude of the problem in Illustra's animation of a chick embryo in "Embryonic Development" from Flight: The Genius of Birds. An even more majestic depiction closer to home takes you from the moment of conception to the birth of a baby in this animation by RenderingCG. How does a linear genome produce such an astounding product? Then, how does the moving, living being reduce its information back down to a genome in a single cell?

Three German scientists discuss the mystery in a paper in Nature, "From morphogen to morphogenesis and back," which can be loosely translated, "From genome to body plan and back."

A long-term aim of the life sciences is to understand how organismal shape is encoded by the genome. An important challenge is to identify mechanistic links between the genes that control cell-fate decisions and the cellular machines that generate shape, therefore closing the gap between genotype and phenotype. The logic and mechanisms that integrate these different levels of shape control are beginning to be described, and recently discovered mechanisms of cross-talk and feedback are beginning to explain the remarkable robustness of organ assembly. The 'full-circle' understanding of morphogenesis that is emerging, besides solving a key puzzle in biology, provides a mechanistic framework for future approaches to tissue engineering.
Stop right there. Why must the framework be mechanistic? Didn't they just speak of "the logic and mechanisms that integrate" at different levels? Logic is not mechanistic; it is conceptual. Logic can be instantiated in circuits, on paper, and in human language. Mechanism may be the primary aspect of morphogenesis that natural science can investigate, but restricting one's investigation to a "mechanistic framework" is like trying to find the message of a book by considering only the paper and the ink.

After a brief history of morphogenesis theory from Aristotle to the era of molecular genetics, the authors claim that problems in the "mechanics centered approach" were finally solved in the 1970s. Here, they confuse the football players with the strategy of the play (so to speak). They describe the actions of the players, as if they operate mechanically, while hiding the quarterback's game plan behind passive-voice verbs ("is controlled" -- by whom?).

The initial landmark publication from this herculean project revealed that the first step in morphogenesis is the subdivision of the embryo into discrete regions by a cascade of 'patterning' genes. Only then is each domain converted to the corresponding region of the body through a bespoke morphogenetic program, therefore establishing that the timing, positioning and inheritance of tissue-shaping events is controlled genetically. Subsequent molecular characterization in Drosophila and other systems revealed that patterning genes mainly encode signalling pathways that mediate long-range tissue patterning and gene-regulatory networks that control fate decisions; however, such genes do not control cell and tissue shape directly. Rather, the task of physically shaping cells and tissues is performed using a toolbox of essential cellular machines discovered by cell biologists, which are present in all cells in the embryo.
We appreciate the mention of a program, a toolbox, and machines, but who wrote the program? Who designed the tools and machines? It's as if the authors are watching tools moving and operating without any hands:

Collectively, these studies reveal a picture in which the shape of tissues is determined by the combined actions of genetic, cellular and mechanical inputs (Box 1). Although a number of the main players are now known, and their functions understood, we still know surprisingly little about how the various levels of shape control are integrated during morphogenesis.
"Are integrated" -- by whom? Passive voice verbs screen these authors from identifying plausible causes. And so by restricting their attention to how pieces of matter "are integrated," they witness rabbits coming out of hats without a magician:

The focus of this Review is the logic and mechanisms that connect gene regulation, cellular effectors and tissue-scale mechanics -- the troika of tissue shaping. We describe how shape, at the local level, emerges from the interaction of tissue-specific genetic inputs and the self-organizing behaviour of core intracellular machines. We then discuss how this mechanistic logic is used in several modified forms to produce a variety of shaping modes. It is becoming clear that the chain of command from gene to shape is not unidirectional, owing to the discovery of mechanisms that enable changes in tissue architecture and mechanics to feed back to 'upstream' patterning networks. The emerging integrated view of tissue shaping therefore goes full circle, from morphogen to morphogenesis and back.
Mechanistic philosophy gets hopelessly muddled here. To see why, convert the passive voice to active voice. "Mechanistic logic is used" should mean, "Somebody or something uses logic to operate a machine." A baby's shape doesn't just "emerge" by "self-organizing behavior" except in the imagination of a philosophical materialist.

From there, the authors get into the weeds, discussing blastocysts, fruit flies, "evolutionarily conserved mechanosensitive pathways" and other matters. It should be obvious, though, that if you start on the wrong track you are not going to get where you want to go (i.e., understanding morphogenesis). In this dreamland, rabbits will pop out of hats by emergence. Babies will self-organize. Programs will work without a programmer.

The authors marvel at how "organoids" emerge from induced pluripotent stem cells. Is this an example of self-organization? After thinking about it, they admit that more must be going on.

A stunning demonstration of the full-circle nature of morphogenesis, in which genes regulate tissue shaping and vice versa, comes from the study of organoids. Here, cultured pluripotent cells self-assemble into organ-like structures that are remarkably similar to those formed in the embryo. Organoids can even be generated from patient-derived induced pluripotent stem cells, which means that this technology has the potential to herald a new era in tissue engineering for the modelling of disease and the development of therapies that is based on the principles of developmental biology.... Organoid formation itself demonstrates that cells can become organized in the absence of predetermined long-range external patterning influences such as morphogen gradients or mechanical forces, which are a cornerstone of classic developmental biology. This unexpected lack of requirement for long-range pre-patterning has led to organoid formation being described as an example of 'self-organization', which is defined classically as the spontaneous emergence of order through the interaction of initially homogeneous components. Although some aspects of organoid formation may show self-organizing properties, it is already clear that cell heterogeneity and patterned gene expression play a crucial part throughout.
The organoids will never form by self-organization, therefore, unless the coded instructions in each cell direct them according to "patterned gene expression" -- that is what is crucial. They have a game plan, like marching-band players scattered across the field who then assemble into a formation. Each player knows where to go.

The same issue of Nature takes a mechanistic look at the related issue of hierarchical organization. How does that "emerge"? In their article "Scaling single-cell genomics from phenomenology to mechanism," Tanay and Regev begin:

Three of the most fundamental questions in biology are how individual cells differentiate to form tissues, how tissues function in a coordinated and flexible fashion and which gene regulatory mechanisms support these processes. Single-cell genomics is opening up new ways to tackle these questions by combining the comprehensive nature of genomics with the microscopic resolution that is required to describe complex multicellular systems. Initial single-cell genomic studies provided a remarkably rich phenomenology of heterogeneous cellular states, but transforming observational studies into models of dynamics and causal mechanisms in tissues poses fresh challenges and requires stronger integration of theoretical, computational and experimental frameworks.
Even though they seek a mechanistic framework again, they are employing intelligent design to get there: tackling questions, combining concepts, seeking causes. Will a "stronger integration of theoretical, computational and experimental frameworks" emerge by unguided material processes? Well, they seem to think cells did some remarkable things that way:

Multicellular organisms have evolved sophisticated strategies for cooperation between cells, such that a single genome encodes numerous specialized and complementary functional programs that maximize fitness when they work together. Compartmentalization at several levels -- cells, tissues and organs -- leads to functional diversification of cells and systems with the same underlying genome. Physical copies of the genome are embedded in cells to enable them to maintain a semi-autonomous decision-making process through the selective management of small-molecule, RNA and protein concentrations in cytoplasmic and nuclear compartments. Theoretically, this permits genomes to break the inherent symmetry that is imposed by the precise duplication of DNA in multicellular species. In particular, it facilitates cellular differentiation through the progressive acquisition of specific intracellular molecular compositions, enabling epigenetic mechanisms to emerge and implement cellular memory. At a higher level of organization, intercellular signalling, extracellular structures and environmental cues are used to form complex spatial structures in which cells (and their genomes) are physically embedded. This creates further levels of compartmentalization that encode complex and structured tissues.
More muddle. On the one hand, strategies, codes, programs, decision-making, cues, and signaling -- implying rationality. On the other hand, evolution, emergence, and physical stuff -- implying materialism. The authors mix oil and water, thinking the oil evolved out of the water and both cooked themselves into a soufflé.

After some diversion into issues like whether or not cell types can be classified in some Linnaean system, they take pride that science is beginning to move from descriptive accounts to predictive understanding:

Efforts towards the mapping and classification of cellular programs in humans and model organisms are becoming increasingly ambitious, aiming to provide a comprehensive atlas of the cell types and subtypes in organs and whole organisms. This opens up remarkable opportunities to move beyond descriptive studies of cell type and state and to develop mechanistic-predictive models of regulatory programs.

There's no question that mechanisms are involved in development. But to mix in another metaphor, they're focused on how billiard balls move and interact on the pool table but ignoring the expertise of the players. Even if the players are robots, and the shots are predictable and repeatable, you'll miss the talent of the game without considering the intelligent design that directs each ball into its own pocket in the correct sequence. The design employs the laws of nature, but does not emerge from them.

Monday, 23 January 2017

Piscine wonders v. Darwin.

"Happy Salmon" and Other Wonders of the Fish World's Migrating Marvel

Evolution News & Views


Salmon may not be happy when we eat them, but we're happy learning about them. So in a symbiotic relationship, we should take care of them so that these masters of migration can continue to inspire future generations of nature lovers. In Living Waters, Illustra Media tells the story of the salmon's amazing life cycle. What's new about these fish that swim thousands of miles at sea, yet find their native freshwater streams years later? Several discoveries have come to light since the film was released.

Drugged Salmon

One news article says that "Happy salmon swim better." Like people, salmon can get anxious. "Current research from Umeå University shows that the young salmon's desire to migrate can partly be limited by anxiety," this article says. Fear of the unknown downstream slows down the young migrants. But is this experiment ethical?

The research team studied how salmon migration was affected both in a lab, where salmon migrated in a large artificial stream, and in a natural stream outside of Umeå in Northern Sweden. In both environments, researchers found that salmon treated with anxiety medication migrated nearly twice as fast as salmon who had not been subjected to treatment. Several billion animals migrate yearly and the results presented here, i.e. that anxiety limits migration intensity, is not only important for understanding salmon migration but also for understanding migration in general. [Emphasis added.]
Well, maybe these salmon got a little too happy! The scientists may have only discovered that whatever they gave them made them reckless, like snowboarders on stimulants. Natural anxiety might serve to protect salmon from unnecessary risks. In any case, we do not recommend letting your kid give Ritalin to your goldfish as a science project.

Daredevil Salmon

You can imagine the stress on a salmon in this next story. Look at the video of 9-to-10 pound chum salmon swimming across a Washington state highway, right in front of an oncoming car. Why did the salmon cross the road? Because the scent of its natal stream took a shortcut over the highway after heavy rain, National Geographic explains. In the article you can also watch a bobcat take advantage of the opportunity.

Drowning Salmon

The last story was about too little water; this one on Phys.org is about too much. "How will salmon survive in a flooded future?" Fishery scientists, realizing how important salmon fishing is to the northwest economy (it's a $1 billion industry in Alaska), are worried that flood conditions in spawning grounds might scour the delicate salmon eggs out of their nests and wash them away downstream. The key to preserving their breeding grounds, they found, is keeping the area's rivers and floodplains pristine.

"Flood plains essentially act as pressure release valves that can dissipate the energy of large floods," says Sloat. "In fact, most salmon prefer to spawn in stretches of river with intact floodplains, which is probably no coincidence because these features of the landscape help protect salmon eggs from flood events."
Thermoregulation and Osmoregulation

The salmon's ability to change its gill physiology when going from freshwater to salt water and back is called osmoregulation (see how that's a great design story, here). Now, researchers at Oregon State University have found that northern sockeye salmon can regulate their temperature as well, "despite evolutionary inexperience." Imagine that! Maybe they took a class in fish school.

Sockeye salmon that evolved in the generally colder waters of the far north still know how to cool off if necessary, an important factor in the species' potential for dealing with global climate change....
Research by Oregon State University revealed that sockeyes at the northern edge of that range, despite lacking their southern counterparts' evolutionary history of dealing with heat stress, nevertheless have an innate ability to "thermoregulate."

The salmon regulate their body heat by finding water just right for their needs. Sounds simple, doesn't it?

While it may seem obvious that any fish would move around to find the water temperature it needed, prior research has shown thermoregulation is far from automatic -- even among populations living where heat stress is a regular occurrence.
By monitoring tagged fish, the researchers found that the salmon knew how to cool off at tributary plumes or in deeper water. It ends up saving them a lot of energy to stay at their optimum "Goldilocks" temperature -- not too hot, not too cold. The scientists never do explain how the sockeye salmon learned to do this despite "evolutionary inexperience."

Diving Deeper into the Salmon Nose

Fans of Living Waters probably remember the dramatic animated dive into a salmon's nostrils (see it here). Recently, we added new information about turbines in the nose. Now, we can learn about another wonder at the molecular level. Salmon and other fish, as well as mammals, have a molecular amplifier involving chloride ions. Stephan Frings, a molecular biologist at Heidelberg University, talks about the discovery in the Proceedings of the National Academy of Sciences. First, let's hear him wax ecstatic about olfaction in general.

The sense of smell and its astonishing performance pose biologists with ever new riddles. How can the system smell almost anything that gets into the nose, distinguish it from countless other odors, memorize it forever, and trigger reliably adequate behavior? Among the senses, the olfactory system always seems to do things differently. The olfactory sensory neurons (OSNs) in the nose were suggested to use an unusual way of signal amplification to help them in responding to weak stimuli. This chloride-based mechanism is somewhat enigmatic and controversial. A team of sensory physiologists from The Johns Hopkins University School of Medicine has now developed a method to study this process in detail. Li et al. demonstrate how OSNs amplify their electrical response to odor stimulation using chloride currents.
The mammalian olfactory system seems to have the capacity to detect an unlimited number of odorants. To date, nobody has proposed a testable limit to the extent of a dog's olfactory universe. Huge numbers from 10^12 to 10^18 of detectable odorants emerge from calculations and estimations, but these are basically metaphorical substitutes for the lack of visible limits to chemical variety among odorous compounds. Dogs can cope with their odor world by using just 800 different odorant receptor proteins, a comparably tiny set of chemical sensors, expressed -- one receptor type per cell -- in 100 million OSNs in the olfactory epithelium. Olfactory research has revealed how it is possible to distinguish 10^18 odorants with 800 receptors. To do this, the receptors have to be tolerant with respect to odorant structure. After all, the huge numbers suggest that an average receptor must be able to bind millions of different odorants. Low-selectivity odorant receptors are, therefore, indispensable for olfaction. The olfactory system nevertheless extracts high-precision information from an array of low-precision receptors by looking at the activity of all its OSNs simultaneously. The combined activity pattern of all neurons together provides the precise information about odor quality that each individual OSN cannot deliver. Thus, combinatorial coding is the solution to the problem of low-selectivity receptors.
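The combinatorial arithmetic behind that claim is easy to check with a few lines. The 800-receptor figure comes from the passage; the number of receptors activated per odor is an illustrative assumption:

```python
from math import comb

receptors = 800   # odorant receptor types in a dog (from the passage)
active = 10       # receptors activated by one odor (illustrative assumption)

# If each odor lights up a different subset of receptors, the number of
# possible activation patterns grows combinatorially:
patterns = comb(receptors, active)

# Even this modest combinatorial code dwarfs the 10^18 odorant estimate:
print(f"{patterns:.2e} patterns vs 1e18 estimated odorants")
```

Allowing each receptor graded rather than all-or-nothing activity only increases the count, which is presumably how 800 low-selectivity sensors can span a chemical space with no visible limit.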

If you are not sufficiently boggled by that, consider that the incoming signals are very weak. A typical OSN (the only neuron exposed to the environment) has only a millisecond to sense an odorant. Because that is too short to trigger the receptor, it has to integrate 35 sensations in 50 milliseconds. To increase their sensitivity, the cilia at the tips of the OSNs -- where the action takes place -- charge their receptors with chloride ions. These ions boost depolarization and promote electrical excitation, amplifying the output signal. Here's where salmon come in:

Interestingly, the components of this mechanism were discovered in freshwater fish, amphibian, reptiles, birds, and mammals, indicating that the interplay of cation currents and chloride currents is important for OSN function throughout the animal kingdom.
A recent study appears to confirm this hypothesis in some cases. You, too, may be "smelling better with chloride." (Here, have some salt on your salmon fillet.) But Frings admits, "The relation between OSN activity at the onset and odor perception at the conclusion of signal processing is far from being understood." The olfactory system is "very different in virtually all respects" from the other senses, like vision and hearing.

First, thousands of OSN axons -- all with the same odorant receptor protein -- converge onto a common projection neuron in the olfactory bulb. This extreme convergence shapes the signal that enters the brain, and we still have to find out how ORN electrical amplification contributes to this process. Second, when the olfactory information enters the piriform cortex, the largest cortical area in the olfactory system, it enters a world quite different from the primary visual cortex. Extensive horizontal communication between the principal neurons and continuous exchange with multiple other brain regions turn the original afferent signal into highly processed information. Finally, the way to perception leads through brain regions that establish, evaluate, and use olfactory memory. Thus, much signal processing has to take place before a mouse [or a salmon, for that matter] performs in an operant conditioning experiment.
Next time you go fishing, take a second to look into the eyes and nose of your catch. Out of reverence, you may just want to throw it back.

Sunday, 22 January 2017

Neville Chamberlain was right to seek peace in his time? Pros and cons.

The original technologist continues to school humankind's johnny-come-latelies.

The World's Ideal Storage Medium Is "Beyond Silicon"
Evolution News & Views

The world is facing a data storage crisis. As information proliferates in everything from YouTube videos to astronomical images to emails, the need for storing that data is growing exponentially. If trends continue, data centers will have used up the world's microchip-grade silicon before 2040.

But there is another storage medium made of abundant atoms of carbon, hydrogen, oxygen, nitrogen, and phosphorus. It's called DNA. And you wouldn't need much of it. The entire world's data could be stored in just one kilogram of the stuff. So says Andy Extance in an intriguing article in Nature, "How DNA could store all the world's data."

For Nick Goldman, the idea of encoding data in DNA started out as a joke.
It was Wednesday 16 February 2011, and Goldman was at a hotel in Hamburg, Germany, talking with some of his fellow bioinformaticists about how they could afford to store the reams of genome sequences and other data the world was throwing at them. He remembers the scientists getting so frustrated by the expense and limitations of conventional computing technology that they started kidding about sci-fi alternatives. "We thought, 'What's to stop us using DNA to store information?'"

Then the laughter stopped. "It was a lightbulb moment," says Goldman, a group leader at the European Bioinformatics Institute (EBI) in Hinxton, UK. [Emphasis added.]

Since that day, several companies have begun turning this "joke" into serious business. The Semiconductor Research Corporation (SRC) is backing it. IBM is getting on board. And the Defense Department has hosted workshops with major corporations, which is sure to lead to funding. The UK is already funding research into next-generation approaches to DNA storage.

When you look at Extance's chart, it's easy to see why DNA is "one of the strongest candidates yet" to replace silicon as the storage medium of the future. The read-write speed is about 30 times faster than your computer's hard drive. The expected data retention is 10 times longer. The power usage is ridiculously low, almost a billion times less than flash memory. And the data density is an astonishing 10^19 bits per cubic centimeter, a thousand times more than flash memory and a million times more than a hard disk. At that density, the entire world's data could fit in one kilogram of DNA.
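Those figures are easy to sanity-check. Here is a rough capacity calculation using the article's volumetric density; the mass density of dry DNA is our assumption, and published sequence-level estimates (two bits per nucleotide) run far higher:

```python
# Density figure from the article; DNA mass density is an assumption.
BITS_PER_CM3 = 1e19        # article's volumetric data density for DNA
DNA_G_PER_CM3 = 1.1        # rough mass density of dry DNA (assumption)

cm3_per_kg = 1000 / DNA_G_PER_CM3
bits_per_kg = BITS_PER_CM3 * cm3_per_kg
zettabytes_per_kg = bits_per_kg / 8 / 1e21

print(f"~{zettabytes_per_kg:.1f} ZB per kilogram of DNA")
```

On these numbers a kilogram holds on the order of a zettabyte; counting two bits per nucleotide gives figures hundreds of times higher, which is where the "world's data in a kilogram" claim comes from.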

As with any new technology, baby steps are slow. Technicians face challenges of designing DNA strands to encode data, searching for it, and reading it back out reliably. How does one translate the binary bits in silicon into the A, C, T, and G of nucleic acids? Can DNA strands be manufactured cheaply enough? How can designers proofread the input?

Living things, though, have already solved these issues. After all, "a whole human genome fits into a cell that is invisible to the naked eye," Extance says. As for speed, DNA is accessed by numerous molecular machines simultaneously throughout the nucleus that know exactly where to start and stop reading. Genomic machinery in the cell proofreads errors to one typo per hundred billion bases, as Dr. Lee Spetner notes in his book Not by Chance! That's equivalent, he says, to the lifetime output of about 100 professional typists.
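Spetner's figure implies remarkably few errors per copy. A quick illustrative calculation (the genome size is our assumption, not a figure from his book):

```python
error_rate = 1 / 1e11      # one typo per hundred billion bases (quoted above)
genome_bases = 3.2e9       # approximate human genome size (assumption)

# Expected copying errors per full genome replication:
expected_errors = genome_bases * error_rate

print(f"~{expected_errors:.3f} expected errors per genome copy")
```

In other words, at that fidelity most genome copies come out with no errors at all, roughly one copy in thirty picking up a single typo.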

Life shows that it is possible in principle to overcome these challenges. That gives hope to the engineers on the cutting edge of DNA storage. Already, several experimenters have succeeded in encoding information in DNA. By 2013, EBI had encoded Shakespeare's sonnets and Martin Luther King's "I have a dream" speech. IBM and Microsoft topped that 739-kilobyte effort shortly after with 200 megabytes of storage. As far back as 2010, Craig Venter's lab encoded text within the genome of his synthetic bacterium, as Casey Luskin reported here. Everything alive demonstrates that DNA is already the world's most flexible and useful storage medium. We just need to learn how to harness the technology.

Goldman's EBI lab and other labs are thinking of ways to ensure accuracy. One method converts bits into "trits" (combinations of 0, 1, and 2) in an error-correcting scheme. Engineers are sure to think of robust solutions, just like the pioneers of digital computers did with parity bits and other mechanisms to guarantee accurate transmission over wired and wireless communications.
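The trit idea can be sketched in a few lines. Assuming, as in published descriptions of the EBI approach, that each trit selects one of the three bases that differ from the previous base — so no base ever repeats, avoiding the runs that synthesis and sequencing handle poorly — a toy encoder might look like this (the EBI lab's actual tables and error-correcting layers differ; this is illustrative, not their code):

```python
# Toy rotating trit code: each trit in {0, 1, 2} picks one of the three
# bases that differ from the previous base, so adjacent bases never repeat.
BASES = "ACGT"

def trits_to_dna(trits, prev="A"):
    strand = []
    for t in trits:
        choices = [b for b in BASES if b != prev]  # three legal next bases
        prev = choices[t]                          # trit selects among them
        strand.append(prev)
    return "".join(strand)

def dna_to_trits(strand, prev="A"):
    trits = []
    for base in strand:
        choices = [b for b in BASES if b != prev]
        trits.append(choices.index(base))          # invert the selection
        prev = base
    return trits

s = trits_to_dna([0, 2, 1, 1, 0])
print(s)  # → CTCGA (no two adjacent bases identical)
assert dna_to_trits(s) == [0, 2, 1, 1, 0]
```

The design choice is the point: by spending a little capacity (three symbols per position instead of four), the code buys a physical guarantee that the strand contains no homopolymer runs, much as parity bits spend a little bandwidth to buy error detection.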

How long could DNA storage last? That's another potential advantage -- better than existing technology by orders of magnitude:

...these results convinced Goldman that DNA had potential as a cheap, long-term data repository that would require little energy to store. As a measure of just how long-term, he points to the 2013 announcement of a horse genome decoded from a bone trapped in permafrost for 700,000 years. "In data centres, no one trusts a hard disk after three years," he says. "No one trusts a tape after at most ten years. Where you want a copy safe for more than that, once we can get those written on DNA, you can stick it in a cave and forget about it until you want to read it."
With these advantages of density, stability, and durability, DNA is creating a burgeoning field of research. Worries about random access are already being overcome. With techniques like PCR and CRISPR/Cas9, we can expect that any remaining challenges will be solved. Look at what our neighbors at the University of Washington recently achieved:

As a demonstration, the Microsoft-University of Washington researchers stored 151 kB of images, some encoded using the EBI method and some using their new approach, in a single pool of strings. They extracted three -- a cat, the Sydney Opera House and a cartoon monkey -- using the EBI-like method, getting one read error that they had to correct manually. They also read the Sydney Opera House image using their new method, without any mistakes.
Market forces drive innovation. The promise of DNA storage is so attractive, funding and capital are sure to follow. DNA synthesizing machines will come. Random-access machines with efficient search algorithms will be invented. Successes and new products will drive down prices. As with Moore's Law for silicon, the race for better DNA storage products will accelerate once it moves from lab to market. Extance concludes:

Goldman is confident that this is just a taste of things to come. "Our estimate is that we need 100,000-fold improvements to make the technology sing, and we think that's very credible," he says. "While past performance is no guarantee, there are new reading technologies coming onstream every year or two. Six orders of magnitude is no big deal in genomics. You just wait a bit."
So, here we have the best minds in information technology urgently trying to catch up to storage technologies that have been in use since life began. They're only a few billion years late to the party. The implications are as profound as they are intuitive.

Speaking of intuition, Douglas Axe in his recent book Undeniable: How Biology Confirms Our Intuition That Life Is Designed defines a quality he calls functional coherence: "the hierarchical arrangement of parts needed for anything to produce a high-level function -- each part contributing in a coordinated way to the whole." He writes:

No high-level function is ever accomplished without someone thinking up a special arrangement of things and circumstances for that very purpose and then putting those thoughts into action. The hallmark of all these special arrangements is high-level functional coherence, which we now know comes only by insight -- never by coincidence.

Scientists are seeking to match the same level of functional coherence that can be observed every second in the cells of our own bodies, and of the simplest microbes. The conclusion to draw from this hardly needs to be stated.