Search This Blog

Tuesday, 13 June 2017

Mammalian hearing v. Darwin.

Mammals Compute Sound Timing in the Microsecond Range
Evolution News @DiscoveryCSC

At a basic level, we all know that two ears give us the ability to detect the direction of a sound. Cover one ear, and it’s hard to tell where a sound is coming from. Uncover it, and we hear in stereo again. But when you look into the physics of sound localization, the requirements are stringent.

Sound waves coming from the left hit your left eardrum only microseconds (millionths of a second) before they hit the right eardrum. Your ears must not only be able to capture that tiny difference in arrival time, but preserve the information through noisy channels on the way to the brain. And they must be able to do that continuously. Consider an ambulance siren moving left to right; the inter-aural time difference (ITD) is constantly changing. Your ears need to keep up with the microsecond-by-microsecond changes as they occur, without the prior information getting swamped by the new information.
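To put rough numbers on that, here is a minimal back-of-the-envelope sketch in Python. The d/c formula is the standard spherical-head approximation for the maximum interaural delay, and the head widths are assumed round figures for illustration, not measurements from the paper.

import math

def max_itd_seconds(head_width_m, speed_of_sound_m_s=343.0):
    # Largest possible interaural time difference: sound arriving straight
    # from one side travels the full head width farther to the other ear.
    return head_width_m / speed_of_sound_m_s

def itd_seconds(head_width_m, angle_deg, speed_of_sound_m_s=343.0):
    # Spherical-head approximation: ITD ~ (d / c) * sin(angle from straight ahead)
    return (head_width_m / speed_of_sound_m_s) * math.sin(math.radians(angle_deg))

# Assumed, illustrative head widths
for label, d in [("human", 0.20), ("gerbil", 0.03)]:
    print(label, round(max_itd_seconds(d) * 1e6), "microseconds at most")

Even the largest human ITD is well under a millisecond; for a small rodent it shrinks to tens of microseconds, which is why the gerbil experiments described below are such a stringent test.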

Now consider being in an auditorium, listening to an orchestra with your eyes closed. You can tell where each instrument is located, even when they are playing together, just by the ITDs from each player. How amazing is that?

This can only work if the auditory system maintains the information all the way to the brain. The brain receives the timing differences after a delay: first, the eardrum converts pressure waves to membrane vibrations, which trigger mechanical movements of the middle ear bones (ossicles), which convert the mechanical motions into fluid waves in the cochlea, which converts the fluid waves to electrical impulses in the neurons. These things take time, but we’re still not there.

The signal travelling along each axon must also cross synapses, where the electrical information is converted to chemical information and back again in the next neuron. This is getting very complicated! There’s bound to be some noise in the transmission pathway. How can the ITD at the outer ear be maintained all the way to the brain through these multiple energy conversions?

Two neurobiologists from the Ludwig-Maximilian University of Munich, appreciating the problem of maintaining sound localization information, decided to run experiments on mice and gerbils. Think how much closer together those ears are than human ears! The smaller inter-aural distance compounds the problem, tightening the requirements even more. Under the news headline “Auditory perception: where microseconds matter,” Drs. Grothe and Pecka announce what they found.

Gerbils (who depend on sound localization more than mice) use multiple mechanisms to maintain accurate ITD information in their sound transmission apparatus. The researchers explain the challenge:

In the mammalian auditory system, sound waves impinging on the tympanic membrane of the ear are transduced into electrical signals by sensory hair cells and transmitted via the auditory nerve to the brainstem. The spatial localization of sound sources, especially low-frequency sounds, presents the neuronal processing system with a daunting challenge, for it depends on resolving the difference between the arrival times of the acoustic stimulus at the two ears. The ear that is closer to the source receives the signal before the contralateral ear. But since this interval – referred to as the interaural timing difference (ITD) — is on the order of a few microseconds, its neuronal processing requires exceptional temporal precision. [Emphasis added.]
Grothe and Pecka, along with seven other colleagues, published the results of their research in an open-access paper in the Proceedings of the National Academy of Sciences (PNAS). They report “a specific combination of mechanisms, which plays a crucial role in ensuring that auditory neurons can measure ITDs with the required accuracy.”

Back in 2015, the team observed structural modifications of the myelin sheaths wrapping the auditory nerves. The axons of these neurons, they also noted, were particularly thick. Discontinuities in the sheaths, coupled with the axon thickness, seemed to turbo-charge the neurons “to enable rapid signal transmission.” That’s necessary for sound localization, but it’s not enough. If the synapses introduce additional varying delays, you’ll just get faulty information transmitted faster. There must be something else going on. Here’s what they found this time:

Before cells in the auditory brainstem can determine the ITD, the signals from both ears must first be transmitted to them via chemical synapses that connect them with the sensory neurons. Depending on the signal intensity, synapses themselves can introduce varying degrees of delay in signal transmission. The LMU team, however, has identified a pathway in which the synapses involved respond with a minimal and constant delay. “Indeed, the duration of the delay remains constant even when rates of activation are altered, and that is vital for the precise processing of interaural timing differences,” Benedikt Grothe explains.
Specifically, the team discovered “stable synaptic delays” in the transmission neurons, achieved by a mechanism not previously seen in other neural circuits. Without a unique “inhibitory pathway” described in the paper, synapse transmission times would vary under continuous excitation, wiping out the ITD information. (This can happen, for instance, as a result of changes in the abundance of the vesicles needed to carry neurotransmitter molecules across a synapse.)

Functionally, stable synaptic delays seem to represent a specific adaptation for faithful ITD processing, because it would prevent fluctuations in the relative timing of direct excitation and indirect inhibition for responses to onsets vs. ongoing sounds in the range of tens to hundreds of microseconds. Such fluctuations may be negligible for most neuronal computations, but not for microsecond ITD processing of low-frequency sounds.
We now know the challenge: something needs to keep these synapses in a consistent state of readiness, so that the crossing time delays are constant. One method might be buffering, so that enough vesicles are always at the ready. That’s one mechanism they observed, but not the only one. The solution also involves computation. There are two nuclei at the receiving end, the lateral superior olive (LSO) and the medial superior olive (MSO), that share information. The LSO deals with sound levels and is less stringent about timing. The MSO, however, requires precise time information to calculate ITDs. By comparing one another’s inputs, the LSO and MSO can “detect coincidences between inputs from the two ears.” The authors note that another “striking shared structural feature is the contralateral inhibitory pathway that is specialized for speed and reliability.”
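The MSO’s “coincidence detection” can be pictured as a cross-correlation: slide one ear’s signal against the other’s over a range of internal delays and keep the delay at which they line up best. The toy sketch below illustrates that idea only; the sine-wave stimulus, sample rate, and 150-microsecond delay are invented, and nothing here models the actual circuitry in the paper.

import numpy as np

fs = 200_000                     # 200 kHz sampling: one sample = 5 microseconds
t = np.arange(0, 0.02, 1 / fs)   # 20 ms of signal
true_itd = 150e-6                # assumed interaural delay of 150 microseconds

left = np.sin(2 * np.pi * 500 * t) + 0.1 * np.random.randn(t.size)
right = np.roll(left, int(round(true_itd * fs)))   # right ear hears it slightly later

# Try internal delays from -500 to +500 microseconds and keep the best match
lags = np.arange(-100, 101)
scores = [float(np.dot(left, np.roll(right, -lag))) for lag in lags]
best_lag = int(lags[np.argmax(scores)])
print("estimated ITD:", best_lag / fs * 1e6, "microseconds")   # ~150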

That’s still not all. Two other structures upstream from the MSO are involved, but they cannot inhibit too much, or they, too, will introduce noise. So they, too, are finely tuned:

Recently we showed that the inhibitory pathway conquers this challenge via a two- to threefold thicker axon diameter of GBCs [globular bushy cells] compared with the spherical bushy cells, which comprise the excitatory input. Moreover, we revealed the presence of a dramatic decrease of internode length toward the terminal region in both fiber classes.
The details of these specializations need not concern us here. Suffice it to say that multiple mechanisms ensure that ITD information is preserved from eardrum to brain: structural properties of axon diameter and sheathing patterns, buffering of vesicles, and computation of differences between the inputs arriving at the auditory brainstem. No other part of the body requires this level of timing precision, and no other circuit achieves it.

For a real-world application of this need for precision, consider the echolocating bat. This creature darts about in the air, making sudden turns every second, listening to echoes from its high-frequency chirps. Research at Johns Hopkins finds that bats respond to a noisy environment by turning up the volume. We humans do that, too, but bats do it in 30 milliseconds: 10 times faster than the blink of an eye! That means that these little flying mammals, with ears much closer together than ours, are able to respond to the sound location information calculated from their ITDs extremely fast, while simultaneously operating their wings in a constantly changing auditory environment.

Our brief look into the complexity of auditory localization in mammals provides a good example of not only Behe’s irreducible complexity, but also what Douglas Axe calls functional coherence, “the hierarchical arrangement of parts needed for anything to produce high-level function — each part contributing in a coordinated way to the whole” (Undeniable, p. 144). None of these parts (MSO, myelin, synapses) perform sound localization individually, but collectively, they do.


We could explore the hierarchy further by looking more closely at how molecular machines within the neurons participate in the “functional whole” of sound localization. Taking the wide-angle view, we see how all the lower levels in the hierarchy contribute to the bat’s amazing ability to catch food on the wing. Functional coherence is not just beyond the reach of chance (Axe, p. 160); it provides positive evidence for intelligent design. In all our uniform human experience, only minds are capable of engineering complex, hierarchical systems exhibiting functional coherence. The complexity of this one circuit, sound localization, makes that loud and clear.

Good luck with that.

Falsify Intelligent Design? Try Simulating the Cambrian Explosion Digitally
David Klinghoffer | @d_klinghoffer


Want to falsify the theory of intelligent design? Here’s one way.

Show with a convincing computer simulation (no cheating allowed) that the infusion of biological information seen in the Cambrian explosion could occur absent the intervention of a guiding intelligence: artificial life in the variety we see in the Cambrian event, but without design.

Researchers have tried, in multiple cases, as Introduction to Evolutionary Informatics author Winston Ewert tells biologist Ray Bohlin on a new episode of ID the Future. But each time, the simulations hit a “complexity barrier,” as the scientists themselves concede, and fail. It’s a fascinating conversation. Listen to it here, or download it here.

Ewert calls it “the mystery of the missing digital Cambrian explosion,” observing that “something is missing from all of the different artificial life simulations.” There’s a secret ingredient, and guess what that is? Intelligent design.

Monday, 12 June 2017

On the publish or perish syndrome.

Peer-Review and the Corruption of Science
Jonathan M. September 13, 2011 6:00 AM


The Guardian features an interesting opinion column by the renowned British pharmacologist David Colquhoun. The article bears the intriguing headline, "Publish-or-perish: Peer review and the corruption of science." The author laments that "Pressure on scientists to publish has led to a situation where any paper, however bad, can now be printed in a journal that claims to be peer-reviewed."

Colquhoun explains,

The blame for this sad situation lies with the people who have imposed a publish-or-perish culture, namely research funders and senior people in universities. To have "written" 800 papers is regarded as something to boast about rather than being rather shameful. University PR departments encourage exaggerated claims, and hard-pressed authors go along with them.
The author proceeds to list a few examples of the failure of the peer-review system to ensure robust and accurate journal content. He argues that part of the reason for the lapse in academic publication standards is the pressure on academics to publish many papers. If a scientist publishes at an extraordinary rate, that should call his credibility as a diligent and focused researcher into question rather than enhance it.
Those of us who follow the professional literature (or even the blogosphere) may recall the Nowak et al. (2010) paper that appeared in Nature last year. It was regarded by many evolutionary biologists (most notably the University of Chicago's Jerry Coyne) as a "misguided attack on kin selection."

Coyne noted,

If the Nowak et al. paper is so bad, why was it published? That's obvious, and is an object lesson in the sociology of science. If Joe Schmo et al. from Buggerall State University had submitted such a misguided paper to Nature, it would have been rejected within an hour (yes, Nature sometimes does that with online submissions!). The only reason this paper was published is because it has two big-name authors, Nowak and Wilson, hailing from Mother Harvard. That, and the fact that such a contrarian paper, flying in the face of accepted evolutionary theory, was bound to cause controversy.
I have often read papers, published in reputable journals, that I thought should not have passed through peer-review. Consider, for example, this paper, published in PLoS Biology in May of last year. Indeed, the esteemed atheist blogger PZ Myers wrote about it in a blog post headlined "Junk DNA is still junk" (to which I responded briefly here). The paper erroneously concluded "Overall, ...we find that most of the genome is not appreciably transcribed. [emphasis added]"
There is actually a pretty good response to this article here. The methodology of the PLoS Biology article is fatally flawed, for the authors use a program called "RepeatMasker," which screens out all the repetitive DNA. But given that about 50% of our genome consists of repetitive DNA, the conclusions they draw seem a little disingenuous, to say the least! In fact, the official description of RepeatMasker itself states that "On average, almost 50% of a human genomic DNA sequence currently will be masked by the program."

As if that weren't bad enough, the researchers then base their results "primarily on analysis of PolyA+ enriched RNA." But we've known since 2005 that, in humans, PolyA- sequences are twice as abundant as PolyA+ transcripts. So the authors not only exclude half the genome from their research, but also completely ignore two thirds of the RNA in what remains!

By citing that paper, PZ Myers didn't do his own credibility any favors. Myers's point is a false one anyway, because it is known that even DNA that is not transcribed can play important roles.

Then there was, of course, that recent paper in PNAS telling us that "There's plenty of time for evolution" (also paraded by Myers). The substance of the argument presented in this paper was terrible (for some of the reasons why, see here and here). Reading that paper when it came out, I was frankly astonished that it was able to pass through peer-review.

Back in June of 2009, a paper appeared in PNAS by Ghosh et al. purporting to demonstrate the production of endospores in the genus Mycobacterium (which includes many pathogens such as M. tuberculosis and M. leprae). Traag et al. (2010) document the problems with the paper:

Here, we report that the genomes of Mycobacterium species and those of other high G+C Gram-positive bacteria lack orthologs of many, if not all, highly conserved genes diagnostic of endospore formation in the genomes of low G+C Gram-positive bacteria. We also failed to detect the presence of endospores by light microscopy or by testing for heat-resistant colony-forming units in aged cultures of M. marinum. Finally, we failed to recover heat-resistant colony-forming units from frogs chronically infected with M. marinum. We conclude that it is unlikely that Mycobacterium is capable of endospore formation.
As ID proponents know only too well, the peer-review system has not only become corrupted by allowing substandard content into the academic market; it has also been turned into a gate-keeping system for imposing ideological conformity. Recently, an editor resigned over the publication of a seminal article by Roy Spencer and William Braswell. The paper's purpose was to demonstrate that one of the feedbacks the Intergovernmental Panel on Climate Change has been treating as positive is really negative. You can read Roy Spencer's defense of his paper here.
In a similar incident in 2004, Smithsonian Institution evolutionary biologist Richard Sternberg was punished and pressured to resign following the publication of a pro-ID article by Stephen C. Meyer in a journal of which Sternberg was the editor.

In still another incident, a recent pro-ID paper authored by mathematician Granville Sewell was retracted from publication (after it had been subjected to peer review and approved) as the result of a complaint from a blogger writing to the journal's editor. The journal, Applied Mathematics Letters, has since apologized and paid $10,000 in compensation to Dr. Sewell.

What's to be done? Colquhoun makes the following recommendation:

There is an alternative: publish your paper yourself on the web and open the comments. This sort of post-publication review would reduce costs enormously, and the results would be open for anyone to read without paying. It would also destroy the hegemony of half a dozen high-status journals.
And, indeed, this is exactly how the Biologic Institute-associated journal Bio-Complexity operates. This peer-reviewed journal, dedicated to discussions surrounding the respective scientific merits of neo-Darwinian evolution and intelligent design, is published freely on the web and is open for comments and published responses, hence allowing -- even encouraging -- post-publication review.
Colquhoun further suggests,

...it would be essential to allow anonymous comments. Most reviewers are anonymous at present, so why not online? Second, the vast flood of papers that make the present system impossible should be stemmed. I'd suggest scientists should limit themselves to an average of two original papers a year. They should also be limited to holding one research grant at a time. Anyone who thought their work necessitated more than this would have to be scrutinized very carefully. It's well known that small research groups give better value than big ones, so that should be the rule.
The benefit of such a system, as Colquhoun notes, is that "With far fewer papers being published, reviewers, grant committees and promotion committees might be able to read the papers, not just count them."

Colquhoun is to be commended. The goal of the peer-review system ought to be to ensure factual accuracy and to highlight necessary revisions and corrections. Its goal should not be the enforcement of ideological and paradigmatic conformity, nor the upholding of "consensus science." Post-publication review ought to be encouraged, and journal content should more often be made open access.

File under "well said"

Only a virtuous people are capable of freedom. As nations become corrupt and vicious, they have more need of masters.

BENJAMIN FRANKLIN,


Still yet more on Maths v. Darwin.

Top Ten Questions and Objections to Introduction to Evolutionary Informatics
Robert J. Marks II


Five years ago, Gregory Chaitin, a co-founder of the fascinating and mind-bending field of algorithmic information theory, offered a challenge:1

The honor of mathematics requires us to come up with a mathematical theory of evolution and either prove that Darwin was wrong or right!
In Introduction to Evolutionary Informatics,2 co-authored by William A. Dembski, Winston Ewert, and me, we answer Chaitin’s challenge in the negative: There exists no model successfully describing undirected Darwinian evolution. Period. By “model,” we mean the definitive simulations or foundational mathematics required of a hard science.

We show that no meaningful information can arise from an evolutionary process unless that process is guided. Even when guided, the degree of evolution’s accomplishment is limited by the expertise of the guiding information source — a limit we call Basener’s ceiling. An evolutionary program whose goal is to master chess will never evolve further and offer investment advice.


Here I answer ten frequently posed questions about and objections to Introduction to Evolutionary Informatics.
1. Why yet another book dissing Darwinian evolution?

Solomon was right. “Of making many books there is no end, and much study wearies the body.”3 There are gobs of books written about evolution, pro and con. Many are excellent. So what’s so important about Introduction to Evolutionary Informatics? On the topic of evolution, the conclusion is in: There exists no model successfully describing undirected Darwinian evolution. Hard sciences are built on foundations of mathematics or definitive simulations. Examples include electromagnetics, Newtonian mechanics, geophysics, relativity, thermodynamics, quantum mechanics, optics, and many areas in biology. Those hoping to establish Darwinian evolution as a hard science with a model have either failed or inadvertently cheated. Their models contain guidance mechanisms to land the airplane squarely on the target runway despite stochastic wind gusts. Not only can the guiding assistance be specifically identified in each proposed evolution model, but its contribution to the success can be measured, in bits, as active information.

And, as covered in Introduction to Evolutionary Informatics, we suspect no model will ever exist to substantiate the claims of undirected Darwinian evolution.
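For readers who want a feel for that bookkeeping, here is a minimal sketch of the usual definition: if blind search finds the target with probability p and the assisted search finds it with probability q, the assistance supplied log2(q/p) bits of active information. The probabilities below are made-up illustrations, not figures from the book.

import math

def active_information_bits(p_blind, q_assisted):
    # Bits of help the assisted search received relative to blind search
    return math.log2(q_assisted / p_blind)

# Assumed toy numbers: blind search succeeds 1 time in a million,
# a fitness-guided search succeeds 1 time in 50.
print(round(active_information_bits(1e-6, 1 / 50), 2), "bits of active information")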

2. But Darwinian evolution is so complicated, it can’t be modeled!

If this objection is true, we have reached the same conclusion by different paths: There exists no model successfully describing undirected Darwinian evolution.

3. You model evolution as a search. Evolution isn’t a search.

We echo Billy Joel: “We didn’t start the fire!” Models of Darwinian evolution, Avida and EV included, are searches with a fixed goal. For EV, the goal is finding specified nucleotide binding sites. Avida’s goal is to generate an EQU logic function. Other evolution models that we examine in Introduction to Evolutionary Informatics likewise seek a prespecified goal.
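For concreteness, Avida’s EQU task is bitwise logical equivalence on 32-bit inputs: an output bit is 1 exactly where the two input bits agree. Here is a minimal statement of that target function (my own illustration, not Avida’s source code):

def equ32(a: int, b: int) -> int:
    # Bitwise EQU (logical equivalence): each output bit is 1 where the inputs match
    return ~(a ^ b) & 0xFFFFFFFF

# A digital organism "performs EQU" when its output matches this for its two inputs
assert equ32(0b1010, 0b1100) == 0xFFFFFFF9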

The evolution software Avida is of particular importance because Robert Pennock, one of the co-authors of the first paper describing Avida,4 gave testimony at the Darwin-affirming Kitzmiller et al. v. Dover Area School District bench trial. Pennock’s testimony contributed to Judge Jones’s ruling that teaching intelligent design in public school science classes violates the Establishment Clause of the United States Constitution. Pennock testified, “In the [Avida computer program] system, we’re not simulating evolution. Evolution is actually happening.” If that is true, then Avida, and by this logic evolution itself, is a guided search with a specified target, bubbling over with active information supplied by the programmers.

The most celebrated attempt we’re aware of at an evolution model without a goal is TIERRA. In an attempt to recreate something like the Cambrian explosion on a computer, the programmer created what was thought to be an information-rich environment where digital organisms would flourish and evolve. According to TIERRA’s ingenious creator, Thomas Ray, the project failed and was abandoned. There has to date been no success in open-ended evolution in the field of artificial life.5

Therefore, there exists no model successfully describing undirected Darwinian evolution.

4. You are not biologists. Why should anyone listen to you about evolution?

Leave aside that this question reeks of the genetic fallacy used in debate to steer conversation away from the topic at hand and down a rabbit trail of credential defense. The question is sincere, though, and deserves an answer. Besides, it lets me talk about myself.

The truth is that computer scientists and engineers know a lot about evolution and evolution models.

As we outline in Introduction to Evolutionary Informatics, proponents of Darwinian evolution became giddy about computers in the 1960s and 70s. Evolution was too slow to demonstrate in a wet lab, but thousands upon thousands of generations of evolution can be put in the bank when Darwinian evolution is simulated on a computer. Computer scientists and engineers soon realized that evolutionary search might assist in making computer-aided designs. In Introduction to Evolutionary Informatics, we describe how NASA engineers used guided evolutionary programs to design antennas resembling bent paper clips that today are floating and functioning in outer space.

Here’s my personal background. I first became interested in evolutionary computation late last century when I served as editor-in-chief of the IEEE6 Transactions on Neural Networks.7 I invited top researchers in the field, David Fogel and his father Larry Fogel, to be the guest editors of a special issue of my journal dedicated to evolutionary computing.8 The issue was published in January 1994 and led to David founding the IEEE Transactions on Evolutionary Computation,9 which today is the top engineering/computer science journal dedicated to the topic.

My first conference paper using evolutionary computing was published a year later,10 and my first journal publication on evolutionary computation appeared in 1999.11 That was then. More recently my work, funded by the Office of Naval Research, involves simulated evolution of swarm dynamics motivated by the remarkable self-organizing behavior of social insects. Some of the results were excitingly unexpected,12 including individual members’ suicidal sacrifice to extend the overall lifetime of the swarm.13 Evolving digital swarms is intriguing, and we have a whole web site devoted to the topic.14

So I have been playing in the evolutionary sandbox for a long time and have dirt under my fingernails to prove it.

But is it biology? In reviewing our book for the American Scientific Affiliation (ASA), my friend Randy Isaac, former executive director of the ASA, said of our book, “Those seeking insight into biological or chemical evolution are advised to look elsewhere.”15 We agree! But if you are looking for insights into the models and mathematics thus far proposed by supporters of Darwinian evolution that purport to describe the theory, Introduction to Evolutionary Informatics is spot on. And we show there exists no model successfully describing undirected Darwinian evolution.

5. You use probability inappropriately. Probability theory cannot be applied to events that have already happened.

In the movie Dumb and Dumber, Jim Carrey’s character, Lloyd Christmas, is brushed off by the beautiful Mary “Samsonite” Swanson when told his chances with her are one in a million. After a pause for reflection, Lloyd’s emergent toothy grin shows off his happy chipped tooth. He enthusiastically blurts out, “So you’re telling me there’s a chance!” Similar exclamations are heard from advocates of Darwinian evolution. “Darwinian evolution. So you’re telling me there’s a chance!” So again, we didn’t start the probability fire. Evolutionary models thrive on randomness described by probabilities.

The probability-of-the-gaps reasoning championed by supporters of Darwinian evolution is addressed in detail in Introduction to Evolutionary Informatics. We show that the probabilistic resources of the universe, and even of string theory’s hypothetical multiverse, are insufficient to explain the specified complexity surrounding us.

Besides, a posteriori probability is used all the time. The size of your last tweet can be measured in bits. Claude Shannon, whose classic 1948 paper16 introduced the term bit, based its definition on probability. Yet there sits your transmitted tweet with all of its a posteriori bits fully exposed. Another example is a posteriori Bayesian probability, commonly used, for example, in email spam filters. What is the probability that your latest email from a Nigerian prince, already received and sitting on your server, is spam? Bayesian probabilities are also a posteriori probabilities.
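As a toy illustration of such an a posteriori (posterior) probability, here is Bayes’ rule applied to a message that has already arrived. The prior and the word likelihoods are invented numbers, not taken from any real spam filter.

def posterior_spam(prior_spam, p_word_given_spam, p_word_given_ham):
    # Bayes' rule: P(spam | word seen) for an email already sitting on the server
    p_word = p_word_given_spam * prior_spam + p_word_given_ham * (1 - prior_spam)
    return p_word_given_spam * prior_spam / p_word

# Assumed numbers: 40% of mail is spam; "prince" appears in 5% of spam, 0.1% of ham
print(round(posterior_spam(0.40, 0.05, 0.001), 3))   # ~0.971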

So a hand-waving dismissal of a posteriori probabilities is ill-tutored. The application of probability in Introduction to Evolutionary Informatics is righteous and the analysis leads to the conclusion that there exists no model successfully describing undirected Darwinian evolution.

6. What about a biological anthropic principle? We’re here, so evolution must work.

Stephen Hawking has a simple explanation of the anthropic principle: “If the conditions in the universe were not suitable for life, we would not be asking why they are as they are.” Gabor Csanyi, who quotes from Hawking’s talk, says, “Hawking claims, the dimensionality of space and amount of matter in the universe is [a fortuitous] accident, which needs no further explanation.”17

“So you’re telling me there’s a chance!”

The question ignored by anthropic principle enthusiasts is whether or not an environment for even guided evolution could occur by chance. If a successful search requires equaling or exceeding some degree of active information, what is the chance of finding any search with as good or better performance? We call this a search-for-the-search. In Introduction to Evolutionary Informatics, we show that the search-for-the-search is exponentially more difficult than the search itself! So if you kick the can down the road, the can gets bigger.
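One way to build intuition for that claim: treat a “search” as a probability distribution over the space and ask how a randomly chosen search performs. In the tiny simulation below (the sizes and the threshold are my own made-up choices), the average randomly drawn search does no better than blind search, and searches even modestly better than blind are vanishingly rare, so finding a good search is itself a needle-in-a-haystack problem.

import numpy as np

rng = np.random.default_rng(0)
N = 100                    # size of the search space, with a single target at index 0
threshold = 10 / N         # call a search "good" if it is 10x better than blind search

# Draw 20,000 random "searches": probability distributions over the space
searches = rng.dirichlet(np.ones(N), size=20_000)
p_target = searches[:, 0]  # probability each random search assigns to the target

print("blind search:             ", 1 / N)
print("average random search:    ", round(float(p_target.mean()), 4))       # ~1/N
print("fraction of good searches:", float((p_target >= threshold).mean()))  # ~0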

Professor Sidney R. Coleman said after Hawking’s MIT talk, “Anything else is better [than the ‘Anthropic Principle’ to explain something].”18 We agree. For example, check out our search-for-the-search analysis in Introduction to Evolutionary Informatics.

7. What about the claim that “All information is physical”?

This is a question we have heard from physicists.

In physics, Landauer’s principle pertains to the lower theoretical limit of energy consumption of computation and leads to his statement “all information is physical.”
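For reference, Landauer’s bound says that erasing one bit must dissipate at least kT ln 2 of energy. A quick computation (standard Boltzmann constant, room temperature assumed) shows how small that floor is:

import math

k_B = 1.380649e-23   # Boltzmann constant in joules per kelvin
T = 300.0            # assumed room temperature in kelvin
print(f"{k_B * T * math.log(2):.3e} joules per erased bit")   # ~2.87e-21 J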

Saying “all computers are mass and energy” offers a similarly unhelpful description of computers. Like Landauer’s dictum, it suffers from overgeneralized vagueness and is at best incomplete.

Claude Shannon counters Landauer’s claim:

It seems to me that we all define “information” as we choose; and, depending upon what field we are working in, we will choose different definitions. My own model of information theory…was framed precisely to work with the problem of communication.19
Landauer is probably correct within the narrow confines of his physics foxhole. Outside the foxhole is Shannon information, which is built on the a priori probabilities of events that have not yet happened and are therefore not yet physical.

We spend an entire chapter in Introduction to Evolutionary Informatics defining information so there is no confusion when the concept is applied. And we conclude there exists no model successfully describing undirected Darwinian evolution.

8. Information theory cannot measure meaning.

Poppycock.

A hammer, like information theory, is a tool. A hammer can be used to do more than pound nails. And information theory can do more than assign a generic bit count to an object.

The most visible models of information theory are Shannon information theory and KCS information.20 The consequences of Shannon’s theory for communication are resident in your cell phone, where codes whose existence Shannon predicted now allow nearly maximal use of the available bandwidth. KCS stands for Kolmogorov-Chaitin-Solomonoff information theory, named after the three men who independently founded the field. KCS information theory deals with the information content of structures. (Gregory Chaitin, by the way, gives a nice nod of the head to Introduction to Evolutionary Informatics.21)

The manner in which information theory can be used to measure meaning is addressed in Introduction to Evolutionary Informatics. We explain, for example, why a picture of Mount Rushmore containing images of four United States presidents has more meaning to you than a picture of Mount Fuji even though both pictures might require the same number of bits when stored on your hard drive. The degree of meaning can be measured using a metric called algorithmic specified complexity.
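As a rough illustration only (the book’s own treatment is more careful), algorithmic specified complexity contrasts how improbable an object is under a chance hypothesis with how compactly it can be described given context. A common back-of-the-envelope proxy substitutes a compressor’s output length for the description length, which makes the result a conservative estimate:

import math, zlib

def asc_bits_estimate(data: bytes, p_chance: float) -> float:
    # ASC ~ -log2 P(data) minus the description length of the data;
    # zlib's compressed size (in bits) stands in as an upper bound on that
    # description length, so the returned value is a lower bound on ASC.
    surprisal_bits = -math.log2(p_chance)
    description_bits = 8 * len(zlib.compress(data))
    return surprisal_bits - description_bits

# Made-up example: a highly patterned 100-letter string, judged against the
# chance hypothesis of 100 uniformly random letters, probability (1/26)**100
print(round(asc_bits_estimate(b"ABAB" * 25, (1 / 26) ** 100)), "bits (roughly)")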

Rather than summarize algorithmic specified complexity as derived and applied in Introduction to Evolutionary Informatics, we refer instead to a quote from a paper by one of the world’s leading experts in algorithmic information theory, Paul Vitányi. The quote is from a paper he wrote over 15 years ago, titled “Meaningful Information.”22

One can divide…[KCS] information into two parts: the information accounting for the useful regularity [meaningful information] present in the object and the information accounting for the remaining accidental [meaningless] information.23
In Introduction to Evolutionary Informatics, we use information theory to measure meaningful information and show there exists no model successfully describing undirected Darwinian evolution.

9. To achieve specified complexity in nature, the fitness landscape in evolution keeps changing. So, contrary to your claim, Basener’s ceiling doesn’t apply in Darwinian evolution.

In search, complexity can’t be achieved beyond the expertise of the guiding oracle. As noted, we refer to this limit as Basener’s ceiling.24 However, if the fitness landscape continues to change, it is argued, the evolved entity can achieve greater and greater specified complexity and ultimately perform arbitrarily great acts, like writing insightful scholarly books disproving Darwinian evolution.

We analyze exactly this case in Introduction to Evolutionary Informatics and dub the overall search structure stair-step active information. Not only is guidance required on each stair, but the next step must be carefully chosen to guide the process to the higher fitness landscape and therefore ever-increasing complexity. Most of the possible next choices are deleterious and lead to search deterioration and even extinction. This also applies in the limit when the stairs become teeny and the staircase is better described as a ramp. As Aristotle said, “It is possible to fail in many ways…while to succeed is possible only in one way.”

Here’s an anecdotal illustration of the careful design needed in the stair-step model. If a meteor hits the Yucatan Peninsula, wipes out all the dinosaurs, and allows mammals to begin dominating the earth, then the meteor’s impact must be a Goldilocks event. If too strong, all life on earth would be zapped. If too weak, velociraptors would still be munching on stegosaurus eggs.

Such fine-tuning applies to any fortuitous shift in fitness landscapes, and it increases, not decreases, the difficulty of evolving ever-increasing specified complexity. It supports the case that there exists no model successfully describing undirected Darwinian evolution.

10. Your research is guided by your ideology and can’t be trusted.

There’s that old derailing genetic fallacy again.

But yes! Of course our research is impacted by our ideology! We are proud to be counted among Christians such as the Reverend Thomas Bayes, Isaac Newton, George Washington Carver, Michael Faraday, and the greatest of all mathematicians, Leonhard Euler.25 The truth of their contributions stands apart from their ideology. But so does the work of the atheist Pierre-Simon Laplace. Truth trumps ideology. And allowing the possibility of intelligent design, embraced by enlightened theists and agnostics alike, broadens one’s investigative horizons.

Alan Turing, the brilliant father of computer science and breaker of the Nazis’ Enigma code, offers a great example of the ultimate failure of ideology to trump truth. As a young man, Turing lost a close friend to bovine tuberculosis. Devastated by the death, Turing turned from God and became an atheist. Part of his motivation in developing computer science was to prove that man is a machine and consequently that there is no need for a god. But Turing’s landmark work has allowed researchers, most notably Roger Penrose,26 to make the case that certain of man’s attributes, including creativity and understanding, are beyond the capability of the computer. Turing’s ideological motivation was thus ultimately trashed by truth.

The relationship between human and computer capabilities is discussed in more depth in Introduction to Evolutionary Informatics.

Takeaways

In Introduction to Evolutionary Informatics, Chaitin’s challenge has been met in the negative and there exists no model successfully describing undirected Darwinian evolution. According to our current understanding, there never will be. But science should never say never. As Stephen Hawking notes, nothing in science is ever actually proved. We simply accumulate evidence.27

So if anyone generates a model demonstrating Darwinian evolution without guidance that ends in an object with significant specified complexity, let us know. No guiding, hand-waving, extrapolation of adaptations, appeals to speculative physics, or anecdotal proofs allowed.

Until then, I guess you can call us free-thinking skeptics.

Thanks for listening.

Robert J. Marks II, PhD, is Distinguished Professor of Electrical and Computer Engineering at Baylor University.

Notes:

(1) Chaitin, Gregory. Proving Darwin: Making Biology Mathematical. Vintage, 2012.

(2) Marks II, Robert J., William A. Dembski, and Winston Ewert. Introduction to Evolutionary Informatics. World Scientific, 2017.

(3) Ecclesiastes 12:12b.

(4) Lenski, R.E., Ofria, C., Pennock, R.T. and Adami, C., 2003. “The evolutionary origin of complex features.” Nature, 423(6936), pp. 139-144.

(5) ID the Future podcast with Winston Ewert. “Why Digital Cambrian Explosions Fizzle…Or Fake It,” June 7, 2017.

(6) IEEE, the Institute of Electrical and Electronics Engineers, is the largest professional society in the world, with over 400,000 members.

(7) R.J. Marks II, “The Journal Citation Report: Testifying for Neural Networks,” IEEE Transactions on Neural Networks, vol. 7, no. 4, July 1996, p. 801.

(8) Fogel, David B., and Lawrence J. Fogel. “Guest editorial on evolutionary computation,” IEEE Transactions on Neural Networks 5, no. 1 (1994): 1-14.

(9) R.J. Marks II, “Old Neural Network Editors Don’t Die, They Just Prune Their Hidden Nodes,” IEEE Transactions on Neural Networks, vol. 8, no. 6 (November, 1997), p. 1221.

(10) Russell D. Reed and Robert J. Marks II, “An Evolutionary Algorithm for Function Inversion and Boundary Marking,” Proceedings of the IEEE International Conference on Evolutionary Computation, pp. 794-797, November 26-30, 1995.

(11) C.A. Jensen, M.A. El-Sharkawi and R.J. Marks II, “Power Security Boundary Enhancement Using Evolutionary-Based Query Learning,” Engineering Intelligent Systems, vol. 7, no. 9, pp. 215-218 (December 1999).

(12) Jon Roach, Winston Ewert, Robert J. Marks II and Benjamin B. Thompson, “Unexpected Emergent Behaviors from Elementary Swarms,” Proceedings of the 2013 IEEE 45th Southeastern Symposium on Systems Theory (SSST), Baylor University, March 11, 2013, pp. 41-50.

(13) Winston Ewert, Robert J. Marks II, Benjamin B. Thompson, Albert Yu, “Evolutionary Inversion of Swarm Emergence Using Disjunctive Combs Control,” IEEE Transactions on Systems, Man and Cybernetics: Systems, v. 43, #5, September 2013, pp. 1063-1076.

Albert R. Yu, Benjamin B. Thompson, and Robert J. Marks II, “Swarm Behavioral Inversion for Undirected Underwater Search,” International Journal of Swarm Intelligence and Evolutionary Computation, vol. 2 (2013). Albert R. Yu, Benjamin B. Thompson, and Robert J. Marks II, “Competitive Evolution of Tactical Multiswarm Dynamics,” IEEE Transactions on Systems, Man and Cybernetics: Systems, vol. 43, no. 3, pp. 563- 569 (May 2013).

(14) NeoSwarm.com.

(15) Review of Introduction to Evolutionary Informatics, Perspectives on Science and Christian Faith, vol. 69 no. 2, June 2017, pp. 104-108.

(16) Claude E. Shannon, “A mathematical theory of communication,” Bell System Technical Journal 27 (1948): 379-423 and 623-656.

(17) Gabor Csanyi, “Stephen Hawking Lectures on Controversial Theory,” The Tech, vol. 119, issue 48, Friday, October 8, 1999.

(18) The bracketed insertion in the quote is Csanyi’s, not ours.

(19) Quoted in P. Mirowski, Machine Dreams: Economics Becomes a Cyborg Science (New York: Cambridge University Press, 2002), 170.

(20) Cover, Thomas M., and Joy A. Thomas. Elements of Information Theory. John Wiley & Sons, 2012.

(21) Review for Introduction to Evolutionary Informatics.

(22) Paul Vitányi, “Meaningful Information,” in International Symposium on Algorithms and Computation: 13th International Symposium, ISAAC 2002, Vancouver, BC, Canada, November 21-23, 2002.

(23) Unlike our approach, Vitányi’s use of the so-called Kolmogorov sufficient statistic here does not take context into account.

(24) Basener, W.F., 2013. “Limits of Chaos and Progress in Evolutionary Dynamics.” Biological Information — New Perspectives. World Scientific, Singapore, pp. 87-104.

(25) Christian Calculus.

(26) See, e.g., Penrose, Roger. Shadows of the Mind. Oxford University Press, 1994.

(27) Hawking, Stephen. A Brief History of Time (1988). AppLife, 2014.

Scientism v. Classical liberalism?

Sure, “Teach the Controversy,” Says an Evolutionist – But You Know What’s Coming Next
Sarah Chaffee


PLOS, the “Public Library of Science,” is the publisher of a number of high-profile open-access science journals – PLOS ONE, PLOS Biology, PLOS Medicine, and others. They also publish a range of blogs, including Sci-Ed, which deals with issues relating to science and education. A recent headline there caught my eye: “Go ahead and ‘teach the controversy:’ it is the best way to defend science.”
That’s provocative. But here’s the subtitle: “as long as teachers understand the science and its historical context.” Well, who could disagree?

But you can probably guess what’s coming. The author is Mike Klymkowsky, a University of Colorado Boulder Professor of Molecular, Cellular, and Developmental Biology. Klymkowsky, as expected, says teaching about the evolution controversy is fine as long as you show how absurd that “controversy” really is. In other words, you can expose students to diverse views on Darwinian theory, so long as the takeaway for them is the orthodox evolutionary one.

But I wanted to point out a comment left at the end of the article by someone identified as CWGross, who notes:

There is no good reason to believe the naturalist-materialist, Scientistic proposition that science has epistemological primacy (in fact, it is a self-contradictory axiom), so we automatically fail if we refuse to discuss the limitations of science and its inadequacies in questions of morals, politics, aesthetics, relationships, spirituality, etc. Without doing so, we implicitly teach science as an authoritarian system which WILL be rejected, as you note, when it conflicts with lived moral, political, aesthetic, relational, spiritual, etc. experience. You do it yourself, in your confession of ideological dogmatism: “Yet, as a person who firmly believes in the French motto of liberté, égalité, fraternité, laïcité, I feel fairly certain that no science-based scenario on the origin and evolution of the universe or life, or the implications of sexual dimorphism or racial differences, etc, can challenge the importance of our duty to treat others with respect, to defend their freedoms, and to insure their equality before the law.” You express disapproval over philosophies of theistic evolution while at the same time refusing to entertain the implications of a purely materialistic science for your own liberalism.
Klymkowsky wants to marshal a purely naturalistic science in the classroom. CWGross points out the conflict with Klymkowsky’s own liberal ideology. Only an “authoritarian” approach can bridge the gap, expecting students to embrace both rigidly materialist science and liberalism without acknowledging the contradiction.

Our preference is for a pedagogy that is much more modest, and more authentically liberal. Klymkowsky worries that students are “vulnerable to intelligent-design creationist arguments centered around probabilities.” ID isn’t creationism, and there’s a lot more to it than “probabilities,” but never mind. We oppose pushing intelligent design into public school classrooms.

Instead, we want students and teachers to be able to explore scientific controversies over mechanisms of evolution and the origin of life discussed in mainstream, peer-reviewed scientific publications. Let them study these questions for themselves, and arrive at their own conclusions. Let them struggle, too, with the philosophical implications, but, of course, not in the science classroom.

On the evolution controversy, Klymkowsky would likely benefit from more study himself. His understanding of what we argue about appears to be limited. He writes, “For example, a common attack against evolutionary mechanisms relies on a failure to grasp the power of variation, arising from stochastic processes (mutation), coupled to the power of natural, social, and sexual variation.”


For a start, he should review the discussions from November’s Royal Society meeting. In fact, some of the ideas presented there would be fascinating to share with students. In the world of professional science, at the highest levels, the foundations of evolutionary theory are up for debate. That fact should not be concealed from young people.

Molecular biology's real-life cliffhangers v. Darwin.

Hat Grab: Cells Take Extreme Measures to Rescue Their DNA
Evolution News @DiscoveryCSC

There’s a famous scene in an Indiana Jones movie where the hero barely makes it under a gate descending on him in an underground tunnel. He rolls under the gate in the nick of time, but his signature fedora comes off. With fractions of a second to spare, he reaches his arm under the gate and snatches the hat.

Something like that happens in the cell. Sometimes, when chromosomes are being winched apart by the spindle into the daughter cells, fragments of DNA break off and become entangled in the spindle’s microtubules. Unless they are rescued and make it into the nuclei of the new cells, disaster can result: the affected cells become genetically unstable, leading to cancer or cell death. Time is of the essence! The cell is following a precisely choreographed screenplay, where thousands of actors must play their roles perfectly at the right time and place. Like the gate descending on Indiana Jones, the cleavage furrow is rapidly constricting the midpoint of the spindle, with those fragments stuck there. Can the cell rescue them in time?

This crisis happens daily in life. Like the city folk above ground, oblivious to Indiana Jones and his frantic brush with death under the streets, we hear and see nothing of the near-catastrophes happening inside our cells. But if it weren’t for the cell’s fast-acting hand, all would be lost. The dramatic true story is told in fascinating news from the University of California, Santa Cruz, under the title, “’Hail Mary’ mechanism can rescue cells with severely damaged chromosomes.” The authors liken what happens to a quarterback’s all-or-nothing long pass in the last seconds of a critical football game. The situation calls for desperate plays.

William Sullivan calls this a “worst case scenario” for the cell. The potential consequences include cell death or a cancerous cell growing out of control. But Sullivan, a professor of molecular, cell, and developmental biology at UC Santa Cruz, has found that the cell still has one more trick up its sleeve to rescue the broken chromosome.

The latest findings from Sullivan’s lab, published in the June 5 issue of Journal of Cell Biology, reveal new aspects of a remarkable mechanism that carries broken chromosomes through the process of cell division so that they can be repaired and function normally in the daughter cells. [Emphasis added.]
Sullivan’s research team studied a strain of fruit flies that they mutated to increase the incidence of DNA fragmentation. By inserting fluorescent tags, they were able to witness “this amazing mechanism, like a Hail Mary pass with time running out.” What they saw was not unlike Indiana Jones’s arm reaching for his hat.

The mechanism involves the creation of a DNA tether which acts as a lifeline to keep the broken fragment connected to the chromosome….

Sullivan’s research has shown that chromosome fragments don’t segregate with the rest of the chromosomes, but get pulled in later just before the newly forming nuclear membrane closes. “The DNA tether seems to keep the nuclear envelope from closing, and then the chromosome fragment just glides right in at the last moment,” Sullivan said.
It’s a good thing this tether works most of the time. When it doesn’t, the action-adventure movie turns into a horror flick.

If this mechanism fails, however, and the chromosome fragment gets left outside the nucleus, the consequences are dire. The fragment forms a “micronucleus” with its own membrane and becomes prone to extensive rearrangements of its genetic material, which can then be reincorporated into chromosomes during the next cell division. Micronuclei and genetic rearrangements are commonly seen in cancer cells.
Think about what is required for this trick to work. Genes have to encode the machinery that builds the tether, and enzymes have to know where to attach it. This means that all the information to pull off this whole stunt has to be written into the script before the director calls, “Action!” Could evolution write a script like that? In the neo-Darwinist version, cells that did not have the tether would die or grow cancerous. The cost of selection would be enormous. All the players and their props would have to learn their roles by chance, figuring out by sheer dumb luck where to be and what to do before a cell could succeed at this stunt and survive. We don’t think Sullivan or his funding agencies are relying on chance to pull that off.

“We want to understand the mechanism that keeps that from happening,” Sullivan said. “We are currently identifying the genes responsible for generating the DNA tether, which could be promising novel targets for the next generation of cancer therapies.”

Sullivan has just received a new four-year, $1.5 million grant from the National Institute of General Medical Sciences to continue this research.
The “Hail Mary pass” is just one of a whole catalog of strategies the cell can draw on to protect its genome. Here’s another strategy announced at Rice University, where researchers determined that “Biology’s need for speed tolerates a few mistakes.”

Biology must be in a hurry. In balancing speed and accuracy to duplicate DNA, produce proteins and carry out other processes, evolution has apparently determined that speed is of higher priority, according to Rice University researchers.

Rice scientists are challenging assumptions that perfectly accurate transcription and translation are critical to the success of biological systems. It turns out a few mistakes here and there aren’t critical as long as the great majority of the biopolymers produced are correct.
Although the researchers are evolutionists, we can see that what they really found is optimization at work (a form of intelligent design in action).

A new paper shows how nature has optimized two processes, DNA replication and protein translation, that are fundamental to life. By simultaneously analyzing the balance between speed and accuracy, the Rice team determined that naturally selected reaction rates optimize for speed “as long as the error level is tolerable.”
When you think about what a cell has to do before it divides, there’s not much room for evolution in the mistakes. Millions of base pairs must be duplicated in a time crunch, while the molecular machinery is in operation. It’s like duplicating a factory while the machinery is running! A smart manager will recognize that the cost of being too precise is not worth the delay if the results are adequate to meet the requirements. They use an analogy we are familiar with:

Kinetic proofreading is the biochemical process that allows enzymes, such as those responsible for protein and DNA production, to achieve better accuracy between chemically similar substrates. Sequences are compared to templates at multiple steps and are either approved or discarded, but each step requires time and energy resources and as a result various tradeoffs occur.

“Additional checking processes slow down the system and consume extra energy,” Banerjee said. “Think of an airport security system that checks passengers. Higher security (accuracy) means a need for more personnel (energy), with longer waiting times for passengers (less speed).”
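To make the speed-versus-accuracy trade-off concrete, here is a deliberately simplified sketch in the spirit of Hopfield’s classic kinetic proofreading scheme: each added checking stage multiplies the error ratio by the same discrimination factor but adds time (and energy). The factor of 0.01 and the unit stage time are illustrative assumptions, not measured rates.

def proofreading_tradeoff(n_stages, discrimination=0.01, stage_time=1.0):
    # With n checking stages the wrong substrate survives with probability
    # ~ discrimination**(n + 1), while the time cost grows with the stage count.
    return discrimination ** (n_stages + 1), (n_stages + 1) * stage_time

for n in range(4):
    err, cost = proofreading_tradeoff(n)
    print(f"{n} proofreading stages: error ~ {err:.0e}, relative time cost {cost:.0f}")

Each extra stage buys roughly a hundredfold drop in errors at the price of more time and energy, which is exactly the airport-security trade-off Banerjee describes.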
Despite the one evolution reference, these researchers smell design:

“That makes just as much sense for biology as it does for engineering,” Igoshin said. “Once you’re accurate enough, you stop optimizing.”
We see a similar optimization strategy in news from Brandeis University about double-strand break (DSB) repair. When one strand of DNA breaks, that’s bad. When both strands break, that’s really bad. Specialized enzymes can inspect and repair these DSBs, but they also have to sacrifice accuracy for speed. The enzymes look for similar sequences to use as a template for the “bandage” that will rejoin the strands.

But how perfect does the match have to be? Ranjith Anand, the first author on the Nature paper, said this was one of the central questions that the Haber lab wanted to answer.

They found that repair was still possible when every sixth base in a stretch of about 100 bases was different. Previous studies of RAD51 in the test tube had suggested that the protein had a much more stringent requirement for matching.

That one of the six base pairs could be a mismatch surprised the scientists. The process “is permissive of mismatches during the repairing,” says Anand….
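As a toy version of that tolerance test (my own illustration, not the Haber lab’s assay), one could compare a broken end against a candidate repair template and accept the match only if no more than about one base in six differs:

def acceptable_template(broken_end: str, template: str, max_mismatch_rate=1/6) -> bool:
    # Accept the template if the mismatch fraction stays within roughly one in six
    assert len(broken_end) == len(template)
    mismatches = sum(a != b for a, b in zip(broken_end, template))
    return mismatches / len(broken_end) <= max_mismatch_rate

print(acceptable_template("ACGTACGTACGT", "ACGAACGTACGA"))   # True:  2 of 12 differ
print(acceptable_template("ACGTACGTACGT", "AGGAACGTAGGA"))   # False: 4 of 12 differ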
We begin to see a kind of molecular triage going on, as when battlefield medics use whatever is on hand to keep a soldier from dying. “Most damage gets accurately repaired, so the cell is unaffected,” the article says. For somatic cells, imperfect bandages will probably cause no significant harm. Darwinism would require that the mistakes (1) become incorporated into the germline, and (2) provide functional innovations that are positively selected. And thus a wolf became a whale, and a dinosaur took flight into the skies.

Sensible viewers of these action adventures undoubtedly sense good directing, acting, and optimization behind them. Clifford Tabin expressed his amazement about life’s development in Phys.org back in 2013.

When I teach medical students, they’re more interested in the rare people who are born with birth defects. They want to understand embryology so they understand how things go awry, but I’m more interested in the fact that for everyone sitting in my classroom—all 200 of those medical students and dental students — it went right! And every one of them has a heart on the left side and every one of them has two kidneys, and how the heck do you do that?
You are not just a ball of cells, he says; you are the result of mechanical principles that guide the growth of structures through many stages, subject to physical forces, that usually work. And that is indeed astonishing.