Search This Blog

Saturday 13 August 2016

Solution to Cambrian mystery: Just add yeast.

Cambrian Explosion Explained by Yeast Clumping Together

David Klinghoffer


We've made sort of a hobby of collating and dissecting theories of how an explosion of complex novel life forms, the Cambrian explosion, can be explained without reference to the obvious explanation, intelligent design.

It's almost too much to keep up with:

And more.

Now comes the yeast theory. From New Scientist:

Just a few generations after evolving multicellularity, lab yeasts have already settled into at least two distinct lifestyles.

The discovery suggests that organisms can swiftly fill new niches opened up by evolutionary innovations, just as the first multicellular animals appear to have done on Earth, hundreds of millions of years ago.

In the lab, yeast cells clumped together, forming larger and smaller "snowflakes."

In short, large and small yeast morphs specialise in different settling strategies, so both can coexist.

These two distinct ecological strategies appeared almost immediately once the multicellular yeasts themselves evolved, notes Travisano.

This provides experimental proof that when evolution makes a great leap forward -- such as the origin of multicellularity -- organisms can diversify rapidly to take advantage of the change.

Many years ago, palaeontologist Stephen Jay Gould suggested that a similar sudden ecological diversification may have led to the Cambrian Explosion in which most animal body forms arose in the fossil record within a few tens of millions of years.

"Possibly what we see here is the first step of what Gould's talking about -- the opening up of diversity due to a key innovation," says Travisano.

Yeast cells clump together. Ergo trilobites.

"Possibly," given a "few tens of millions of years," this could represent a "first step" toward massive diversification.

Ann Gauger has written here about not entirely dissimilar speculations about the development of multicellularity, with Volvox rather than yeast as the illustration. "Saying that something might have happened," she observes, "is not the same as showing that it actually could happen."

"A Simple Transition to Multicellularity -- Not!"

A simple transition from clumping yeast to a menagerie of beasts leaves even more "white space," as Dr. Gauger politely puts it, to fill in with needed details.

"The White Space in Evolutionary Thinking"

The white space in the yeast theory is blinding. It's a blizzard of white, obscuring all vision.

The truth is that evolutionists have no idea what produced the Cambrian explosion. Yet, knowing this is an almost immeasurably vast defect in the armature of their theory, they keep throwing speculations at it in the hope that something will stick, or clump.

The solution, though, is right before their eyes, or anyway, your eyes:

Have Darwinian theories on human evolution run out of time?

The Origin of Man and the "Waiting Time" Problem
John Sanford 

Editor's note: We are pleased to welcome a contribution from Dr. Sanford, who is Courtesy Associate Professor, School of Integrative Plant Science, Cornell University.

My colleagues and I recently published a paper in Theoretical Biology and Medical Modeling, "The Waiting Time Problem in a Model Hominin Population." It is one of the journal's "highly accessed" articles. A pre-human hominin population of roughly 10,000 individuals is thought to have evolved into modern man, during a period of less than six million years. This would have required the establishment of a great deal of new biological information. That means, minimally, millions of specific beneficial mutations, and a large number of specific beneficial sets of mutations, selectively fixed in this very short period of time. We show that there is simply not enough time for this type of evolution to have occurred in the population from which we supposedly arose.

Historically, Darwin-defenders have argued that time is on their side. They have claimed that given enough time, any evolutionary scenario is feasible. They have consistently argued that given millions of years, very large amounts of new biologically meaningful information can arise by the Darwinian process of mutation/selection. However, careful analysis of what is required to establish even a single genetic "word" (a short functional string of genetic letters) within a hominin genome shows just the opposite. Even given tens of millions of years, there is not enough time to generate the genetic equivalent of the simplest "word" (two or more nucleotides). Even in a hundred billion years, much longer than the age of the universe, there is not enough time to establish the genetic equivalent of a very simple "sentence" (ten or more nucleotides). This problem is so fundamental that it justifies a complete re-assessment of the basic Darwinian mechanism.

In my book Genetic Entropy, I have previously outlined the waiting time problem (for example, see the 2014 edition, Chapter 9, pp. 133-136). My calculations there, and calculations published by others (Behe, Snoke, Axe, Gauger et al.), all demonstrate the same basic problem. (For a complete literature review, see the link to our new paper given above.) What this new paper provides is an independent validation, by a totally different method, of the previous works done by Behe, others, and myself.

In our paper we examine the waiting time problem in a new way, employing state-of-the-art, comprehensive numerical simulations to empirically document the time required to create a specific string of mutations. This method is an alternative to employing mathematical approximations, and is preferable for various reasons outlined in the paper. Our empirical experiments realistically enacted the establishment of short genetic sequences within biologically realistic virtual hominin populations. These experiments demonstrate the limits of the classic neo-Darwinian mechanism in a clearer and more compelling way. Of special significance, we show that as genetic "word size" increases linearly, waiting time increases exponentially (see Table 2 in the new publication).

The waiting time problem has four basic elements. First, in a small population it takes a very long time for any specific nucleotide (genetic letter) to mutate into a specific alternate nucleotide. Second, it takes vastly more time for a given string of nucleotides to mutate into a specific alternative string of nucleotides (as is required to create a new beneficial genetic "word"). Third, any specific new word that arises is quickly lost due to genetic drift, and so must arise many times before it "catches hold" within the population. And fourth, even when the new word catches hold, it takes additional time for natural selection to amplify the new beneficial mutation to the point of fixation within the population.
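The first and third of these elements can be illustrated with a back-of-envelope calculation for a single specific nucleotide substitution in a population of 10,000. The mutation rate and 20-year generation time below are commonly used textbook values, not figures taken from the paper, and the neutral-drift result sketched here (substitution rate equals mutation rate) is only a rough stand-in for the selection scenarios the paper actually simulates:

```python
# Back-of-envelope sketch: waiting time for one specific nucleotide
# substitution in a diploid population of N = 10,000, neutral case.
# All parameter values are illustrative textbook figures.

N = 10_000                  # hominin population size, as assumed in the text
mu_site = 1e-8              # mutations per site per generation (textbook value)
mu_specific = mu_site / 3   # rate of one particular base change (3 alternatives)
gen_years = 20              # assumed years per hominin generation

# Expected generations until the specific mutant first appears somewhere
# among the 2N gene copies in the population:
t_first = 1 / (2 * N * mu_specific)

# A new neutral mutant fixes with probability 1/(2N), so on average it must
# arise ~2N separate times before one copy "catches hold" and drifts to
# fixation -- giving the classic neutral result t_fix = 1 / mu_specific:
t_fix = t_first * (2 * N)

print(f"first appearance: ~{t_first:,.0f} generations")
print(f"fixation of the specific change: ~{t_fix:,.0f} generations "
      f"(~{t_fix * gen_years / 1e9:.1f} billion years)")
```

Even this simplest single-letter, no-selection case already lands in the billions of years; the paper's simulations address the harder multi-letter, selection-and-drift cases.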

Our paper shows that the waiting time problem cannot honestly be ignored. Even given best-case scenarios, using parameter settings that are grossly overgenerous (for example, rewarding a given string by increasing total fitness 10 percent), waiting times are consistently prohibitive. This is even for the shortest possible words. Establishment of just a two-letter word (two specific mutations within a hominin population of ten thousand) requires at least 84 million years. A three-letter word requires at least 376 million years. A six-letter word requires over 4 billion years. An eight-letter word requires over 18 billion years (again, see Table 2 in the paper). The waiting time problem is so profound that even given the most generous feasible timeframes, evolution fails. The mutation/selection process completely fails to reproducibly and systematically create meaningful strings of genetic letters in a pre-human population.

Other authors have published on the waiting time problem, and they have consistently acknowledged its reality, but some have then tried to minimize it. In those cases, the authors first show that the waiting time problem is serious, but then go on to invoke very special, atypical conditions, seeking to reduce waiting times as much as possible. This is evidently in the hope of saving neo-Darwinian theory. But when these "special conditions" are carefully examined, in every case they are far-fetched and ad hoc.

When the dismissive authors use the same formulation of the problem as we used in our paper, they see the same prohibitive waiting times (see our paper's discussion). For example Durrett and Schmidt (2007) model a human population of 10,000, just as we do. They show that for a specific set of eight required mutations (which must arise in the context of a specific genomic location), the waiting time is 650 million years. But most readers will miss the fact that this is just their estimated time to the "first instance" of the string. Elsewhere in their paper they acknowledge that the establishment and fixation of the specific set of mutations would take 100 times longer than the first instance (when they assume a 1 percent fitness reward). This would be 65 billion years! Using the same parameter settings (and applying a 1 percent fitness reward) our own experiments give waiting times of the same magnitude. Likewise, when Lynch and Abegg (2010) specify a population of 10,000, and when two specific mutations are required, they get waiting times exceeding 10 million generations (see their Figure 1). Assuming twenty years per generation for a human population, this is more than 200 million years (see our paper's discussion).
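The generation-time arithmetic quoted above is easy to verify directly. The input figures below are simply the ones cited in the passage (650 million years to first instance, a hundredfold factor to fixation, and 10 million generations at an assumed 20 years each):

```python
# Arithmetic check of the waiting times cited from Durrett & Schmidt (2007)
# and Lynch & Abegg (2010). Input figures are those quoted in the text.

# Durrett & Schmidt: ~650 My to the *first instance* of the 8-mutation set,
# and ~100x longer for establishment and fixation (1 percent fitness reward):
first_instance_years = 650e6
fixation_years = first_instance_years * 100   # 65 billion years

# Lynch & Abegg: >10 million generations for two required mutations,
# at an assumed 20 years per hominin generation:
generations = 10e6
lynch_years = generations * 20                # 200 million years

print(f"Durrett & Schmidt fixation: {fixation_years / 1e9:.0f} billion years")
print(f"Lynch & Abegg: {lynch_years / 1e6:.0f} million years")
```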

What will the primary counterargument be to the waiting time problem? The primary objection is, and will continue to be, as follows. Within a small population, a given string of letters cannot arise in a specific location without a prohibitive waiting time, yet somewhere else in the genome good things might still be happening. For example, if one is waiting for the sequence ATCG to be fixed in a specific genomic location, it will require very deep time, but it will take no time at all if one is waiting for ATCG to arise anywhere in the genome. Indeed, many copies of ATCG are already in the genome. This argument has three problems.

First, it ignores context. The sequence ATCG by itself is not useful information. It can never be beneficial (and hence selectable), except in a very specific context. Consider randomly changing one word in an encyclopedia -- will it consistently improve the text, regardless of where the change is made? All information is context-dependent. For example, if you have an executable computer program, inserting a certain random string of binary digits could conceivably improve the program's information content. But in such a very unlikely case, it would only be beneficial within an extremely specific context (location). When inserted out of context, the same string would almost certainly be deleterious.

Second, when we broaden our view to include the whole genome, we have to consider the problem of net loss of information, due to a multitude of nearly neutral deleterious mutations that are happening throughout the genome. Random mutation will cause ubiquitous genetic damage (especially in deep time), which will greatly overshadow the few rare strings that might arise in just the right context and might be sufficiently beneficial to be selectable.

Third, invoking "good things that might be happening in other parts of the genome" is essentially sleight of hand. Other potentially beneficial sets of mutations in other parts of the genome will each have their own waiting time problem. This is not a reasonable explanation for the origin of the massive amount of integrated biological information that is required to change an ape into a man (i.e., millions of complementary nucleotide substitutions established and fixed within the source hominin genome, in very little time).


Given that higher genomes must continuously accumulate deleterious mutations (as I show in Genetic Entropy), and given that beneficial mutations are very rare (as shown by the famous Lenski LTEE project, and also as shown in Genetic Entropy), and given that evolution cannot create meaningful genetic words (even given deep time), it seems that neo-Darwinian theory is coming undone on every level.

Darwinism vs. the real world XXXII

Can You Hear Me Now? Good, Then Thank Your Irreducibly Complex Ears.
Howard Glicksman 

Editor's note: Physicians have a special place among the thinkers who have elaborated the argument for intelligent design. Perhaps that's because, more than evolutionary biologists, they are familiar with the challenges of maintaining a functioning complex system, the human body. With that in mind, Evolution News is delighted to offer this series, "The Designed Body." For the complete series, see here. Dr. Glicksman practices palliative medicine for a hospice organization.

A thermometer measures temperature and a barometer measures air pressure. But how do they do it? Each device is essentially a sensory transducer with a mechanism that enables it to sense a physical phenomenon and convert it into useful information. The devices the body uses to detect physical phenomena so it knows what is going on outside and inside of it are sensory transducers as well. Hearing is the sensation we experience when vibrating molecules within a medium, typically air (but sometimes water), form mechanical waves within a specific wavelength range and enter our ears.

Common sense tells us that without this special sense our earliest ancestors could never have survived. Evolutionary biologists claim that similar auditory mechanisms in other life forms prove that it was easy for chance and the laws of nature alone to invent hearing. But as in the development of human technologies, experience teaches that intelligent design is a more plausible explanation. Darwinists gloss over the need for all of the ear's parts to be present for it to hear well enough for human survival. They also fail to take into account how our brain converts what it receives into what we experience as hearing.

In truth, hearing is a mystery that nobody, not even evolutionary biologists, understands. Nobody really understands how we can hear, so nobody should claim to understand how the ear and hearing came into being. Yet that doesn't stop Darwinists from telling us otherwise. Let's look at what makes up the ear, how it works, and what the brain receives from it which it then converts into the sensation we call hearing.

Sound waves are oscillations, the back and forth movement of molecules within a medium, such as air. These vibrations are transmitted to adjacent molecules and spread out in all directions. Sound is not due to the linear movement of air -- that is called wind. Furthermore, a vacuum cannot transmit sound, since it contains no molecules to vibrate. The physical nature of sound waves is that the air particles alternate between being packed together in areas of high concentration, called compressions, and spread apart in areas of low concentration, called rarefactions. These compressions and rarefactions of air molecules form longitudinal pressure waves which, depending on the type of sound and the energy used to create them, have amplitude, wavelength, and frequency. Sound waves travel at about 330 m/sec, while light travels at 300,000 km/sec, which means that light is roughly a million times faster than sound.
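The speed comparison checks out as rough arithmetic; the exact ratio with these figures is closer to 900,000, which rounds to "about a million":

```python
# Ratio of the speed of light to the speed of sound, using the
# approximate figures given in the text.
v_sound = 330             # m/s, speed of sound in air (approximate)
v_light = 300_000 * 1000  # 300,000 km/s converted to m/s

ratio = v_light / v_sound
print(f"light is ~{ratio:,.0f} times faster than sound")
```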

The human ear is a very complex sensory organ in which all of its parts work together to produce and transmit mechanical waves of oscillating molecules to its cochlea. Although it is in the cochlea where the nerve impulses for hearing begin, the other parts of the ear play important roles that support cochlear function. The ear can be divided into three regions: the outer (external) ear, the middle ear, and the inner (internal) ear.

The outer ear consists of the pinna (ear flap), the ear canal, and the eardrum (tympanic membrane). The pinna acts like a satellite dish, collecting sound waves and funneling them down the ear canal to the eardrum. The pinna is made of flexible cartilage and is important for determining the location of different sounds. The ear canal produces wax which provides lubrication while at the same time protecting the eardrum from dust, dirt, and invading microbes and insects. The cells that line the ear canal form near the eardrum and naturally migrate outward toward the entrance of the ear canal, taking with them the overlying ear wax, and are shed from the ear. This provides a natural mechanism of wax removal. Sound waves enter through an opening in the skull called the external auditory meatus. They naturally move down the ear canal and strike the eardrum. The eardrum is a very thin cone-shaped membrane which responds to sound waves by vibrating to a degree that is determined by their amplitude, wavelength, and frequency. It represents the end of the outer ear and the beginning of the middle ear.

The middle ear is an enclosed air-filled chamber in which the air pressure on either side of the eardrum must be equal to allow for adequate compliance, a measure of how easily the eardrum will move when stimulated by sound waves. The air in the middle ear tends to be absorbed by the surrounding tissue which, if not corrected, can lead to a vacuum effect, reduced eardrum compliance, and thus impaired hearing. The auditory tube in the middle ear connects with the back of the nose and pharynx. The muscular action of swallowing, yawning, or chewing causes the auditory tube to open, allowing ambient air to enter the middle ear, replacing what has been absorbed and equalizing the air pressure on both sides of the eardrum. Anyone who has flown in an airplane has experienced this vacuum effect as the plane descended and felt its resolution when a popping sound in the ear signified that air had entered the middle ear through the auditory tube.

The middle ear contains the three smallest bones in the body, the ossicles, which include the malleus (hammer), the incus (anvil), and the stapes (stirrup). The job of the ossicles is to efficiently transmit the vibrations of the eardrum into the inner ear which houses the cochlea. This is accomplished by the malleus being attached to the eardrum and the incus, the incus to the malleus and the stapes, and the stapes to the incus and the oval window of the cochlea.

The cochlea consists of three fluid-filled interrelated coiled chambers which spiral together for about two and a half turns, resembling a snail shell. Within the cochlea is the organ of Corti, the sensory receptor that converts the mechanical waves into nerve impulses. The vibrations, started by sound waves striking the eardrum and transmitted by the ossicles in the middle ear to the oval window of the cochlea, now produce fluid waves within it. The organ of Corti contains about 20,000 hair cells (neurons) running the length of the spiraled cochlea; when these cells are stimulated by the fluid waves, they bend and depolarize, sending impulses through the auditory nerve to the brain. Higher frequencies cause more motion at one end of the organ of Corti while lower frequencies cause more motion at the other end. The specific cochlear neurons that service specific hair cells along the organ of Corti respond to specific frequencies of sound which, when sent to the auditory cortex, are processed, integrated, and then interpreted as hearing. How the brain is able to perform this feat is as yet not fully understood.

Evolutionary biologists, using their well-developed imaginations, expound on how all the parts of the ear must have come together by chance and the laws of nature alone. However, as usual, they only try to explain how life looks and not how it actually works under the laws of nature to survive. Besides the development of all of its perfectly integrated parts, they never mention the problem the ear faces when it comes to transmitting the vibrations of the tympanic membrane to the organ of Corti with enough pressure to allow for adequate hearing to take place.

It is much easier to move through air than it is through water. That is because of water's higher density. This means that it is much easier for sound waves in the air to move from the eardrum through the middle ear than it is for the oval window to move waves of fluid through the cochlea. Without some sort of innovation, this difference in air/water density would have so reduced the amplitude of the fluid waves in the cochlea that the hearing ability of our earliest ancestors would have been severely compromised and with it, their survival capacity.

So, what novelty of engineering did our ears develop to let them transmit sound waves through the outer and middle ear to the cochlear fluid with enough amplitude to allow for adequate hearing? It is important to remember that F = PA: force equals pressure times area. This means that for a given force, the pressure on a surface is inversely related to its area. If the area decreases, the pressure on the surface increases, and if the area increases, the pressure decreases.

It just so happens that the surface area of the tympanic membrane is about twenty times larger than that of the oval window. This means that the pressure exerted on the cochlear fluid by the vibrations coming from the tympanic membrane through the ossicles to the oval window naturally increases about twentyfold. It was this mechanical advantage of their larger tympanic membranes transmitting vibrations through their ossicles to the smaller oval windows of their cochleae that allowed our earliest ancestors' ears to have adequate hearing so they could survive within the world of sound.
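The twentyfold figure follows directly from P = F/A. The membrane areas used below are approximate values of the kind quoted in physiology texts (roughly 55 mm² of effective eardrum area and about 3 mm² for the oval window), included here only to illustrate the calculation:

```python
# Pressure amplification from eardrum to oval window, via P = F / A.
# Membrane areas are approximate textbook values, for illustration only.

area_tympanic = 55.0   # mm^2, effective area of the tympanic membrane (approx.)
area_oval = 3.2        # mm^2, area of the oval window (approx.)

# Roughly the same force, delivered through the ossicles onto a much
# smaller area, multiplies the pressure on the cochlear fluid by the
# ratio of the two areas:
pressure_gain = area_tympanic / area_oval
print(f"pressure amplification: ~{pressure_gain:.0f}x")
```

With these values the area ratio alone gives roughly 17x; the lever action of the ossicles is commonly cited as adding a further small mechanical gain, which is why overall figures around twentyfold or slightly higher are often quoted.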

Evolutionary biologists seem to be completely ignorant of the fact that the parts used for hearing are not only irreducibly complex but, to have let our earliest ancestors hear well enough to survive, must also have had a natural survival capacity. When it comes to the laws of nature, real numbers have real consequences.


But besides the cochlea there is another very important sensory transducer within the inner ear. Next time we'll look at vestibular function and how it let our earliest ancestors stay balanced.

Nature's navigators in the dock for Design.

Search for a Search: Does Evolutionary Theory Help Explain Animal Navigation?
Evolution News & Views

The living world is filled with searches. Moths find their mates. Bacteria find food sources. Plant roots find nutrients in the soil. Illustra's film Living Waters includes incredible examples of search: dolphins finding prey with echolocation, salmon navigating to their breeding grounds with their exceptional sense of smell, and sea turtles making their way thousands of miles to distant feeding grounds and back home again using the earth's magnetic field.

The subject of search looms large in William Dembski's ID books No Free Lunch and Being as Communion. When you think about search for a moment, several factors imply intelligent design. The entity (whether living or programmed) has to have a goal. It has to receive cues from the environment and interpret them. And it has to be able to move toward its target accurately. Dembski demonstrates mathematically that no evolutionary algorithm is superior to blind search unless extra information is added from outside the system.
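A toy illustration of why unassisted search scales with the size of the space: for blind random guessing over N equally likely possibilities, the expected number of trials to hit a single target is N. The sketch below simply demonstrates that fact by simulation; it is not a model of any biological process, and the parameters are arbitrary:

```python
import random

def blind_search_trials(space_size, rng):
    """Guess uniformly at random (with replacement) until the target is hit."""
    target = 0  # which element is the target doesn't matter, by symmetry
    trials = 0
    while True:
        trials += 1
        if rng.randrange(space_size) == target:
            return trials

rng = random.Random(42)   # fixed seed for reproducibility
N = 100
runs = [blind_search_trials(N, rng) for _ in range(5000)]
mean_trials = sum(runs) / len(runs)
print(f"space size {N}: mean trials ≈ {mean_trials:.1f} (theory: {N})")
```

Doubling the space doubles the expected cost; any search that does reliably better than this is exploiting information about where the target is.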

In the Proceedings of the National Academy of Sciences this month, five scientists from Princeton and MIT encourage a multi-disciplinary effort to understand the natural search algorithms employed by living things.

The ability to navigate is a hallmark of living systems, from single cells to higher animals. Searching for targets, such as food or mates in particular, is one of the fundamental navigational tasks many organisms must execute to survive and reproduce. Here, we argue that a recent surge of studies of the proximate mechanisms that underlie search behavior offers a new opportunity to integrate the biophysics and neuroscience of sensory systems with ecological and evolutionary processes, closing a feedback loop that promises exciting new avenues of scientific exploration at the frontier of systems biology. [Emphasis added.]

Systems biology, a hot trend in science, as Steve Laufmann has explained on ID the Future, looks at an organism the way a systems engineer would. These scientists (two evolutionary biologists and three engineers) refer several times to human engineering as analogous to nature's search algorithms. Specifically, "search research" to an engineer (finding a target in a mess of noisy data) reveals many similarities with the searches animals perform. By studying animal search algorithms, in fact, we might even learn to improve our searches.

The fact that biological entities of many kinds must overcome what appear, at least on the surface, to be similar challenges in their search processes raises a question: Has evolution led these entities to solve their respective search problems in similar ways? Clearly the molecular and biomechanical mechanisms a bacterium uses to climb a chemical gradient are different from the neural processes a moth uses to search for a potential mate. But at a more abstract level, it is tempting to speculate that the two organisms have evolved strategies that share a set of properties that ensure effective search. This leads to our first question: Do the search strategies that different kinds of organisms have evolved share a common set of features? If the answer to this question is "yes," many other questions follow. For example, what are the selective pressures that lead to such convergent evolution? Do common features of search strategies reflect common features of search environments? Can shared features of search strategies inform the design of engineered searchers, for example, synthetic microswimmers for use in human health applications or searching robots?

The paper is an interesting read. The authors describe several examples of amazing search capabilities in the living world. Living things daily reach their targets with high precision despite numerous challenges. Incoming data is often noisy and dynamic, changing with each puff of wind or cross current. Signal gradients are often patchy, not uniform. Yet somehow, bacteria can climb a chemical gradient, insects can follow very dilute pheromones, and mice can locate grain in the dark. Even in our own bodies, immune cells follow invisible cue gradients to their targets. Everywhere, from signal molecules inside cells to whole populations of higher organisms, searches are constantly going on in the biosphere.

The engineering required for a successful search, whether natural or artificial, exemplifies optimization -- an intelligent design science. It's not enough to have sensitive detectors, for instance. If too sensitive, a detector can become saturated by a strong signal. Many animal senses have adaptation mechanisms that can quench strong signals when necessary, allowing detection over many orders of magnitude. This happens in the human ear; automatic gain control in the hair cells of the cochlea gives humans a trillion-to-one dynamic range. A recent paper showed that human eyes are capable of detecting single photons! Yet we can adjust to bright sunlight with the same detectors, thanks to the automatic iris and other adaptation mechanisms in the retinal neurons.
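A trillion-to-one range in sound intensity corresponds to 120 decibels, which matches the commonly quoted span from the threshold of hearing to the threshold of pain. The conversion is a one-liner:

```python
import math

# A power (intensity) ratio R corresponds to 10 * log10(R) decibels.
dynamic_range_ratio = 1e12   # trillion-to-one intensity range
db = 10 * math.log10(dynamic_range_ratio)
print(f"{dynamic_range_ratio:.0e} intensity ratio = {db:.0f} dB")
```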

In the PNAS paper, the authors describe additional trade-off challenges for natural search algorithms. For instance, should the organism go for the richest food source, if that will expose it to predators? Should populations with similar needs compete for resources, or divide them up? Each benefit incurs a cost. A well-designed search algorithm handles the trade-offs while maximizing the reward, even if the reward is less than ideal. Engineers have to solve similar optimization problems. They have a word for it: "satisficing" the need by reaching at least the minimum requirement. It's obvious that achieving the best solution to multiple competing goals in a dynamic, noisy environment is a huge challenge for both engineers and animals.

The otherwise insightful paper runs into problems when it tries to evolutionize search. They say, "We expect natural selection to drive the evolution of algorithms that yield high search performance, while balancing fitness costs, such as exposure to predation risk." Great expectations, but can they hold up to scrutiny?

The authors assume evolution instead of demonstrating it. They say that organisms "have evolved" strategies for searching. Because of the irreducible complexity of any system requiring sensors, detectors, interpreters and responders to pull off a successful search, this would amount to a miracle.

They appeal to "convergent evolution" to account for similar search algorithms in unrelated organisms. This multiplies the miracles required.

They speak of the environment as supplying "selective pressure" for organisms to evolve their algorithms. If the environment could pressure the formation of search algorithms, then rocks and winds would have them, too. The environment can influence the formation of a dust devil, but the whirlwind isn't searching for anything. The environment can make rocks fall and rivers flow in certain directions, but they don't care where they are going. It takes programming to find a target that has been specified in advance.

Most serious of all, the claim that natural selection can drive the evolution of search algorithms undermines itself. In a real sense, the scientists themselves are performing a search -- a search for a search. They want to search for a universal model to explain animal search algorithms. But if they themselves are products of natural selection, then they would have no way of arriving at their own target: namely, understanding the natural world and explaining how it emerged.

To see why their search is doomed, see Nancy Pearcey's article, "Why Evolutionary Theory Cannot Survive Itself." The authors in PNAS must apply their own explanation to themselves. But then it becomes a self-referential absurdity, because they would have to say that the environment pressured them to say what they said. Their explanation, furthermore, would have no necessary connection to truth -- only to survival. Remember Donald Hoffman's debunking of evolutionary epistemology? "According to evolution by natural selection, an organism that sees reality as it is will never be more fit than an organism of equal complexity that sees none of reality but is just tuned to fitness," he said. "Never." Consequently, the authors of the paper cannot be sure of anything, including their claim that natural selection drives the evolution of search algorithms.


What we can say is that every time we observe a search algorithm coming into being, whether the Google search engine or a class in orienteering, we know intelligence was involved. What we never see is a new search algorithm emerging from mindless natural causes. We therefore know of a vera causa -- a true cause -- that can explain highly successful search algorithms in nature.

Sunday 7 August 2016

File under "Well said" XXXII

Reading furnishes the mind only with materials of knowledge; it is thinking that makes what we read ours.
John Locke

On the supposed solution to the Cambrian mystery, or "It came from outer space"

To Create Cambrian Animals, Whack the Earth from Space
Evolution News & Views

It's surely not a coincidence that this season in science-journal publishing we've seen a variety of attempts to solve the enigma that Stephen Meyer describes in his new book, Darwin's Doubt. The problem, of course, is how to account for the geologically sudden eruption of complex new life forms in the Cambrian explosion. Meyer argues that the best explanation is intelligent design.

The orthodox materialist camp in mainstream science remains in full denial mode. They can't stomach the proposal of ID, but neither can they for the most part bring themselves to answer Meyer by name, or even admit there's a controversy on the subject. Charles Marshall, reviewing the book in Science, is the honorable exception. So we get what look like stealth responses to Meyer's book that claim to have figured out the Cambrian puzzle without telling you what the urgency for doing so really is, thus evading the task of responding to Meyer directly. (See David Klinghoffer's review of the reviewers of Darwin's Doubt, "A Taxonomy of Evasion.")

Probably the most hopeless solution so far ascribes some of the creative power to a blast in the ocean by a space impact. This supposedly helped "set the stage" for the rapid proliferation of new animal forms. When we examine the complexity of a single Cambrian fossil, though, such a notion, like the others on offer, leaves all the important questions unanswered.

To his credit, Grant M. Young, the author of the proposal, is somewhat modest in the way he formulates his idea. His paper in GSA Today is primarily concerned with looking for evidence of a "very large marine impact" prior to the Ediacaran Period that sent vast quantities of water and oxygen into the atmosphere, changed the obliquity of Earth's spin axis, and altered sea levels. The aftermath of that catastrophe, he speculates, played a role in the Cambrian explosion -- and a "crucial" one at that.

Attendant unprecedented environmental reorganization may have played a crucial role in the emergence of complex life forms. (Emphasis added.)

That's all Young had to say about it, but the suggestion was enough for NASA's Astrobiology Magazine to jump on it with a breathless headline: "Did a Huge Impact Lead to the Cambrian Explosion?" Author Johnny Bontemps catapulted that tease into the notion that "The ensuing environmental re-organization would have then set the stage for the emergence of complex life." Bontemps is correct about one thing:

These events marked the beginning of another drastic event known as the Cambrian explosion. Animal life on Earth suddenly blossomed, with all of the major groups of animals alive today making their first appearance.

Let's take a look at just one of the Cambrian animals, as seen in an exquisitely preserved new fossil from the Chengjiang strata in China, where so many beautiful fossils have been found (examples are shown in the Illustra film Darwin's Dilemma). The new fossil, Alalcomenaeus, described in Nature, was furnished with multiple claws like other Cambrian arthropods, but was so well preserved that its nervous system could be outlined in detail. Even though it is dated from the early Cambrian at 520 million years old, it already had the nerves of modern spiders. Co-author Nick Strausfeld explains:

"We now know that the megacheirans had central nervous systems very similar to today's horseshoe crabs and scorpions," said Strausfeld, the senior author of the study and a Regents' Professor in the UA's Department of Neuroscience. "This means the ancestors of spiders and their kin lived side by side with the ancestors of crustaceans in the Lower Cambrian."'

Though tiny (about an inch long), its nervous system must have been fairly advanced, because the elongated creature was capable of swimming or crawling or both. In addition to about a dozen body segments with jointed appendages, it had a "pair of long, scissor-like appendages attached to the head, most likely for grasping or sensory purposes." It also had two pairs of eyes.

Iron deposits selectively accumulated in the nerve cells, allowing the research team to reconstruct the highly organized brain and nervous system. After processing with CT scans and iron scans, "out popped this beautiful nervous system in startling detail."

Comparing the outline of the fossil nervous system to nervous systems of horseshoe crabs and scorpions left no doubt that 520-million-year-old Alalcomenaeus was a member of the chelicerates.

Specifically, the fossil shows the typical hallmarks of the brains found in scorpions and spiders: Three clusters of nerve cells known as ganglia fused together as a brain also fused with some of the animal's body ganglia. This differs from crustaceans where ganglia are further apart and connected by long nerves, like the rungs of a rope ladder.

Other diagnostic features include the forward position of the gut opening in the brain and the arrangement of optic centers outside and inside the brain supplied by two pairs of eyes, just like in horseshoe crabs.

Horseshoe crabs survive as "living fossils" to this day, as residents along the Atlantic coast know from the annual spawning swarms. This fossil resembles modern chelicerates, one of the largest subphyla of arthropods, including horseshoe crabs, scorpions, spiders, mites, harvestmen, and ticks. Live Science adds, "The discovery of a fossilized brain in the preserved remains of an extinct 'mega-clawed' creature has revealed an ancient nervous system that is remarkably similar to that of modern-day spiders and scorpions."

Since crustaceans and chelicerates have both been found in the early Cambrian, Darwinian evolutionists are forced to postulate an unknown ancestor further back in time: "They had to come from somewhere," Strausfeld remarks. "Now the search is on." That sounds like the same challenge Charles Darwin gave fossil hunters 154 years ago to find the ancestors of the Cambrian animals.

The difficulty? It requires many different tissue types and interconnected systems to operate a complex animal like Alalcomenaeus, with its body segments, eyes, claws, mouth parts, gut and nervous system with a brain, to say nothing of coordinating the developmental programs that build these systems from a single cell. That is the major problem that Stephen Meyer emphasizes in Darwin's Doubt: where does the information come from to build complex body plans with hierarchical levels of organization?


Slamming a space rock at the Earth is hardly a plausible source of information. Meyer has been answering in detail the most serious and scholarly critique of his book, by Charles Marshall, refuting Marshall's criticisms point by point. Meanwhile the proposed alternative explanations for the Cambrian event keep coming, bearing increasingly the marks of desperation.

When the original technologist holds court.

Intelligent Designs in Nature Make Engineers Envious
Evolution News & Views

We've reported numerous times about the vibrant field of biomimetics: the science of imitating nature. There are whole departments at universities dedicated to this. There are journals like Bioinspiration and Biomimetics, the Journal of Biomimetics, Biomaterials, and Tissue Engineering, and Frontiers in Bioengineering and Biotechnology that regularly report on it. Entrepreneurs have started companies to build products mimicking nature. Biomimetics is on a roll. Here are a few of scientists' latest attempts to copy nature's designs. They wouldn't try so hard if the designs weren't intelligent.

Flight on the Small Scale

A news item from the University of Alabama shows Dr. Amy Lang studiously gazing at a Monarch butterfly on the wing. She has reason to stay focused. She just got a $280,000 grant from the National Science Foundation to study the scales on butterfly wings to find ways to improve flight aerodynamics for MAVs (micro air vehicles).

Butterflies don't require the scales to fly, but Dr. Lang knows they help the insects fly better. "The butterfly scales are beautifully arranged on the wing, and how the scales are arranged is where the aerodynamic benefit comes in," she says. This "unique micro pattern ... reduces drag and likely increases thrust and lift during flapping and glided flight." When the scales are removed, the butterfly has to flap its wings 10 percent more to maintain the same flight.

If you've seen Metamorphosis: The Beauty and Design of Butterflies you may recall the striking electron micrographs of the tiny scales, each less than a tenth of a millimeter in width, arranged like shingles on a roof. According to Dr. Lang, there's a reason: "the scales stick up slightly, trapping a ball of air under the scale and allowing air to flow smoothly over it." Her team wants to understand the physics behind this design before trying to model it on artificial flyers.

The article assumes butterflies happened upon these "evolutionary adaptations" by blind, unguided processes: "The scales covering butterfly and moth wings represent about 190 million years of natural selection for insect flight efficiency." Metamorphosis refutes that notion, but what matters in the story is not evolution, but design -- here is a natural design that the NSF feels is worth at least $280,000 to try to imitate. (Dr. Lang also "works with shark scales" in her "bio-designed engineering" lab.)

It's a Bird; It's a Plane; It's Robo Raven

You met the nano-hummingbird in Illustra's film Flight: The Genius of Birds. Now here's Robo Raven, a flying drone built at the University of Maryland -- the first Micro Air Vehicle (MAV) to use flapping flight. We've noted this briefly before. A video clip shows how Robo Raven III uses sunlight from solar panels built into its wings to charge batteries.

Nature, as usual, does it better. The Robo Raven III can only gather about 30 watts -- an order of magnitude too low to stay aloft indefinitely, IEEE Spectrum says, pointing out that real ravens get "crazy high power density" from meat. On his blog, Professor S. K. Gupta of the University of Maryland design team compares the performance of the two, noting that his invention also mimics another natural technology -- solar energy collection by plants:

However, nature has a significant edge over engineered systems in other areas. For example, one gram of meat stores 20 times more energy than one gram of the current battery technology. So in terms of the energy density, we engineers have a lot of catching up to do. In nature, solar energy collection devices (e.g., trees) are not on-board ravens. Hence, ravens ultimately utilize a large collection area to gather energy into a highly dense storage source (e.g., meat), giving them a much longer range and better endurance than Robo Raven III. (Emphasis added.)

While Gupta notes that direct solar energy conversion to mechanical energy would be about an order of magnitude more efficient than an animal's metabolic pathway, "We still need to make significant improvements in solar cell efficiency and battery energy density to replicate the endurance of real ravens in Robo Raven III," he confesses. Real ravens also use that metabolism to perform many functions besides flapping flight -- including reproduction, navigation, and the operation of multiple senses. (Living birds can also fly at night.)
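The energy gap Gupta describes can be put in rough numbers. The sketch below is a back-of-envelope calculation, not from the article: the 30-watt solar figure and the 20-to-1 energy-density ratio are stated above, while the flight-power requirement, battery energy density, and battery mass are illustrative assumptions.

```python
# Back-of-envelope endurance comparison. Only the 30 W collected and the
# 20x meat-vs-battery ratio come from the article; the rest is assumed.

solar_collected_w = 30.0        # stated: Robo Raven III gathers ~30 W
flight_power_w = 300.0          # assumption: "order of magnitude" more needed

battery_density_j_per_g = 720.0                       # assumption: ~0.2 Wh/g Li-ion
meat_density_j_per_g = 20 * battery_density_j_per_g   # stated 20x ratio

battery_mass_g = 100.0          # assumption: a 100 g battery pack
net_drain_w = flight_power_w - solar_collected_w
endurance_s = battery_mass_g * battery_density_j_per_g / net_drain_w

print(f"Battery endurance: {endurance_s / 60:.0f} min")
print(f"Same mass at 'meat' density: {20 * endurance_s / 60:.0f} min")
```

Even on these generous assumptions the battery buys only minutes of flight, which is why Gupta points to both solar-cell efficiency and battery energy density as the bottlenecks.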

Short Takes

Solar power: "Inspired by nature: To maximise the efficiency of solar cells of the future, physicists are taking a leaf out of nature's book" (Cavendish Laboratory, University of Cambridge).
Robotics: "Amber 2 robot walks with a human gait." Why is that good? "People are able to walk so smoothly because of the seamless interaction between the muscles, bone, ligaments, etc. in the legs, ankles and feet ... Getting a robot to walk like us means not just building legs, ankles or feet like ours, it means programming them all to work together in a way that is graceful when the robot walks, and that appears to be where the Amber 2 team is headed" (PhysOrg reporting on work at Texas A&M).
Sonar: An engineer was watching a nature show and wondered why dolphins blew bubbles to trap fish, when it would seemingly mess up their sonar signals. He found that the dolphins use two click frequencies that allow them to distinguish between the bubbles and fish. This "inspired the development of a cheap, coin-sized radar gadget that can sense hidden electronics" (New Scientist, reporting on work at University of Southampton).

Does Darwin-Talk Add Value?

Occasionally, news stories like these attribute the designs in question to natural selection. "Through billions of years of evolution, life on Earth has found intricate solutions to many of the problems scientists are currently grappling with," the item from Cambridge says. But then, most of the story marvels at the intricate design that blind nature supposedly arrived at.

Biology has evolved phenomenally subtle systems to funnel light energy around and channel it to the right places. It has also become incredibly good at building tiny devices that work with high efficiency, and at replicating them millions of times.

Similarly, New Scientist ends its biodesign story with: "Evolution has once again sparked ideas for remarkable innovation."

The Darwin language gets to be as annoying as those pop-up ads on the Internet that have nothing to do with the story. The focus is on design -- "intricate solutions" so good, they occupy the best minds in the world's finest academic institutions; designs so attractive, they are worth six-figure government grants to imitate.


You wouldn't want to insult bioengineers with the suggestion they are mimicking blind, unguided processes in their work. No, from our uniform experience, a good design comes from a good mind.

Saturday 6 August 2016

On the morality of abortion.

The bumblebee in the dock for design.

Flight of the Bumblebee Reveals Optimization at Multiple Levels
Evolution News & Views


A biological revolution is underway. Technology now allows field biologists to track individual animals as small as insects, giving scientists, for the first time, real-time data on their lifetime behaviors. A team using this technology says in PLOS ONE:

Recent advances in animal-tracking technology have brought within reach the goal of tracking every movement of individual animals over their entire lifetimes. The potential of such life-long tracks to advance our understanding of animal behaviour has been compared to that of the advent of DNA sequencing, but the field is still in its infancy. [Emphasis added.]

If you saw Flight: The Genius of Birds you may remember how tiny geolocators allowed Carsten Egevang's team to monitor the pole-to-pole flight paths of Arctic terns from one year to the next. We've also reported on subsequent studies using geolocators on blackpoll warblers, frigate birds, and even giant flower beetles. Now, a team from Queen Mary University of London has attached radar antennas to bumblebees' heads, allowing them to monitor the entire lifetime flight behavior of these important pollinators.

The work adds to previous research that was more limited. Earlier teams used harmonic radar to track initial flights of bees. This is the first time that the technology was used to monitor the lifetime flights of four bumblebees (Bombus terrestris) over 6-15 days until contact was lost.

The studies described above revealed a great deal about the structure of exploratory and foraging flights, but opened up a number of key questions that are unanswered as yet. Does the change in flight structure from inexperienced to experienced bees occur gradually or as a sudden transition? When and how do bees discover the forage sources they go on to exploit? No prior study has been able to track the activity of individual insects throughout their entire life history, or even a significant portion of their life, making it impossible to address these questions.

You can see the headgear worn by the test bees in a summary on PhysOrg. There will always be some doubt about measurements obtained this way. Did the headgear alter the bees' normal behavior? Were they treated differently by other bees because they looked odd? The scientists acknowledged other limitations of the study: the observations were conducted sequentially, when different flowers were in bloom, and employed individuals from different colonies in different locations under different weather conditions. Nevertheless, they reached some tentative conclusions based on 15,000 minutes of data from 244 flights covering 180 kilometers.
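For scale, the stated totals imply these per-flight averages. They are simple means computed here for illustration, not figures reported by the study:

```python
# Per-flight averages implied by the totals quoted above.
total_minutes = 15_000   # stated: minutes of tracking data
n_flights = 244          # stated: number of flights recorded
total_km = 180           # stated: total distance covered

print(f"{total_minutes / n_flights:.0f} minutes per flight on average")
print(f"{total_km / n_flights:.2f} km per flight on average")
```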

Woodgate et al. were surprised to find more individuality than expected. "One of the most striking results to emerge from these data is the large degree to which our bees differed from one another," they write. The bees were not like little robots following a predetermined flight strategy. Each one divided its time differently between exploration of new food sources and exploitation of known food sources. One bee was a "lifelong vagabond," never settling down on any favorite patch. Another one, by contrast, quickly devoted most of its energy to patches with a high payoff. Overall, the pollinators seemed to balance their time between exploitation and exploration. It's a smart strategy, Woodgate says in the PhysOrg article:

"This study provided an unprecedented look at where the bees flew, how their behaviour changed as they gained experience and how they balanced the need to explore their surroundings - looking for good patches of flowers -- with the desire to collect as much food as possible from the places they had already discovered."

The bees made from 3 to 15 flights per day, depending on the distance to and quality of the resources. In general, exploratory flights occurred within the first few days; that is when the bees discovered most of the food sources they would return to most often. They would, however, make further exploratory flights at any time. The radar maps of their flight paths show extensive exploration of their surroundings in all directions, implying substantial brain power for memory, orientation, and strategy to navigate over large areas and still find their way home.

Future work with larger numbers of bees, monitored simultaneously, will undoubtedly add to knowledge about their behaviors. For now, it appears that bumblebee colonies are programmed to use optimization algorithms -- an indicator of intelligent design encoded in their brains. These algorithms work at both the individual and collective level. Variability leads some individuals to bring a lot of food back from reliable sources, and some to explore the environment for new and possibly better sources.

Although it is expected that randomly chosen individuals will tend to show variation in behaviour, the extent of the inter-individual differences we observed in flight behaviour is dramatic. These differences appear to persist over the bees' entire foraging career, and are likely to lead to high levels of variation in the contribution different foragers make to provisioning the colony.
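The balance between exploiting known patches and exploring for new ones is, in computer-science terms, a multi-armed bandit problem. The toy epsilon-greedy forager below is only an illustration of that trade-off, not the bees' actual algorithm; the patch payoffs, epsilon values, and trip counts are all invented.

```python
import random

def forage(epsilon, patch_payoffs, n_trips=200, seed=0):
    """Simulate one bee: with probability epsilon explore a random
    patch, otherwise exploit the best patch found so far."""
    rng = random.Random(seed)
    estimates = [1.0] * len(patch_payoffs)  # optimistic start: sample every patch
    visits = [0] * len(patch_payoffs)
    total = 0.0
    for _ in range(n_trips):
        if rng.random() < epsilon:
            patch = rng.randrange(len(patch_payoffs))        # explore
        else:
            patch = max(range(len(patch_payoffs)),
                        key=lambda i: estimates[i])          # exploit
        reward = rng.gauss(patch_payoffs[patch], 0.1)        # noisy nectar yield
        visits[patch] += 1
        estimates[patch] += (reward - estimates[patch]) / visits[patch]
        total += reward
    return total

patches = [0.2, 0.5, 0.9]                # hypothetical nectar yields per patch
vagabond = forage(epsilon=0.8, patch_payoffs=patches)    # keeps wandering
settler = forage(epsilon=0.05, patch_payoffs=patches)    # commits to the best patch
print(f"vagabond collected {vagabond:.1f}, settler collected {settler:.1f}")
```

A colony containing both strategies hedges against change: settlers maximize short-term intake from known patches, while vagabonds keep discovering new and possibly better sources.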

An Instrument View of Bee Optimization

We've looked at lifetime flight behavior of these amazing flyers. Another article explores the flight equipment in more detail. The journal eLife reports new findings about how honeybees (Apis mellifera) position their antennae during flight. Antennae are well known as olfactory and tactile sense organs. During flight, they perform additional roles as speedometers and odometers. Experiments in wind tunnels showed that honeybees will position their antennae forward or backward in flight to measure speed, distance and odor sources:

To investigate how honeybees use different types of sensory information to position their antennae during flight, Roy Khurana and Sane first placed freely-flying and tethered bees in a wind tunnel. Flying forward causes air to flow from the front to the back of the bee. The experiments revealed that a bee brings its antennae forward and holds them in a specific position that depends on the rate of airflow. As the bee flies forward more quickly (or airflow increases), the antennae are positioned further forward.

Roy Khurana and Sane then investigated how the movement of images across the insect's eyes causes their antennae to change position. This unexpectedly revealed that moving images across the eye from front to back, which simulates what bees see when flying forward, causes the bees to move their antennae backward. However, exposing the bees to both the frontal airflow and front-to-back image motion as normally experienced during forward flight caused the bees to maintain their antennae in a fixed position. This behaviour results from the opposing responses of the antennae to the two stimuli.

This appears to be another optimization problem solved by the bees, because the mechanosensory input from the antennae can override the visual sense:

When flying in unpredictable conditions, sensory cues from a single modality are often unreliable measures of the ambient environmental parameters. For instance, purely optic flow-based measurements of self-motion can be misleading for insects which experience sideslip while flying in a crosswind. Moreover, reliance on optic flow may be problematic under dimly lit or overcast conditions, or when flying over lakes or deserts which present sparse visual feedback. In such situations, sampling from multiple sensory cues reduces the ambiguity arising from variability in feedback from single modalities (Wehner, 2003; Sherman and Dickinson, 2004; Wasserman et al., 2015). Hence, the integration of multimodal sensory cues is essential for most natural locomotory behaviours, including insect flight manoeuvres (Willis and Arbas, 1991; Frye et al., 2003; Verspui and Gray, 2009).

The researchers found that antenna position is part of this "multimodal sensory integration" that maximizes useful information from multiple -- sometimes antagonistic -- sources. It's like the IFR-trained pilot who learns to trust his instruments instead of his eyes when the sensory data seem to conflict.
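A standard engineering analogue of this multimodal integration is inverse-variance weighting: each cue is weighted by its reliability, so a noisy cue (like optic flow over featureless water) contributes less to the final estimate. The readings and noise levels below are invented for illustration; this is not the bees' actual computation.

```python
# Minimal inverse-variance sensor fusion: combine noisy estimates of the
# same quantity, trusting each in proportion to its reliability.

def fuse(readings):
    """Combine (value, variance) pairs into one estimate.
    Less reliable cues (higher variance) get less weight."""
    weights = [1.0 / var for _, var in readings]
    total_w = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, readings)) / total_w
    variance = 1.0 / total_w        # fused estimate is more certain than either cue
    return value, variance

# Airflow (antennal) cue says 2.0 m/s and is trusted; optic flow says
# 3.0 m/s but is noisy, as when flying over sparse visual terrain.
speed, var = fuse([(2.0, 0.1), (3.0, 1.0)])
print(f"fused speed: {speed:.2f} m/s (variance {var:.3f})")
```

With the antennal cue ten times more reliable than the visual one, the fused estimate (about 2.09 m/s) stays close to the airflow reading -- the mechanosensory input effectively overrides vision, as the paper describes.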


Combined with bees' electrical sense, these pollinators of flowers and crops are pretty amazing little creatures. Neither paper explained how these abilities might have evolved. The second, on antenna positioning, mentions only the "evolutionary significance of its function," noting that flight is impaired when the mechanism is broken. Blind processes don't achieve such marvels.


Darwinism v. the real world XXXI

The Mystery of Vision
Howard Glicksman

Editor's note: Physicians have a special place among the thinkers who have elaborated the argument for intelligent design. Perhaps that's because, more than evolutionary biologists, they are familiar with the challenges of maintaining a functioning complex system, the human body. With that in mind, Evolution News is delighted to offer this series, "The Designed Body." For the complete series, see here. Dr. Glicksman practices palliative medicine for a hospice organization.


Everyone knows that an odometer measures distance and a speedometer measures velocity. But how do they do it? Each device is essentially a sensory transducer with a mechanism that enables it to sense a physical phenomenon and convert it into useful information. The body has sensory transducers as well that it uses to detect physical phenomena and know what is going on within and without. Vision is the sensation we experience when light within a very narrow range of frequencies, usually reflected off an object, enters our eyes.

Common sense teaches that without this special sense our earliest ancestors could never have survived. Evolutionary biologists claim that the presence of different light-sensitive organs in early life forms made it easy for chance and the laws of nature alone to bring about vision. But just like the development of various inventions and technologies, all human experience teaches that intelligent design is a much more plausible explanation. The position of Darwinists not only oversimplifies the development of the irreducibly complex eye, but also does not take into account how our brain converts what it receives from our eyes so that we experience vision.

Nobody, not even evolutionary biologists, truly understands this mystery. The fact that nobody understands it should make any scientist wary of claiming to know how the eye and vision came into being. Yet Darwinists rush in to do just that. Let's look at what makes up the eye, how it works, what the brain receives from it, and how it converts that information into the sensation we call sight.

The human eye is a very complex sensory organ in which many parts work together to focus light on its retina. Although it is in the retina where the nerve impulses for vision begin, the other parts of the eye play important roles that support and protect retinal function. The five different bones that make up the orbital cavity protect about two-thirds of the eyeball and provide the base for the origin tendons of the muscles responsible for eye movement. The eyelids and lashes protect the eye from exposure to too much light or dust, dirt, bacteria, and other foreign objects. A film of tears, consisting of oil, water, and mucus, is produced by the oil glands of the eyelids, the lacrimal gland, and the conjunctiva that overlies the sclera (the white outer protective coating of the eyeball). The tear film lubricates the eye, protects it from infection and injury, nourishes the surrounding tissue, and preserves a smooth surface to aid in light transmission.

The cornea is a transparent connective tissue that protects the front of the eye while allowing light to enter. The cornea is transparent because it lacks blood vessels (avascular), instead receiving oxygen, water, and nutrients from two sources. One is the tears that constantly wash across it by the blinking eyelids, and the other is the clear fluid (aqueous humor) within the anterior chamber that sits behind the cornea and in front of the lens. Light rays that reflect from an object more than twenty feet away enter parallel to each other and must be bent (refracted) to focus them on the areas of the retina for central vision (the macula) and sharp vision (the fovea). The cornea's curvature plays a major role in focusing the light that enters the eye onto the retina.

The lens is a transparent, elastic biconvex structure that is kept in place by suspensory ligaments. Like the cornea, it is avascular and obtains its oxygen, water, and nutrients from the aqueous humor in the anterior chamber. As noted above, light rays from a distance (greater than twenty feet) enter the eye in parallel, whereas those from nearby (generally less than twenty feet away) spread out. To focus the light on the macula and fovea, this diverging light must be further refracted and the biconvex curvature of the lens accomplishes this task. Since what the eye focuses on close-up is always changing, the curvature of the lens can be reflexively adjusted (accommodation) so that the light rays will strike the retina in the area for sharp vision.

The choroid is the layer of tissue located between the sclera and the retina and provides the circulation to the back of the eye. The choroid also contains the retinal pigmented epithelium, which sits behind the retina and absorbs light. This prevents light from reflecting back on the photoreceptors and causing visual blurring. The extension of the choroid in the front of the eye is the colored iris, consisting of two different muscles that control the amount of light that enters through its opening (pupil).

Finally, the thick, transparent, and gelatinous substance that forms and shapes the eyeball is the vitreous. It is able to compress and return to its natural position, allowing the eyeball to withstand most physical stresses without serious injury.

Each eye has about one hundred twenty million rods arranged throughout the retina. The rods contain a photopigment called rhodopsin which is very sensitive to all the wavelengths of the visible light spectrum. In contrast, there are only about six million cones that are mostly concentrated in the macula, primarily in the cone-only fovea. Each cone contains one of three different photosensitive pigments, called photopsins, which tend to react more strongly to either the red, green, or blue wavelengths of light. Both rhodopsin and the photopsins are dependent on Vitamin A.

When photons of light strike the retina they interact with the photoreceptor cells and cause an electrical change and the release of a neurotransmitter. Messages are passed through interconnecting neurons within the retina. These retinal interneurons process the information and send the resulting nerve signals along the optic nerve to the brain. About eighty percent of the optic nerve impulses travel to neurons within the brain. These pass on the sensory information to the visual cortex in the occipital lobes. However, the remaining twenty percent veer off and provide sensory data to the neurons in the brainstem that service muscles that help the eye to function better and provide protection.

For example, if you enter a dark room, the dilating muscle of the iris immediately contracts, causing the pupil to enlarge. This lets more light into the eye to help improve vision. But if you shine a bright light into your eye, the contracting muscle of the iris instantly goes into action, causing the pupil to diminish in size to protect the retina from too much light. This is called the pupillary light reflex, which is often used by physicians to determine the presence of brainstem function.

In considering the nature of the sensory data being presented from the eyes to the visual cortex, several points must be kept in mind. First, the use of the cornea and lens to refract and focus light on the retina results in a reversed and upside-down image. This means that what appears in the right upper half of the visual field is detected by the left lower half of the retina, what appears in the left lower half of the visual field is detected by the right upper half of the retina, and so on. Second, comparing the two eyes shows an overlap in their nasal visual fields (the right half of the left eye's field and the left half of the right eye's field). This overlap provides the visual cortex with two slightly different perspectives on the same region and allows for depth perception.

Finally, impulses sent along each optic nerve split up on their way to the brain. The messages from the nasal half of each retina cross over from right to left and from left to right through what is called the optic chiasm. However, the impulses from the temporal half of each retina (the left half of the left eye and the right half of the right eye) stay on the same side. This means that everything seen by the right half of each eye (the nasal field of the left eye and the temporal field of the right eye) goes to the left occipital lobe, and everything seen by the left half of each eye (the nasal field of the right eye and the temporal field of the left eye) goes to the right occipital lobe. Our brain then takes this upside-down, turned-around, split-up, and overlapping collection of photon-generated nerve impulses and provides us with what we experience as vision. How it is able to accomplish this feat remains entirely unknown.
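The optic-chiasm routing described above can be restated as a simple lookup table. This is only a restatement of the mapping in the text, keyed by which half of each eye's visual field the light comes from:

```python
# Which occipital lobe receives each half of each eye's visual field,
# per the crossing rules above: nasal retinal fibers cross at the optic
# chiasm, temporal fibers stay on the same side.
routing = {
    ("left eye", "nasal field (right half)"):     "left occipital lobe",
    ("right eye", "temporal field (right half)"): "left occipital lobe",
    ("right eye", "nasal field (left half)"):     "right occipital lobe",
    ("left eye", "temporal field (left half)"):   "right occipital lobe",
}

# Everything seen on the right goes to the left lobe, and vice versa:
for (eye, field), lobe in routing.items():
    print(f"{eye}, {field} -> {lobe}")
```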

If you have ever used a magnifying glass to focus light onto a paper to make it burn, then you know that the refractive power of a lens is dependent on its degree of curvature, which is inversely related to the distance it takes to bring the light together at a focal point. The higher the refractive power, the shorter the focal distance, and vice versa. The eye is dependent on the combined refractive power of the cornea and the lens (58 diopters) to focus light onto the area of the retina for sharp vision. And as luck would have it, the distance from the cornea to the retina (23 mm) is exactly what it should be to get the job done. What do you know?
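The match between 58 diopters and 23 mm can be checked with the textbook "reduced eye" approximation, in which the focal distance inside the eye is f = n / P. The interior refractive index of about 1.336 is an assumption added here; only the 58 D and 23 mm figures come from the text.

```python
# Reduced-eye check of the figures above: f = n / P, with f the focal
# distance inside the eye, n the interior refractive index (assumed
# ~1.336), and P the combined power of cornea + lens in diopters.
n_interior = 1.336
power_d = 58.0          # stated combined refractive power
eye_length_mm = 23.0    # stated cornea-to-retina distance

focal_mm = 1000 * n_interior / power_d
print(f"focal distance at 58 D: {focal_mm:.1f} mm")  # ~23.0 mm: on the retina

# A four percent increase in power shortens the focus, leaving the
# image in front of the retina (myopia):
myopic_mm = 1000 * n_interior / (power_d * 1.04)
print(f"at +4% power: {myopic_mm:.1f} mm, "
      f"{eye_length_mm - myopic_mm:.1f} mm short of the retina")
```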

For our earliest ancestors to have been able to safely find food and water and properly prepare and handle it for ingestion would have required normal distance and near vision. Eye doctors know that about a four percent increase in the combined refractive power of the cornea and lens (or a comparable lengthening of the eyeball) results in severe myopia (not being able to see the big E on the eye chart clearly), and a twenty-five percent decrease in that power leads to difficulties with both distance and near vision.

When evolutionary biologists talk about vision, not only do they leave out that it is irreducibly complex (all of the parts of the eye and the brain are needed for proper function), but also that it demonstrates natural survival capacity: the combined refractive power of the cornea and lens, and the lens's ability to adjust to close-up objects, precisely match the diameter of the eyeball. Remember, when it comes to life and the laws of nature, real numbers have real consequences. Without the right refractive power or eyeball diameter, our earliest ancestors would have been as blind as bats.


But in that case, as some people mistakenly argue, evolution would have just made them develop sonar instead, because that would have been what they needed to survive. Next time we'll look at hearing.