
Saturday 13 August 2016

Solution to Cambrian mystery: Just add yeast.

Cambrian Explosion Explained by Yeast Clumping Together

David Klinghoffer


We've made sort of a hobby of collating and dissecting theories of how an explosion of complex novel life forms, the Cambrian explosion, can be explained without reference to the obvious explanation, intelligent design.

It's almost too much to keep up with -- and the list keeps growing.

Now comes the yeast theory. From New Scientist:

Just a few generations after evolving multicellularity, lab yeasts have already settled into at least two distinct lifestyles.

The discovery suggests that organisms can swiftly fill new niches opened up by evolutionary innovations, just as the first multicellular animals appear to have done on Earth, hundreds of millions of years ago.

In the lab, yeast cells clumped together, forming larger and smaller "snowflakes."

In short, large and small yeast morphs specialise in different settling strategies, so both can coexist.

These two distinct ecological strategies appeared almost immediately once the multicellular yeasts themselves evolved, notes Travisano.

This provides experimental proof that when evolution makes a great leap forward -- such as the origin of multicellularity -- organisms can diversify rapidly to take advantage of the change.

Many years ago, palaeontologist Stephen Jay Gould suggested that a similar sudden ecological diversification may have led to the Cambrian Explosion in which most animal body forms arose in the fossil record within a few tens of millions of years.

"Possibly what we see here is the first step of what Gould's talking about -- the opening up of diversity due to a key innovation," says Travisano.

Yeast cells clump together. Ergo trilobites.

"Possibly," given a "few tens of millions of years," this could represent a "first step" toward massive diversification.

Ann Gauger has written here about not entirely dissimilar speculations about the development of multicellularity, with Volvox rather than yeast as the illustration. "Saying that something might have happened," she observes, "is not the same as showing that it actually could happen."

"A Simple Transition to Multicellularity -- Not!"

A simple transition from clumping yeast to a menagerie of beasts leaves even more "white space," as Dr. Gauger politely puts it, to fill in with needed details.

"The White Space in Evolutionary Thinking"

The white space in the yeast theory is blinding. It's a blizzard of white, obscuring all vision.

The truth is that evolutionists have no idea what produced the Cambrian explosion. Yet, knowing this is an almost immeasurably vast defect in the armature of their theory, they keep throwing speculations at it in the hope that something will stick, or clump.

The solution, though, is right before their eyes, or anyway, your eyes:

Have Darwinian theories on human evolution run out of time?

The Origin of Man and the "Waiting Time" Problem
John Sanford 

Editor's note: We are pleased to welcome a contribution from Dr. Sanford, who is Courtesy Associate Professor, School of Integrative Plant Science, Cornell University.

My colleagues and I recently published a paper in Theoretical Biology and Medical Modeling, "The Waiting Time Problem in a Model Hominin Population." It is one of the journal's "highly accessed" articles. A pre-human hominin population of roughly 10,000 individuals is thought to have evolved into modern man during a period of less than six million years. This would have required the establishment of a great deal of new biological information. That means, minimally, millions of specific beneficial mutations, and a large number of specific beneficial sets of mutations, selectively fixed in this very short period of time. We show that there is simply not enough time for this type of evolution to have occurred in the population from which we supposedly arose.

Historically, Darwin-defenders have argued that time is on their side. They have claimed that given enough time, any evolutionary scenario is feasible. They have consistently argued that given millions of years, very large amounts of new biologically meaningful information can arise by the Darwinian process of mutation/selection. However, careful analysis of what is required to establish even a single genetic "word" (a short functional string of genetic letters) within a hominin genome shows just the opposite. Even given tens of millions of years, there is not enough time to generate the genetic equivalent of the simplest "word" (two or more nucleotides). Even in a hundred billion years, much longer than the age of the universe, there is not enough time to establish the genetic equivalent of a very simple "sentence" (ten or more nucleotides). This problem is so fundamental that it justifies a complete re-assessment of the basic Darwinian mechanism.

In my book Genetic Entropy, I have previously outlined the waiting time problem (for example, see the 2014 edition, Chapter 9, pp. 133-136). My calculations there, and calculations published by others (Behe, Snoke, Axe, Gauger et al.), all demonstrate the same basic problem. (For a complete literature review, see the link to our new paper given above.) What this new paper provides is an independent validation, by a totally different method, of the previous works done by Behe, others, and myself.

In our paper we examine the waiting time problem in a new way, employing state-of-the-art, comprehensive, numerical simulations to empirically document the time required to create a specific string of mutations. This method is an alternative to employing mathematical approximations, and is preferable for various reasons outlined in the paper. Our empirical experiments realistically enacted the establishment of short genetic sequences within biologically realistic virtual hominin populations. These experiments demonstrate the limits of the classic neo-Darwinian mechanism in a clearer and more compelling way. Of special significance, we show that as genetic "word size" increases linearly, waiting time increases exponentially (see Table 2 in the new publication).

The waiting time problem has four basic elements. First, in a small population it takes a very long time for any specific nucleotide (genetic letter) to mutate into a specific alternate nucleotide. Second, it takes vastly more time for a given string of nucleotides to mutate into a specific alternative string of nucleotides (as is required to create a new beneficial genetic "word"). Third, any specific new word that arises is quickly lost due to genetic drift, and so must arise many times before it "catches hold" within the population. And fourth, even when the new word catches hold, it takes additional time for natural selection to amplify the new beneficial mutation to the point of fixation within the population.
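To see how these four elements interact, here is a minimal toy sketch in Python -- round, assumed parameters, and emphatically not the simulation software used in the paper -- that waits for one specific beneficial mutation to arise, survive drift, and reach fixation:

```python
import random

# Toy sketch of the waiting time for ONE specific beneficial mutation.
# All parameters are illustrative assumptions, not the paper's settings.
N = 1_000     # diploid population size (the paper models ~10,000)
MU = 1e-8     # per-site, per-generation mutation rate (assumed)
S = 0.10      # the "grossly overgenerous" 10 percent fitness reward

def generations_until_fixation(seed=0):
    rng = random.Random(seed)
    copies = 0                 # copies of the target base among 2N alleles
    gen = 0
    while True:
        gen += 1
        if copies == 0:
            # Element 1: wait for the specific mutation to (re)appear.
            # P(at least one new mutant per generation) ~= 2N * mu.
            if rng.random() < 2 * N * MU:
                copies = 1
            continue
        # Elements 3 and 4: drift usually loses the variant;
        # selection must amplify the survivors all the way to fixation.
        p = copies / (2 * N)
        p_next = p * (1 + S) / (1 + p * S)    # selection-weighted frequency
        copies = sum(rng.random() < p_next for _ in range(2 * N))
        if copies == 2 * N:
            return gen         # finally fixed

# Element 2 is what makes the problem explode: requiring a specific
# multi-letter string, rather than one letter, multiplies the wait.
```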

Our paper shows that the waiting time problem cannot honestly be ignored. Even given best-case scenarios, using parameter settings that are grossly overgenerous (for example, rewarding a given string by increasing total fitness 10 percent), waiting times are consistently prohibitive. This holds even for the shortest possible words. Establishment of just a two-letter word (two specific mutations within a hominin population of ten thousand) requires at least 84 million years. A three-letter word requires at least 376 million years. A six-letter word requires over 4 billion years. An eight-letter word requires over 18 billion years (again, see Table 2 in the paper). The waiting time problem is so profound that even given the most generous feasible timeframes, evolution fails. The mutation/selection process completely fails to reproducibly and systematically create meaningful strings of genetic letters in a pre-human population.

Other authors have published on the waiting time problem and they have consistently acknowledged its reality, but some have then tried to minimize the problem. In those cases, the authors have first shown the waiting time problem is serious, but then go on to invoke very special, atypical conditions, seeking to reduce waiting times as much as possible. This is evidently in the hope of saving neo-Darwinian theory. But when these "special conditions" are carefully examined, in every case they are far-fetched and ad hoc.

When the dismissive authors use the same formulation of the problem as we used in our paper, they see the same prohibitive waiting times (see our paper's discussion). For example, Durrett and Schmidt (2007) model a human population of 10,000, just as we do. They show that for a specific set of eight required mutations (which must arise in the context of a specific genomic location), the waiting time is 650 million years. But most readers will miss the fact that this is just their estimated time to the "first instance" of the string. Elsewhere in their paper they acknowledge that the establishment and fixation of the specific set of mutations would take 100 times longer than the first instance (when they assume a 1 percent fitness reward). This would be 65 billion years! Using the same parameter settings (and applying a 1 percent fitness reward), our own experiments give waiting times of the same magnitude. Likewise, when Lynch and Abegg (2010) specify a population of 10,000, and when two specific mutations are required, they get waiting times exceeding 10 million generations (see their Figure 1). Assuming twenty years per generation for a human population, this is more than 200 million years (see our paper's discussion).
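The arithmetic behind those two figures is easy to verify from the round numbers quoted above:

```python
# Durrett and Schmidt (2007): ~650 million years to the FIRST INSTANCE of
# the eight-mutation set; establishment/fixation is ~100x longer at a
# 1 percent fitness reward.
first_instance_years = 650e6
print(first_instance_years * 100 / 1e9)   # -> 65.0 (billion years)

# Lynch and Abegg (2010): >10 million generations for two specific
# mutations in a population of 10,000, at ~20 years per generation.
print(10e6 * 20 / 1e6)                    # -> 200.0 (million years)
```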

What will the primary counterargument be to the waiting time problem? The primary objection is, and will continue to be, as follows. Within a small population, a given string of letters cannot arise in a specific location without a prohibitive waiting time, yet somewhere else in the genome good things might still be happening. For example, if one is waiting for the sequence ATCG to be fixed in a specific genomic location, it will require very deep time, but it will take no time at all if one is waiting for ATCG to arise anywhere in the genome. Indeed, many copies of ATCG are already in the genome. This argument has three problems.

First, it ignores context. The sequence ATCG by itself is not useful information. It can never be beneficial (and hence selectable), except in a very specific context. Consider randomly changing one word in an encyclopedia -- will it consistently improve the text, regardless of where the change is made? All information is context-dependent. For example, if you have an executable computer program, inserting a certain random string of binary digits could conceivably improve the program's information content. But in such a very unlikely case, it would only be beneficial within an extremely specific context (location). When inserted out of context, the same string would almost certainly be deleterious.

Second, when we broaden our view to include the whole genome, we have to consider the problem of net loss of information, due to a multitude of nearly neutral deleterious mutations that are happening throughout the genome. Random mutation will cause ubiquitous genetic damage (especially in deep time), which will greatly overshadow the few rare strings that might arise in just the right context and might be sufficiently beneficial to be selectable.

Third, invoking "good things that might be happening in other parts of the genome" is essentially sleight of hand. Other potentially beneficial sets of mutations in other parts of the genome will each have their own waiting time problem. This is not a reasonable explanation for the origin of the massive amount of integrated biological information that is required to change an ape into a man (i.e., millions of complementary nucleotide substitutions established and fixed within the source hominin genome, in very little time).
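Incidentally, the "many copies of ATCG" premise in that objection is easy to quantify. A rough sketch, assuming an idealized genome of uniformly random letters (an illustration only, not a claim about real genome composition):

```python
# Expected chance occurrences of a fixed 4-letter motif in a human-sized
# genome, treating each position as an independent draw from {A, T, C, G}.
genome_length = 3.0e9           # ~bases in the human genome
p_motif = (1 / 4) ** 4          # chance a given position starts "ATCG"
print(genome_length * p_motif)  # ~11.7 million expected copies -- and, per
                                # the first problem above, none of them is
                                # useful information outside its context
```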


Given that higher genomes must continuously accumulate deleterious mutations (as I show in Genetic Entropy), and given that beneficial mutations are very rare (as shown by the famous Lenski LTEE project, and also as shown in Genetic Entropy), and given that evolution cannot create meaningful genetic words (even given deep time), it seems that neo-Darwinian theory is coming undone on every level.

Darwinism vs. the real world, XXXII.

Can You Hear Me Now? Good, Then Thank Your Irreducibly Complex Ears.
Howard Glicksman 

Editor's note: Physicians have a special place among the thinkers who have elaborated the argument for intelligent design. Perhaps that's because, more than evolutionary biologists, they are familiar with the challenges of maintaining a functioning complex system, the human body. With that in mind, Evolution News is delighted to offer this series, "The Designed Body." For the complete series, see here. Dr. Glicksman practices palliative medicine for a hospice organization.

A thermometer measures temperature and a barometer measures air pressure. But how do they do it? Each device is essentially a sensory transducer with a mechanism that enables it to sense a physical phenomenon and convert it into useful information. The devices the body uses to detect physical phenomena so it knows what is going on outside and inside of it are sensory transducers as well. Hearing is the sensation we experience when vibrating molecules within a medium, typically air (but sometimes water), form mechanical waves within a specific wavelength range and enter our ears.

Common sense tells us that without this special sense our earliest ancestors could never have survived. Evolutionary biologists claim that similar auditory mechanisms in other life forms prove that it was easy for chance and the laws of nature alone to invent hearing. But as in the development of human technologies, experience teaches that intelligent design is a more plausible explanation. Darwinists gloss over the need for all the parts of the ear to be present before it can hear well enough for human survival. They also fail to take into account how our brain converts what it receives into what we experience as hearing.

In truth, hearing is a mystery that nobody, not even evolutionary biologists, fully understands. Since nobody really understands how we hear, nobody should claim to understand how the ear and hearing came into being. Yet that doesn't stop Darwinists from telling us otherwise. Let's look at what makes up the ear, how it works, and what the brain receives from it, which it then converts into the sensation we call hearing.

Sound waves are oscillations, the back and forth movement of molecules within a medium, such as air. These vibrations are transmitted to adjacent molecules and spread out in all directions. Sound is not due to the linear movement of air -- that is called wind. Furthermore, since a vacuum contains no molecules to vibrate, it cannot transmit sound. The physical nature of sound waves is that the air particles alternate between being packed together in areas of high concentration, called compressions, and spread apart in areas of low concentration, called rarefactions. These compressions and rarefactions of air molecules form longitudinal pressure waves which, depending on the type of sound and the energy used to create them, have amplitude, wavelength, and frequency. Sound waves travel at about 330 m/s; light travels at 300,000 km/s, roughly 900,000 times faster -- about a million times.
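A quick check of that ratio from the figures just quoted:

```python
speed_of_light = 3.0e8    # m/s (300,000 km/s)
speed_of_sound = 330.0    # m/s in air
print(speed_of_light / speed_of_sound)   # ~909,000 -- "about a million"
```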

The human ear is a very complex sensory organ in which all of its parts work together to produce and transmit mechanical waves of oscillating molecules to its cochlea. Although it is in the cochlea where the nerve impulses for hearing begin, the other parts of the ear play important roles that support cochlear function. The ear can be divided into three regions: the outer (external) ear, the middle ear, and the inner (internal) ear.

The outer ear consists of the pinna (ear flap), the ear canal, and the eardrum (tympanic membrane). The pinna acts like a satellite dish, collecting sound waves and funneling them down the ear canal to the eardrum. The pinna is made of flexible cartilage and is important for determining the location of different sounds. The ear canal produces wax which provides lubrication while at the same time protecting the eardrum from dust, dirt, and invading microbes and insects. The cells that line the ear canal form near the eardrum and naturally migrate outward toward the entrance of the ear canal, taking with them the overlying ear wax, and are shed from the ear. This provides a natural mechanism of wax removal. Sound waves enter through an opening in the skull called the external auditory meatus. They naturally move down the ear canal and strike the eardrum. The eardrum is a very thin cone-shaped membrane which responds to sound waves by vibrating to a degree that is determined by their amplitude, wavelength, and frequency. It represents the end of the outer ear and the beginning of the middle ear.

The middle ear is an enclosed air-filled chamber in which the air pressure on either side of the eardrum must be equal to allow for adequate compliance, a measure of how easily the eardrum will move when stimulated by sound waves. The air in the middle ear tends to be absorbed by the surrounding tissue which, if not corrected, can lead to a vacuum effect, reduced eardrum compliance, and thus impaired hearing. The auditory tube in the middle ear connects with the back of the nose and pharynx. The muscular action of swallowing, yawning, or chewing causes the auditory tube to open, allowing ambient air to enter the middle ear, replacing what has been absorbed and equalizing the air pressure on both sides of the eardrum. Anyone who has flown in an airplane has experienced this vacuum effect as the plane descended and felt its resolution when a popping sound in the ear signified that air had entered the middle ear through the auditory tube.

The middle ear contains the three smallest bones in the body, the ossicles, which include the malleus (hammer), the incus (anvil), and the stapes (stirrup). The job of the ossicles is to efficiently transmit the vibrations of the eardrum into the inner ear which houses the cochlea. This is accomplished by the malleus being attached to the eardrum and the incus, the incus to the malleus and the stapes, and the stapes to the incus and the oval window of the cochlea.

The cochlea consists of three fluid-filled interrelated coiled chambers which spiral together for about two and a half turns, resembling a snail shell. Within the cochlea is the organ of Corti, the sensory receptor that converts the mechanical waves into nerve impulses. The vibrations, started by sound waves striking the eardrum and transmitted by the ossicles in the middle ear to the oval window of the cochlea, now produce fluid waves within it. The organ of Corti contains about 20,000 hair cells (sensory receptor cells) running the length of the spiraled cochlea; when stimulated by these fluid waves, they bend and depolarize, sending impulses through the auditory nerve to the brain. Higher frequencies cause more motion at one end of the organ of Corti while lower frequencies cause more motion at the other end. The specific cochlear neurons that service specific hair cells along the organ of Corti respond to specific frequencies of sound which, when sent to the auditory cortex, are processed, integrated, and then interpreted as hearing. How the brain is able to perform this feat is as yet not fully understood.

Evolutionary biologists, using their well-developed imaginations, expound on how all the parts of the ear must have come together by chance and the laws of nature alone. However, as usual, they only try to explain how life looks and not how it actually works under the laws of nature to survive. Besides the development of all of its perfectly integrated parts, they never mention the problem the ear faces when it comes to transmitting the vibrations of the tympanic membrane to the organ of Corti with enough pressure to allow for adequate hearing to take place.

It is much easier to move through air than it is through water. That is because of water's higher density. This means that it is much easier for sound waves in the air to move from the eardrum through the middle ear than it is for the oval window to move waves of fluid through the cochlea. Without some sort of innovation, this difference in air/water density would have so reduced the amplitude of the fluid waves in the cochlea that the hearing ability of our earliest ancestors would have been severely compromised and with it, their survival capacity.

So, what novelty of engineering did our ears develop to let them transmit sound waves through the outer and middle ear to the cochlear fluid with enough amplitude to allow for adequate hearing? It is important to remember that F = PA: force equals pressure times area. This means that with a given force, the pressure on a given surface is inversely related to its area. If the area decreases, the pressure on the surface increases, and if the area increases, the pressure decreases.

It just so happens that the surface area of the tympanic membrane is about twenty times larger than that of the oval window. This means that the force generated by the vibrations coming from the tympanic membrane through the ossicles to the oval window naturally exerts twenty times more pressure on the cochlear fluid. It was this mechanical advantage -- the larger tympanic membrane transmitting vibrations through the ossicles to the much smaller oval window of the cochlea -- that allowed our earliest ancestors' ears to have adequate hearing so they could survive within the world of sound.
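Here is that F = PA bookkeeping in miniature, taking the twentyfold area ratio in the text at face value:

```python
# F = P * A, so for the same force, P = F / A: squeezing the same force
# onto one-twentieth of the area raises the pressure twentyfold.
area_ratio = 20.0               # eardrum area / oval-window area (per the text)
force = 1.0                     # arbitrary units, carried through the ossicles
p_eardrum = force / area_ratio  # pressure spread over the large eardrum
p_window = force / 1.0          # same force concentrated on the small window
print(p_window / p_eardrum)     # -> 20.0: the twentyfold amplification
```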

Evolutionary biologists seem to be completely ignorant of the fact that the parts used for hearing are not only irreducibly complex, but must also have had a natural survival capacity: they had to function well enough for our earliest ancestors to hear and survive. When it comes to the laws of nature, real numbers have real consequences.


But besides the cochlea there is another very important sensory transducer within the inner ear. Next time we'll look at vestibular function and how it let our earliest ancestors stay balanced.

Nature's navigators in the dock for Design.

Search for a Search: Does Evolutionary Theory Help Explain Animal Navigation?
Evolution News & Views

The living world is filled with searches. Moths find their mates. Bacteria find food sources. Plant roots find nutrients in the soil. Illustra's film Living Waters includes incredible examples of search: dolphins finding prey with echolocation, salmon navigating to their breeding grounds with their exceptional sense of smell, and sea turtles making their way thousands of miles to distant feeding grounds and back home again using the earth's magnetic field.

The subject of search looms large in William Dembski's ID books No Free Lunch and Being as Communion. When you think about search for a moment, several factors imply intelligent design. The entity (whether living or programmed) has to have a goal. It has to receive cues from the environment and interpret them. And it has to be able to move toward its target accurately. Dembski demonstrates mathematically that no evolutionary algorithm is superior to blind search unless extra information is added from outside the system.
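A toy illustration of that last point (a sketch in the spirit of the argument, not Dembski's formal result): compare blind guessing of a target string with a search that is handed extra information -- a cue revealing which positions already match. The target name and parameters here are hypothetical.

```python
import random
import string

TARGET = "TRILOBITE"             # hypothetical target, for illustration only
ALPHABET = string.ascii_uppercase
rng = random.Random(1)

def blind_search(max_tries=100_000):
    # No information beyond the target's length: pure random guessing.
    for tries in range(1, max_tries + 1):
        guess = "".join(rng.choice(ALPHABET) for _ in TARGET)
        if guess == TARGET:
            return tries
    return None                  # 26**9 possibilities: effectively hopeless

def informed_search():
    # Same task, but the searcher is told which positions already match.
    guess = [rng.choice(ALPHABET) for _ in TARGET]
    tries = 0
    while "".join(guess) != TARGET:
        tries += 1
        i = rng.randrange(len(TARGET))
        if guess[i] != TARGET[i]:          # the externally supplied cue
            guess[i] = rng.choice(ALPHABET)
    return tries

print(blind_search())      # None: blind search fails within 100,000 tries
print(informed_search())   # typically a few hundred to a few thousand tries
```

The dramatic speedup comes entirely from the match cue -- information about the target injected from outside the search itself -- which is just the "extra information" Dembski's argument points to.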

In the Proceedings of the National Academy of Sciences this month, five scientists from Princeton and MIT encourage a multi-disciplinary effort to understand the natural search algorithms employed by living things.

The ability to navigate is a hallmark of living systems, from single cells to higher animals. Searching for targets, such as food or mates in particular, is one of the fundamental navigational tasks many organisms must execute to survive and reproduce. Here, we argue that a recent surge of studies of the proximate mechanisms that underlie search behavior offers a new opportunity to integrate the biophysics and neuroscience of sensory systems with ecological and evolutionary processes, closing a feedback loop that promises exciting new avenues of scientific exploration at the frontier of systems biology. [Emphasis added.]
Systems biology, a hot trend in science, as Steve Laufmann has explained on ID the Future, looks at an organism the way a systems engineer would. These scientists (two evolutionary biologists and three engineers) refer several times to human engineering as analogous to nature's search algorithms. Specifically, "search research" to an engineer (finding a target in a mess of noisy data) reveals many similarities with the searches animals perform. By studying animal search algorithms, in fact, we might even learn to improve our own searches.

The fact that biological entities of many kinds must overcome what appear, at least on the surface, to be similar challenges in their search processes raises a question: Has evolution led these entities to solve their respective search problems in similar ways? Clearly the molecular and biomechanical mechanisms a bacterium uses to climb a chemical gradient are different from the neural processes a moth uses to search for a potential mate. But at a more abstract level, it is tempting to speculate that the two organisms have evolved strategies that share a set of properties that ensure effective search. This leads to our first question: Do the search strategies that different kinds of organisms have evolved share a common set of features? If the answer to this question is "yes," many other questions follow. For example, what are the selective pressures that lead to such convergent evolution? Do common features of search strategies reflect common features of search environments? Can shared features of search strategies inform the design of engineered searchers, for example, synthetic microswimmers for use in human health applications or searching robots?
The paper is an interesting read. The authors describe several examples of amazing search capabilities in the living world. Living things daily reach their targets with high precision despite numerous challenges. Incoming data is often noisy and dynamic, changing with each puff of wind or cross current. Signal gradients are often patchy, not uniform. Yet somehow, bacteria can climb a chemical gradient, insects can follow very dilute pheromones, and mice can locate grain in the dark. Even in our own bodies, immune cells follow invisible cue gradients to their targets. Everywhere, from signal molecules inside cells to whole populations of higher organisms, searches are constantly going on in the biosphere.

The engineering required for a successful search, whether natural or artificial, exemplifies optimization -- an intelligent design science. It's not enough to have sensitive detectors, for instance. If too sensitive, a detector can become saturated by a strong signal. Many animal senses have adaptation mechanisms that can quench strong signals when necessary, allowing detection over many orders of magnitude. This happens in the human ear; automatic gain control in the hair cells of the cochlea gives humans a trillion-to-one dynamic range. A recent paper showed that human eyes are capable of detecting single photons! Yet we can adjust to bright sunlight with the same detectors, thanks to the automatic iris and other adaptation mechanisms in the retinal neurons.
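For reference, that trillion-to-one figure translates directly into the decibel scale acousticians actually use:

```python
import math

ratio = 1e12                     # the trillion-to-one dynamic range above
print(10 * math.log10(ratio))    # -> 120.0 dB, roughly the span from the
                                 # threshold of hearing to the threshold of pain
```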

In the PNAS paper, the authors describe additional trade-off challenges for natural search algorithms. For instance, should the organism go for the richest food source, if that will expose it to predators? Should populations with similar needs compete for resources, or divide them up? Each benefit incurs a cost. A well-designed search algorithm handles the trade-offs while maximizing the reward, even if the reward is less than ideal. Engineers have to solve similar optimization problems. They have a word for it: "satisficing" the need by reaching at least the minimum requirement. It's obvious that achieving the best solution to multiple competing goals in a dynamic, noisy environment is a huge challenge for both engineers and animals.
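A minimal sketch of satisficing (illustrative names and numbers, assumed for the example): accept the first option that meets the minimum requirement rather than paying the search cost, and the predation risk, of hunting down the single best one.

```python
def satisfice(options, score, threshold):
    # Take the first option that is "good enough" instead of the best one.
    for option in options:
        if score(option) >= threshold:
            return option        # meets the minimum requirement; stop searching
    return None                  # nothing met the requirement

# Hypothetical foraging patches: (location, food value).
patches = [("near", 3.0), ("mid", 6.5), ("far", 9.0)]
print(satisfice(patches, score=lambda p: p[1], threshold=5.0))
# -> ('mid', 6.5): not the richest patch, but it meets the need at lower risk
```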

The otherwise insightful paper runs into problems when it tries to evolutionize search. They say, "We expect natural selection to drive the evolution of algorithms that yield high search performance, while balancing fitness costs, such as exposure to predation risk." Great expectations, but can they hold up to scrutiny?

The authors assume evolution instead of demonstrating it. They say that organisms "have evolved" strategies for searching. Because of the irreducible complexity of any system requiring sensors, detectors, interpreters, and responders to pull off a successful search, this would amount to a miracle.

They appeal to "convergent evolution" to account for similar search algorithms in unrelated organisms. This multiplies the miracles required.

They speak of the environment as supplying "selective pressure" for organisms to evolve their algorithms. If the environment could pressure the formation of search algorithms, then rocks and winds would have them, too. The environment can influence the formation of a dust devil, but the whirlwind isn't searching for anything. The environment can make rocks fall and rivers flow in certain directions, but they don't care where they are going. It takes programming to find a target that has been specified in advance.

Most serious of all, the claim that natural selection can drive the evolution of search algorithms undermines itself. In a real sense, the scientists themselves are performing a search -- a search for a search. They want to search for a universal model to explain animal search algorithms. But if they themselves are products of natural selection, then they would have no way of arriving at their own target: "understanding" the natural world and explaining how it emerged.

To see why their search is doomed, see Nancy Pearcey's article, "Why Evolutionary Theory Cannot Survive Itself." The authors in PNAS must apply their own explanation to themselves. But then it becomes a self-referential absurdity, because they would have to say that the environment pressured them to say what they said. Their explanation, furthermore, would have no necessary connection to truth -- only to survival. Remember Donald Hoffman's debunking of evolutionary epistemology? "According to evolution by natural selection, an organism that sees reality as it is will never be more fit than an organism of equal complexity that sees none of reality but is just tuned to fitness," he said. "Never." Consequently, the authors of the paper cannot be sure of anything, including their claim that natural selection drives the evolution of search algorithms.


What we can say is that every time we observe a search algorithm coming into being, whether the Google search engine or a class in orienteering, we know intelligence was involved. What we never see is a new search algorithm emerging from mindless natural causes. We therefore know of a vera causa -- a true cause -- that can explain highly successful search algorithms in nature.