Search This Blog

Friday, 27 January 2017

Why a finite universe remains a problem for atheism

Cosmology Is Having Its Own Darwinian Crisis
Rob Sheldon 

Editor's Note: Denyse O'Leary writes in our current cover story about how "Many in cosmology have never made any secret of their dislike of the Big Bang," since on its evidence the universe appears "suddenly created" and "finely tuned." We asked another new contributor, physicist Rob Sheldon, for his take on an interesting 2010 arXiv paper by Roger Penrose and V.G. Gurzadyan, "Concentric circles in WMAP data may provide evidence of violent pre-Big-Bang activity," that tries to solve the problem of the Big Bang by substituting an "eternal, cyclic cosmos."


Dr. Sheldon received his PhD from the University of Maryland, College Park. After appointments at the University of Bern in Switzerland, Boston University, and the University of Alabama in Huntsville, he is currently consulting with NASA's Marshall Space Flight Center.



As you know by now, the finiteness of the universe is extremely disturbing to materialists, who want an infinite universe to avoid ever having to discuss a creator. It's a gambit pioneered by Democritus and Epicurus, ridiculed by Aristotle, and promoted by Lucretius and then the 17th-century materialists. The usual counter to materialism was biology, beginning with Aristotle, because of the inescapable evidence of purpose, of teleology. This is what made Darwin so very, very popular. He provided a materialist answer to the evidence of teleology in biology. 
But the success was short-lived, because some sixty years later, around 1915-1919, Einstein developed his "General Theory of Relativity," whose equations implied that the universe had a beginning. This is documented by the astronomer Robert Jastrow in his 1979 book God and the Astronomers. Stanley Jaki expands the critique in his important book God and the Cosmologists. Both of them point out that the discovery of the beginning of the universe undermines materialism. (Jaki's critique is, of course, the more scathing.)

Sir Roger Penrose is a member of the Humanist Society, which is the polite version of "New Atheists." So he has an interest in eliminating the appearance of a creation event. One of the early attempts at this was to posit a "bouncing" universe that would alternately expand and contract and expand again. Stephen Hawking teamed up with Penrose to demonstrate that this was impossible, because the contraction would lead to a black hole, from which nothing could bounce.

Recent suggestions coming from "loop quantum gravity" posit an incompressible "stringy" physics below the size scale of the proton that can cause the universe to bounce out of a black hole. My objection to most of those theories is that the forces they invoke are unobservable right now, so invoking them is akin to adding a "tooth fairy" to the theory. One rule of thumb in physics is that every theory can invoke one tooth fairy, but never two. All these theories have a second tooth fairy that makes the first one vanish.

But the real demise of the "bouncing Big Bang" was the discovery that there wasn't enough matter in the universe to slow down the expansion of the Big Bang, so there will never be a "Big Crunch." Instead, the galaxies will fly further and further apart as the stars burn out into cold cinders and the black hole at the center of every galaxy will slowly consume every cinder until untold eons later the black holes evaporate via "Hawking radiation" into a vast emptiness of lonely photons.

Penrose, however, has lost neither his hope nor his imagination. He suggests that when the last black hole vanishes, the universe will have no measuring sticks, no matter in it. At this point it is ruled completely by the laws of electromagnetics and therefore will spontaneously shrink 50 orders of magnitude until it generates matter again, at which point it will look exactly the same as the Big Bang looked at 10^-34 seconds -- hot and seething with energy and creative potential. And you thought the Phoenix was a silly Greek myth?

Presumably, the signature of this shrinking will be a gravitational wave set up in the fabric of space-time, such that the resulting Big Bang is the second event of creation. Thus we can look at the distribution of the Cosmic Microwave Background and see an echo of the first event. Since Penrose is a theorist, he hired an experimentalist to do the data mining in the CMB data set, and the arXiv paper supposedly finds a ring of brighter CMB that Penrose attributes to this effect. So, is this a classic "hypothesis -- prediction -- validation" paper?

I doubt it, for the following reasons:

Penrose's theory is so vague in particulars that it can be used to fit any set of data.

The ring that is observed looks too "perfect," which suggests it is an artifact of the data processing.

The processing of the CMB data also involves a "ring" type of comparison to remove the "noise" in the detector. Basically the CMB signal is about 2 orders of magnitude below the noise of stars, nebulae, dust, etc., and it takes a huge amount of data processing to extract it. So I think this paper simply magnifies some of the deficiencies of the data collection.
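To get a feel for why so much processing is needed, here is a toy sketch -- not the actual CMB pipeline, just the averaging statistics, with numbers made up for illustration. Pulling a signal roughly two orders of magnitude below the noise out of repeated measurements takes on the order of ten thousand or more samples per pixel, and any systematic error that does not average away can end up looking like structure:

# Toy illustration: averaging N noisy measurements shrinks the noise by sqrt(N),
# so a signal ~100x weaker than the noise needs roughly 10^4 or more samples to emerge.
import numpy as np

rng = np.random.default_rng(0)
signal = 1.0                # arbitrary units
noise_sigma = 100.0         # noise about 100x the signal
N = 100_000                 # repeated measurements of the same pixel

samples = signal + rng.normal(0.0, noise_sigma, size=N)
print(f"mean = {samples.mean():.3f}, expected error ~ {noise_sigma / np.sqrt(N):.3f}")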

I really hate to say this, but the paper never made it out of the arXiv server and into the peer-reviewed literature. So I would imagine that my criticisms were also made of the paper, and the authors either couldn't respond to them, or the effect went away when they did.

To the extent that Sir Roger's theory is specific, it makes certain predictions about reality that don't seem to work too well in the present. Take the "evaporation" of matter into photons: for some thirty years this was a common prediction of theories of proton instability. Sir Fred Hoyle wanted protons to spontaneously appear, which means they also spontaneously disappear. So if one collects some 10^32 protons in one place and watches for 10^8 seconds, one can put a rather strict upper limit on this "evaporation" likelihood. This was done in a detector in Japan, and no protons were ever seen to decay. This means we need to invoke a second, "cloaking tooth fairy" to cover the first, and the theory starts to look more and more like the pathology of Darwinism.
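To make the arithmetic explicit, here is a back-of-the-envelope version of that limit, using only the round numbers above (10^32 protons watched for 10^8 seconds with zero decays) and the standard Poisson rule that zero observed events caps the expected count at about 2.3 at 90% confidence:

# Back-of-the-envelope proton-lifetime bound from the numbers in the text.
# Zero decays among N protons watched for T seconds means N*T/tau <~ 2.3
# at 90% confidence, so tau >~ N*T/2.3.
N = 1e32            # protons under observation
T = 1e8             # observation time, seconds
tau_s = N * T / 2.3
print(f"lifetime lower bound: {tau_s:.1e} s (~{tau_s / 3.15e7:.1e} years)")

That works out to more than 10^32 years with no decays seen, which is what forces the second, cloaking tooth fairy and makes the theory look like the pathology of Darwinism.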


Which, in fact, it is.

Thursday, 26 January 2017

Complex specified information: It's everywhere.

The Spike Code: Another Information-Rich Signaling System in Neurons
Evolution News & Views

It's time for another paradigm change. "These findings suggest that a fundamental assumption of current theories of motor coding requires revision," says the Abstract of a new paper in the Proceedings of the National Academy of Sciences. Neuroscientists from Emory University have uncovered another coded signaling system, this time in nerves and muscles. The paper's categories include "Computational Neuroscience" and "Information Theory."

Neurons and muscles have a strong relationship. To get a biceps to flex, or a diaphragm to contract for breathing, the muscles involved need to be triggered. The triggers come from nerves connected to the muscle fibers. Until this paper came along, most neuroscientists figured that the brain just sped up the "spike rate" of pulses to the muscles to get them to respond. The emerging view is much richer in implications for intelligent design. It's not just the rate; it's the timing.

A crucial problem in neuroscience is understanding how neural activity (sequences of action potentials or "spikes") controls muscles, and hence motor behaviors. Traditional theories of brain function assume that information from the nervous system to the muscles is conveyed by the total number of spikes fired within a particular time interval. Here, we combine physiological, behavioral, and computational techniques to show that, at least in one relatively simple behavior--respiration in songbirds--the precise timing of spikes, rather than just their number, plays a crucial role in predicting and causally controlling behavior. These findings suggest that basic assumptions about neural motor control require revision and may have significant implications for designing neural prosthetics and brain-machine interfaces. [Emphasis added.]
Working with six male Bengalese finches that were anesthetized, the researchers monitored their breathing while recording neural spikes to the lungs. They were able to stimulate the motor neurons arbitrarily in vivo and watch what happens. This is delicate work; they had to work at 250 micro-amp levels. To locally block certain nerve-muscle junctions, they applied curare -- the compound Brazilian hunters use on poison darts -- but not enough to paralyze the poor birds! (How do you say that in scientese? "Applying too much curare and fully paralyzing EXP [expiratory muscle group] would endanger the wellbeing of the animal.")

Next, they analyzed triplets of spikes where the middle spike was variable. They wanted to test whether a "neural code" exists in the train of spikes. To do this, they had to measure interspike intervals (ISIs) at millisecond resolution. If the brain controls these intervals, and the muscles respond accordingly (for instance, with changes in air pressure), it would signify the presence of a neural code.
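As a concrete picture of the bookkeeping involved, here is a minimal sketch of what an interspike-interval analysis looks like; the spike times are invented for illustration, though they echo the 12 ms and 10 ms intervals discussed in the paper:

# Minimal ISI sketch: ISIs are just successive differences of spike times,
# and shifting the middle spike of a triplet trades length between the two ISIs.
import numpy as np

spikes_ms = np.array([0.0, 12.0, 22.0])   # a spike triplet (times in ms)
print(np.diff(spikes_ms))                  # ISIs: [12. 10.]

shifted = spikes_ms.copy()
shifted[1] += 2.0                          # move the middle spike by 2 ms
print(np.diff(shifted))                    # ISIs become [14.  8.]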

With these techniques they were able to isolate properties of the neuromuscular response for a variety of experimental tests. In particular, they were looking for the effects of different signal patterns. "Therefore, we believe that our muscle stimulation experiments were only activating the axons of motor neurons and were not activating muscle fibers directly," they say. "This finding allowed us to make insightful comparisons between the results of our spike pattern and stimulation analyses." After gathering large data sets and crunching them with software, they came to the conclusion they had found a code -- not just in songbirds, but all animals:

Overall, we have shown that respiratory motor unit activity is controlled on millisecond timescales, that precise timing of spikes in multispike patterns is correlated with behavior (air sac pressure), and that muscle force output and the behavior itself are causally affected by spike timing (all on similar temporal scales) (Figs. 2D, 3C, and 4C). These findings provide crucial evidence that precise spike timing codes casually [sic, causally] modulate vertebrate behavior. Additionally, they shift the focus from coding by individual spikes (1, 14, 19) to coding by multispike patterns and from using spike timing to represent time during a behavioral sequence (20, 21) to coding its structural features. Put another way, although it is clear that earlier activation of neurons would lead to earlier activation of muscles, this relationship only accounts for encoding when a behavior happens (10, 22). Here, we show that changing the timing of a single spike within a burst by ∼1 ms can also affect what the animal will do, not just when it will do it. Furthermore, we showed that the effect of moving a single spike is stable across animals (Fig. 2). We believe that this precise spike timing code reflects and exploits muscle nonlinearities: spikes less than ∼20 ms apart generate force supralinearly (SI Appendix, Fig. S12), with stronger nonlinearities for shorter ISIs [interspike intervals]. Thus, changing the first ISI from 12 to 10 ms significantly alters the effect of the spike pattern on air pressure (Fig. 2B). Such nonlinearities in force production as a function of spike timing have been observed in a number of species (23-25), highlighting the necessity of examining the role of spike timing codes in the motor systems of other animals. Importantly, our findings show that the nervous system uses millisecond-timescale changes in spike timing to control behavior by exploiting these muscle nonlinearities, even though the muscles develop force on a significantly longer timescale (tens of milliseconds as shown in Fig. 3B).
They speak of the "surprising power of spike timing to predict behavior," indicating that patterns of spikes coming down the nerves are the determining factor in behavior, not just how fast they come.

Is this really a code? Well, count the number of times they refer to coding directly, besides the suggestion in the title, "Motor control by precisely timed spike patterns." Result: 29 times. "Information," a related concept in coding, gets 51 mentions. "Precision" and related terms, important for conveying information, get 14 mentions. "Evolution" gets zero mentions.
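For readers who want to run this kind of tally themselves, here is a rough sketch; paper_text is a placeholder you would fill with the article's downloaded text, and the word stems are chosen to catch families like "code/coding" and "precise/precision":

# Rough term-frequency tally; paper_text is a placeholder for the article text.
import re

def count_stem(text, stem):
    return len(re.findall(rf"\b{stem}\w*", text, flags=re.IGNORECASE))

paper_text = "..."  # load the paper's text here
for stem in ("cod", "inform", "precis", "evolution"):
    print(stem, count_stem(paper_text, stem))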

Take a moment to watch this video of a nightingale singing on YouTube and prepare to say Wow!

How much information does the forebrain have to send to the vocal muscles to achieve that kind of performance? The authors note in their concluding discussion, "Because respiration is critical to vocalization in songbirds, it will be of special interest to record respiratory timing patterns during singing...." Indeed!

Think of the possibilities this discovery opens for further research. A multitude of questions come to mind: how does the brain know what pattern to send to a distant muscle to get it to act in a certain way? Are the codes inherited or learned? How reproducible are the patterns from one animal to another? Can a spike code from one bird sent to the nerves of another bird make it sing the same song? How does a human mind interact with the brain to turn a choice into an action? What translates the thought "I must run" into a spike timing pattern that makes you run? How rich, do you think, is the spike timing code in a performance of Chopin's Fantaisie-Impromptu? (See the video at the top.)


Being a new discovery, this "spike timing code" will undoubtedly prompt much more research on more animals in more settings. Since Darwinian theory provided no help to these researchers (how does chance produce a code, anyway?), a design approach is well placed to advance understanding in this area quickly and in significant ways. Why? ID already knows a lot about codes.

Wednesday, 25 January 2017

On Darwinism creating Darwinism.

Evolution as Carpenter: Scientist Concludes Repetitive Elements "Are an Important Toolkit"

Cornelius Hunter


I'm not an expert carpenter, but if I know what needs to be built I'll eventually get there. It may not be beautiful, but given a blueprint I can build a structure.

What if I didn't have that blueprint, though? What if I had no idea what needed to be built -- no notion of where the task was headed? Furthermore, what if I had no knowledge of structures in general? Just randomly cutting wood and pounding nails probably would not end well. This is the elephant in the room for evolution, for according to evolutionary theory, random actions are precisely what built the world.

It is what the Epicureans claimed two thousand years ago, and this random-creation hypothesis fares no better today than it did then. In fact, with the findings of modern science we now know far more about the details than did the Epicureans, and it has just gotten worse for their hypothesis.

This is why evolutionists habitually appeal to teleological language. Regulatory genes "were reused to produce different functions," dinosaurs "were experimenting" with flight, and the genome was "designed by evolution to sense and respond." Such Aristotelianism, which casts evolution as an intelligent process working toward a goal, makes the story more palatable; after all, evolution had a blueprint in mind.

All of this makes for a glaring internal contradiction: on the one hand evolution has goals; yet on the other hand evolution is a mindless, mechanical process driven by random, chance events. As University College London molecular neuroscientist Jernej Ule explains:

We're all here because of mutations. Random changes in genes are what creates variety in a species, and this is what allows it to adapt to new environments and eventually evolve into completely new species.
This makes evolution, rather inconveniently, dependent on random events (no, natural selection doesn't change this -- it cannot coax the right mutations to occur) which, by definition, do not work towards a goal -- they do not build anything:

This ambiguity creates a great challenge. On the one hand, mutations are needed for biological innovation, and on the other hand they cause diseases.
Indeed. This is not looking good. As Washington State University biologist Michael Skinner recently wrote:

[T]he rate of random DNA sequence mutation turns out to be too slow to explain many of the changes observed. Scientists, well aware of the issue, have proposed a variety of genetic mechanisms to compensate: genetic drift, in which small groups of individuals undergo dramatic genetic change; or epistasis, in which one set of genes suppress another, to name just two. Yet even with such mechanisms in play, genetic mutation rates for complex organisms such as humans are dramatically lower than the frequency of change [between species if evolution is true] for a host of traits, from adjustments in metabolism to resistance to disease.
Whereas Skinner appeals to epigenetics to save the theory, Ule appeals to repetitive elements. Evidence has shown that far from being "junk DNA," repetitive elements play a genetic regulatory role. As a result evolutionists such as Ule have concluded repetitive elements "are an important toolkit for evolution."

Like any good carpenter, evolution has a toolkit.

Ule and his co-workers are now elaborating on the details of how this repetitive-element toolkit might work. It goes like this: (i) Random mutations gradually modify repetitive elements, (ii) these repetitive elements are sometimes incorporated as part of the blueprint instructions for making a protein, (iii) there are several complicated molecular machines that either repress or allow such incorporation of these repetitive elements in the blueprint.

According to Ule, this complicated process, including these two opposing machines that are "tightly coupled," allows evolution to experiment and successfully evolve more complicated species, such as humans:

We've known for decades that evolution needs to tinker with genetic elements so they can accumulate mutations while minimising disruption to the fitness of a species. ... This [process we have discovered] allows the Alu elements to remain in a harmless state in our DNA over long evolutionary periods, during which they accumulate a lot of change via mutations. As a result, they become less harmful and gradually start escaping the repressive force. Eventually, some of them take on an important function and became indispensable pieces of human genes. To put it another way, the balanced forces buy the time needed for mutations to make beneficial changes, rather than disruptive ones, to a species. And this is why evolution proceeds in such small steps - it only works if the two forces remain balanced by complementary mutations, which takes time. Eventually, important new molecular functions can emerge from randomness.
These suggestions from Skinner and Ule are the latest in a long, long line of ideas evolutionists have come up with, in an attempt to make sense of their random-creation hypothesis. In modern evolutionary thought, the first such idea was natural selection.

The reason there is a long, long line of ideas is that none of them work. They are becoming ever more complicated, ever more unlikely, and equally useless in solving the basic problem of random events constructing the world.

But Ule's latest attempt highlights yet another problem: serendipity. All of the solutions, from natural selection on up to epigenetics and repetitive elements, rely on serendipity, and this reliance is increasing. Ule's solution is serendipity on steroids, for the idea holds that evolution just happened to create (i) repetitive elements, and (ii) the complicated, finely tuned, opposing molecular machines that repress or allow those repetitive elements into the protein instructions.

This isn't going to work, but the point here is that even if it did somehow work, it amounts to evolution creating evolution. In order for evolution to have created so many of the species, it first must have lucked into creating these incredible mechanisms, which then in turn allowed evolution to occur. And all of this must have occurred with no foresight.

Imagine a car factory that uses highly complex machines, such as drill presses and lathes, to build the cars. Now imagine the factory first creating those machines by random chance, so that then the cars could be built by yet more random chance events. This violates the very basics of science. It is just silly.

Zygote v. Darwin.

From Genome to Body Plan: A Mystery
Evolution News & Views 

Decoding genomes has been one of the most important advances of the last sixty years, but it's really just the start of a far larger mystery: the mystery of development. You can appreciate the magnitude of the problem in Illustra's animation of a chick embryo in "Embryonic Development" from Flight: The Genius of Birds. An even more majestic depiction closer to home takes you from the moment of conception to the birth of a baby in this animation by RenderingCG. How does a linear genome produce such an astounding product? Then, how does the moving, living being reduce its information back down to a genome in a single cell?

Three German scientists discuss the mystery in a paper in Nature, "From morphogen to morphogenesis and back," which can be loosely translated, "From genome to body plan and back."

A long-term aim of the life sciences is to understand how organismal shape is encoded by the genome. An important challenge is to identify mechanistic links between the genes that control cell-fate decisions and the cellular machines that generate shape, therefore closing the gap between genotype and phenotype. The logic and mechanisms that integrate these different levels of shape control are beginning to be described, and recently discovered mechanisms of cross-talk and feedback are beginning to explain the remarkable robustness of organ assembly. The 'full-circle' understanding of morphogenesis that is emerging, besides solving a key puzzle in biology, provides a mechanistic framework for future approaches to tissue engineering.
Stop right there. Why must the framework be mechanistic? Didn't they just speak of "the logic and mechanisms that integrate" at different levels? Logic is not mechanistic; it is conceptual. Logic can be instantiated in circuits, on paper, and in human language. Mechanism may be the primary aspect of morphogenesis that natural science can investigate, but restricting one's investigation to a "mechanistic framework" is sure to miss the message of a book by attending only to the paper and the ink.

After a brief history of morphogenesis theory from Aristotle to the era of molecular genetics, the authors claim that problems in the "mechanics centered approach" were finally solved in the 1970s. Here, they mistake the football players for the strategy of the play (so to speak). They describe the actions of the players, as if they operate mechanically, while hiding the quarterback's game plan behind passive-voice verbs ("is controlled" -- by whom?).

The initial landmark publication from this herculean project revealed that the first step in morphogenesis is the subdivision of the embryo into discrete regions by a cascade of 'patterning' genes. Only then is each domain converted to the corresponding region of the body through a bespoke morphogenetic program, therefore establishing that the timing, positioning and inheritance of tissue-shaping events is controlled genetically. Subsequent molecular characterization in Drosophila and other systems revealed that patterning genes mainly encode signalling pathways that mediate long-range tissue patterning and gene-regulatory networks that control fate decisions; however, such genes do not control cell and tissue shape directly. Rather, the task of physically shaping cells and tissues is performed using a toolbox of essential cellular machines discovered by cell biologists, which are present in all cells in the embryo.
We appreciate the mention of a program, a toolbox, and machines, but who wrote the program? Who designed the tools and machines? It's as if the authors are watching tools moving and operating without any hands:

Collectively, these studies reveal a picture in which the shape of tissues is determined by the combined actions of genetic, cellular and mechanical inputs (Box 1). Although a number of the main players are now known, and their functions understood, we still know surprisingly little about how the various levels of shape control are integrated during morphogenesis.
"Are integrated" -- by whom? Passive voice verbs screen these authors from identifying plausible causes. And so by restricting their attention to how pieces of matter "are integrated," they witness rabbits coming out of hats without a magician:

The focus of this Review is the logic and mechanisms that connect gene regulation, cellular effectors and tissue-scale mechanics -- the troika of tissue shaping. We describe how shape, at the local level, emerges from the interaction of tissue-specific genetic inputs and the self-organizing behaviour of core intracellular machines. We then discuss how this mechanistic logic is used in several modified forms to produce a variety of shaping modes. It is becoming clear that the chain of command from gene to shape is not unidirectional, owing to the discovery of mechanisms that enable changes in tissue architecture and mechanics to feed back to 'upstream' patterning networks. The emerging integrated view of tissue shaping therefore goes full circle, from morphogen to morphogenesis and back.
Mechanistic philosophy gets hopelessly muddled here. To see why, convert the passive voice to active voice. "Mechanistic logic is used" should mean, "Somebody or something uses logic to operate a machine." A baby's shape doesn't just "emerge" by "self-organizing behavior" except in the imagination of a philosophical materialist.

From there, the authors get into the weeds, discussing blastocysts, fruit flies, "evolutionarily conserved mechanosensitive pathways" and other matters. It should be obvious, though, that if you start on the wrong track you are not going to get where you want to go (i.e., understanding morphogenesis). In this dreamland, rabbits will pop out of hats by emergence. Babies will self-organize. Programs will work without a programmer.

The authors marvel at how "organoids" emerge from induced pluripotent stem cells. Is this an example of self-organization? After thinking about it, they admit that more must be going on.

A stunning demonstration of the full-circle nature of morphogenesis, in which genes regulate tissue shaping and vice versa, comes from the study of organoids. Here, cultured pluripotent cells self-assemble into organ-like structures that are remarkably similar to those formed in the embryo. Organoids can even be generated from patient-derived induced pluripotent stem cells, which means that this technology has the potential to herald a new era in tissue engineering for the modelling of disease and the development of therapies that is based on the principles of developmental biology.... Organoid formation itself demonstrates that cells can become organized in the absence of predetermined long-range external patterning influences such as morphogen gradients or mechanical forces, which are a cornerstone of classic developmental biology. This unexpected lack of requirement for long-range pre-patterning has led to organoid formation being described as an example of 'self-organization', which is defined classically as the spontaneous emergence of order through the interaction of initially homogeneous components. Although some aspects of organoid formation may show self-organizing properties, it is already clear that cell heterogeneity and patterned gene expression play a crucial part throughout.
The organoids will never form by self-organization, therefore, unless the coded instructions in each cell direct them according to "patterned gene expression" -- that is what is crucial. They have a game plan, like marching-band members in a "scatter" drill converging into formation on the field. Each player knows where to go.

The same issue of Nature takes a mechanistic look at the related issue of hierarchical organization. How does that "emerge"? In their article "Scaling single-cell genomics from phenomenology to mechanism," Tanay and Regev begin:

Three of the most fundamental questions in biology are how individual cells differentiate to form tissues, how tissues function in a coordinated and flexible fashion and which gene regulatory mechanisms support these processes. Single-cell genomics is opening up new ways to tackle these questions by combining the comprehensive nature of genomics with the microscopic resolution that is required to describe complex multicellular systems. Initial single-cell genomic studies provided a remarkably rich phenomenology of heterogeneous cellular states, but transforming observational studies into models of dynamics and causal mechanisms in tissues poses fresh challenges and requires stronger integration of theoretical, computational and experimental frameworks.
Even though they seek a mechanistic framework again, they are employing intelligent design to get there: tackling questions, combining concepts, seeking causes. Will a "stronger integration of theoretical, computational and experimental frameworks" emerge by unguided material processes? Well, they seem to think cells did some remarkable things that way:

Multicellular organisms have evolved sophisticated strategies for cooperation between cells, such that a single genome encodes numerous specialized and complementary functional programs that maximize fitness when they work together. Compartmentalization at several levels -- cells, tissues and organs -- leads to functional diversification of cells and systems with the same underlying genome. Physical copies of the genome are embedded in cells to enable them to maintain a semi-autonomous decision-making process through the selective management of small-molecule, RNA and protein concentrations in cytoplasmic and nuclear compartments. Theoretically, this permits genomes to break the inherent symmetry that is imposed by the precise duplication of DNA in multicellular species. In particular, it facilitates cellular differentiation through the progressive acquisition of specific intracellular molecular compositions, enabling epigenetic mechanisms to emerge and implement cellular memory. At a higher level of organization, intercellular signalling, extracellular structures and environmental cues are used to form complex spatial structures in which cells (and their genomes) are physically embedded. This creates further levels of compartmentalization that encode complex and structured tissues.
More muddle. On the one hand, strategies, codes, programs, decision-making, cues, and signaling -- implying rationality. On the other hand, evolution, emergence, and physical stuff -- implying materialism. The authors mix oil and water, thinking the oil evolved out of the water and both cooked themselves into a soufflé.

After some diversion into issues like whether or not cell types can be classified in some Linnaean system, they take pride that science is beginning to move from descriptive accounts to predictive understanding:

Efforts towards the mapping and classification of cellular programs in humans and model organisms are becoming increasingly ambitious, aiming to provide a comprehensive atlas of the cell types and subtypes in organs and whole organisms. This opens up remarkable opportunities to move beyond descriptive studies of cell type and state and to develop mechanistic-predictive models of regulatory programs.

There's no question that mechanisms are involved in development. But to mix in another metaphor, they're focused on how billiard balls move and interact on the pool table while ignoring the expertise of the players. Even if the players are robots, and the shots are predictable and repeatable, you'll miss the talent of the game without considering the intelligent design that directs each ball into its own pocket in the correct sequence. The design employs the laws of nature, but does not emerge from them.

Monday, 23 January 2017

Piscine wonders v. Darwin.

"Happy Salmon" and Other Wonders of the Fish World's Migrating Marvel

Evolution News & Views


Salmon may not be happy when we eat them, but we're happy learning about them. So, in a kind of symbiotic relationship, we should take care of them so that these masters of migration can continue to inspire future generations of nature lovers. In Living Waters, Illustra Media tells the story of the salmon's amazing life cycle. What's new about these fish that swim thousands of miles at sea, yet find their native freshwater streams years later? Several discoveries have come to light since the film was released.

Drugged Salmon

One news article says that "Happy salmon swim better." Like people, salmon can get anxious. "Current research from Umeå University shows that the young salmon's desire to migrate can partly be limited by anxiety," this article says. Fear of the unknown downstream slows down the young migrants. But is this experiment ethical?

The research team studied how salmon migration was affected both in a lab, where salmon migrated in a large artificial stream, and in a natural stream outside of Umeå in Northern Sweden. In both environments, researchers found that salmon treated with anxiety medication migrated nearly twice as fast as salmon who had not been subjected to treatment. Several billion animals migrate yearly and the results presented here, i.e. that anxiety limits migration intensity, is not only important for understanding salmon migration but also for understanding migration in general. [Emphasis added.]
Well, maybe these salmon got a little too happy! The scientists may have only discovered that whatever they gave them made them reckless, like snowboarders on stimulants. Natural anxiety might serve to protect salmon from unnecessary risks. In any case, we do not recommend letting your kid give Ritalin to your goldfish as a science project.

Daredevil Salmon

You can imagine the stress on a salmon in this next story. Look at the video of 9-to-10 pound chum salmon swimming across a Washington state highway, right in front of an oncoming car. Why did the salmon cross the road? Because the scent of its natal stream took a shortcut over the highway after heavy rain, National Geographic explains. In the article you can also watch a bobcat take advantage of the opportunity.

Drowning Salmon

The last story was about too little water; this one on Phys.org is about too much. "How will salmon survive in a flooded future?" Fishery scientists, realizing how important salmon fishing is to the northwest economy (it's a $1 billion industry in Alaska), are worried that flood conditions in spawning grounds might scour the delicate salmon eggs out of their nests and wash them away downstream. The key to preserving their breeding grounds, they found, is keeping the area's rivers and floodplains pristine.

"Flood plains essentially act as pressure release valves that can dissipate the energy of large floods," says Sloat. "In fact, most salmon prefer to spawn in stretches of river with intact floodplains, which is probably no coincidence because these features of the landscape help protect salmon eggs from flood events."
Thermoregulation and Osmoregulation

The salmon's ability to change its gill physiology when going from freshwater to salt water and back is called osmoregulation (see how that's a great design story, here). Now, researchers at Oregon State University have found that northern sockeye salmon can regulate their temperature as well, "despite evolutionary inexperience." Imagine that! Maybe they took a class in fish school.

Sockeye salmon that evolved in the generally colder waters of the far north still know how to cool off if necessary, an important factor in the species' potential for dealing with global climate change....
Research by Oregon State University revealed that sockeyes at the northern edge of that range, despite lacking their southern counterparts' evolutionary history of dealing with heat stress, nevertheless have an innate ability to "thermoregulate."

The salmon regulate their body heat by finding water just right for their needs. Sounds simple, doesn't it?

While it may seem obvious that any fish would move around to find the water temperature it needed, prior research has shown thermoregulation is far from automatic -- even among populations living where heat stress is a regular occurrence.
By monitoring tagged fish, the researchers found that the salmon knew how to cool off at tributary plumes or in deeper water. It ends up saving them a lot of energy to stay at their optimum "Goldilocks" temperature -- not too hot, not too cold. The scientists never do explain how the sockeye salmon learned to do this despite "evolutionary inexperience."

Diving Deeper into the Salmon Nose

Fans of Living Waters probably remember the dramatic animated dive into a salmon's nostrils (see it here). Recently, we added new information about turbines in the nose. Now, we can learn about another wonder at the molecular level. Salmon and other fish, as well as mammals, have a molecular amplifier involving chloride ions. Stephan Frings, a molecular biologist at Heidelberg University, talks about the discovery in the Proceedings of the National Academy of Sciences. First, let's hear him wax ecstatic about olfaction in general.

The sense of smell and its astonishing performance pose biologists with ever new riddles. How can the system smell almost anything that gets into the nose, distinguish it from countless other odors, memorize it forever, and trigger reliably adequate behavior? Among the senses, the olfactory system always seems to do things differently. The olfactory sensory neurons (OSNs) in the nose were suggested to use an unusual way of signal amplification to help them in responding to weak stimuli. This chloride-based mechanism is somewhat enigmatic and controversial. A team of sensory physiologists from The Johns Hopkins University School of Medicine has now developed a method to study this process in detail. Li et al. demonstrate how OSNs amplify their electrical response to odor stimulation using chloride currents.
The mammalian olfactory system seems to have the capacity to detect an unlimited number of odorants. To date, nobody has proposed a testable limit to the extent of a dog's olfactory universe. Huge numbers from 10^12 to 10^18 of detectable odorants emerge from calculations and estimations, but these are basically metaphorical substitutes for the lack of visible limits to chemical variety among odorous compounds. Dogs can cope with their odor world by using just 800 different odorant receptor proteins, a comparably tiny set of chemical sensors, expressed -- one receptor type per cell -- in 100 million OSNs in the olfactory epithelium. Olfactory research has revealed how it is possible to distinguish 10^18 odorants with 800 receptors. To do this, the receptors have to be tolerant with respect to odorant structure. After all, the huge numbers suggest that an average receptor must be able to bind millions of different odorants. Low-selectivity odorant receptors are, therefore, indispensable for olfaction. The olfactory system nevertheless extracts high-precision information from an array of low-precision receptors by looking at the activity of all its OSNs simultaneously. The combined activity pattern of all neurons together provides the precise information about odor quality that each individual OSN cannot deliver. Thus, combinatorial coding is the solution to the problem of low-selectivity receptors.
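To see how roomy a combinatorial code is, here is a toy calculation using the numbers from the passage above. Treating each of the ~800 receptor types as simply "responding" or "not responding" (a deliberate simplification; real responses are graded), even small response subsets give an astronomical number of distinguishable patterns:

# Toy capacity estimate for a combinatorial code over 800 receptor types.
from math import comb

receptor_types = 800
for active in (2, 5, 10):
    print(f"{active} active receptor types -> {comb(receptor_types, active):.3e} patterns")
# comb(800, 10) alone exceeds 10^21 -- far more than the 10^12 to 10^18
# odorants quoted above, which is the point of combinatorial coding.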

If you are not sufficiently boggled by that, consider that the incoming signals are very weak. A typical OSN (the only neuron exposed to the environment) has only a millisecond to sense an odorant. Because that is too short to trigger the receptor, it has to integrate 35 sensations in 50 milliseconds. To increase their sensitivity, the cilia at the tips of the OSNs -- where the action takes place -- charge their receptors with chloride ions. These ions boost depolarization and promote electrical excitation, amplifying the output signal. Here's where salmon come in:

Interestingly, the components of this mechanism were discovered in freshwater fish, amphibian, reptiles, birds, and mammals, indicating that the interplay of cation currents and chloride currents is important for OSN function throughout the animal kingdom.
A recent study appears to confirm this hypothesis in some cases. You, too, may be "smelling better with chloride." (Here, have some salt on your salmon fillet.) But Frings admits, "The relation between OSN activity at the onset and odor perception at the conclusion of signal processing is far from being understood." The olfactory system is "very different in virtually all respects" from the other senses, like vision and hearing.

First, thousands of OSN axons -- all with the same odorant receptor protein -- converge onto a common projection neuron in the olfactory bulb. This extreme convergence shapes the signal that enters the brain, and we still have to find out how ORN electrical amplification contributes to this process. Second, when the olfactory information enters the piriform cortex, the largest cortical area in the olfactory system, it enters a world quite different from the primary visual cortex. Extensive horizontal communication between the principal neurons and continuous exchange with multiple other brain regions turn the original afferent signal into highly processed information. Finally, the way to perception leads through brain regions that establish, evaluate, and use olfactory memory. Thus, much signal processing has to take place before a mouse [or a salmon, for that matter] performs in an operant conditioning experiment.
Next time you go fishing, take a second to look into the eyes and nose of your catch. Out of reverence, you may just want to throw it back.

Sunday, 22 January 2017

Neville Chamberlain was right to seek peace in his time?: Pros and cons.

The original technologist continues to school humankind's johnny-come-latelies.

The World's Ideal Storage Medium Is "Beyond Silicon"
Evolution News & Views

The world is facing a data storage crisis. As information proliferates in everything from YouTube videos to astronomical images to emails, the need for storing that data is growing exponentially. If trends continue, data centers will have used up the world's microchip-grade silicon before 2040.

But there is another storage medium made of abundant atoms of carbon, hydrogen, oxygen, nitrogen, and phosphorus. It's called DNA. And you wouldn't need much of it. The entire world's data could be stored in just one kilogram of the stuff. So says Andy Extance in an intriguing article in Nature, "How DNA could store all the world's data."

For Nick Goldman, the idea of encoding data in DNA started out as a joke.
It was Wednesday 16 February 2011, and Goldman was at a hotel in Hamburg, Germany, talking with some of his fellow bioinformaticists about how they could afford to store the reams of genome sequences and other data the world was throwing at them. He remembers the scientists getting so frustrated by the expense and limitations of conventional computing technology that they started kidding about sci-fi alternatives. "We thought, 'What's to stop us using DNA to store information?'"

Then the laughter stopped. "It was a lightbulb moment," says Goldman, a group leader at the European Bioinformatics Institute (EBI) in Hinxton, UK. [Emphasis added.]

Since that day, several companies have begun turning this "joke" into serious business. The Semiconductor Research Corporation (SRC) is backing it. IBM is getting on board. And the Defense Department has hosted workshops with major corporations, which is sure to lead to funding. The UK is already funding research into next-generation approaches to DNA storage.

When you look at Extance's chart, it's easy to see why DNA is "one of the strongest candidates yet" to replace silicon as the storage medium of the future. The read-write speed is about 30 times faster than your computer's hard drive. The expected data retention is 10 times longer. The power usage is ridiculously low, almost a billion times less than flash memory. And the data density is an astonishing 10^19 bits per cubic centimeter, a thousand times more than flash memory and a million times more than a hard disk. At that density, the entire world's data could fit in one kilogram of DNA.

As with any new technology, baby steps are slow. Technicians face challenges of designing DNA strands to encode data, searching for it, and reading it back out reliably. How does one translate the binary bits in silicon into the A, C, T, and G of nucleic acids? Can DNA strands be manufactured cheaply enough? How can designers proofread the input?
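As a toy illustration of the translation question -- and only that; real schemes add error correction and avoid long runs of a single letter -- here is the most naive possible mapping, packing two bits into each nucleotide:

# Naive two-bits-per-base codec: 00->A, 01->C, 10->G, 11->T (illustration only).
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {v: k for k, v in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[b] for b in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

msg = b"To be, or not to be"
assert decode(encode(msg)) == msg
print(encode(msg)[:24], "...")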

Living things, though, have already solved these issues. After all, "a whole human genome fits into a cell that is invisible to the naked eye," Extance says. As for speed, DNA is accessed by numerous molecular machines simultaneously throughout the nucleus that know exactly where to start and stop reading. Genomic machinery in the cell proofreads errors to one typo per hundred billion bases, as Dr. Lee Spetner notes in his book Not by Chance! That's equivalent, he says, to the lifetime output of about 100 professional typists.

Life shows that it is possible in principle to overcome these challenges. That gives hope to the engineers on the cutting edge of DNA storage. Already, several experimenters have succeeded in encoding information in DNA. By 2013, EBI had encoded Shakespeare's sonnets and Martin Luther King's "I have a dream" speech. IBM and Microsoft topped that 739-kilobase effort shortly after with 200 megabases of storage. As far back as 2010, Craig Venter's lab encoded text within the genome of his synthetic bacterium, as Casey Luskin reported here. Everything alive demonstrates that DNA is already the world's most flexible and useful storage medium. We just need to learn how to harness the technology.

Goldman's EBI lab and other labs are thinking of ways to ensure accuracy. One method converts bits into "trits" (combinations of 0, 1, and 2) in an error-correcting scheme. Engineers are sure to think of robust solutions, just like the pioneers of digital computers did with parity bits and other mechanisms to guarantee accurate transmission over wired and wireless communications.
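To give the flavor of the trit idea -- a loose sketch inspired by published descriptions of the EBI approach, not the actual published code -- each base-3 digit can select one of the three bases that differ from the previously written base, so the same letter never repeats (long runs of one letter are a known source of synthesis and sequencing errors):

# Sketch of a trit-based mapping that avoids repeated letters (illustrative only).
def bytes_to_trits(data: bytes):
    n = int.from_bytes(data, "big")
    trits = []
    while n:
        n, r = divmod(n, 3)
        trits.append(r)
    return trits[::-1] or [0]

def trits_to_dna(trits, start="A"):
    prev, out = start, []
    for t in trits:
        base = [b for b in "ACGT" if b != prev][t]  # one of the 3 non-repeating bases
        out.append(base)
        prev = base
    return "".join(out)

print(trits_to_dna(bytes_to_trits(b"DNA")))  # short strand with no letter repeated twice in a row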

How long could DNA storage last? That's another potential advantage -- better than existing technology by orders of magnitude:

...these results convinced Goldman that DNA had potential as a cheap, long-term data repository that would require little energy to store. As a measure of just how long-term, he points to the 2013 announcement of a horse genome decoded from a bone trapped in permafrost for 700,000 years. "In data centres, no one trusts a hard disk after three years," he says. "No one trusts a tape after at most ten years. Where you want a copy safe for more than that, once we can get those written on DNA, you can stick it in a cave and forget about it until you want to read it."
With these advantages of density, stability, and durability, DNA is creating a burgeoning field of research. Worries about random access are already being overcome. With techniques like PCR and CRISPR/Cas9, we can expect that any remaining challenges will be solved. Look at what our neighbors at the University of Washington recently achieved:

As a demonstration, the Microsoft-University of Washington researchers stored 151 kB of images, some encoded using the EBI method and some using their new approach, in a single pool of strings. They extracted three -- a cat, the Sydney opera house and a cartoon monkey -- using the EBI-like method, getting one read error that they had to correct manually. They also read the Sydney Opera House image using their new method, without any mistakes.
Market forces drive innovation. The promise of DNA storage is so attractive, funding and capital are sure to follow. DNA synthesizing machines will come. Random-access machines with efficient search algorithms will be invented. Successes and new products will drive down prices. As with Moore's Law for silicon, the race for better DNA storage products will accelerate once it moves from lab to market. Extance concludes:

Goldman is confident that this is just a taste of things to come. "Our estimate is that we need 100,000-fold improvements to make the technology sing, and we think that's very credible," he says. "While past performance is no guarantee, there are new reading technologies coming onstream every year or two. Six orders of magnitude is no big deal in genomics. You just wait a bit."
So, here we have the best minds in information technology urgently trying to catch up to storage technologies that have been in use since life began. They're only a few billion years late to the party. The implications are as profound as they are intuitive.

Speaking of intuition, Douglas Axe in his recent book Undeniable: How Biology Confirms Our Intuition That Life Is Designed defines a quality he calls functional coherence: "the hierarchical arrangement of parts needed for anything to produce a high-level function -- each part contributing in a coordinated way to the whole." He writes:

No high-level function is ever accomplished without someone thinking up a special arrangement of things and circumstances for that very purpose and then putting those thoughts into action. The hallmark of all these special arrangements is high-level functional coherence, which we now know comes only by insight -- never by coincidence.

Scientists are seeking to match the same level of functional coherence that can be observed every second in the cells of our own bodies, and of the simplest microbes. The conclusion to draw from this hardly needs to be stated.

We feel your pain, Mike.

Irony Alert: Michael Shermer on "When Facts Fail"
Cornelius Hunter

When an evolutionist, such as Michael Shermer in this case, warns readers that people don't change their minds even when presented with the facts, the irony should be savored. Shermer writes in Scientific American ("How to Convince Someone When Facts Fail"):

Have you ever noticed that when you present people with facts that are contrary to their deepest held beliefs they always change their minds? Me neither. In fact, people seem to double down on their beliefs in the teeth of overwhelming evidence against them. The reason is related to the worldview perceived to be under threat by the conflicting data. [Emphasis added.]
Yes, there certainly are conflicting data. It gets worse:

Creationists, for example, dispute the evidence for evolution in fossils and DNA because they are concerned about secular forces encroaching on religious faith.
Evidence for evolution in DNA? What exactly would that be? Ultra-conserved elements, orphans, replication, duplication, the universal DNA code, protein synthesis, protein coding genes, genetic regulation, recurrent evolution, convergence, cascades of convergence, and...well you get the idea. This evolutionist is demonstrating some of those "facts that fail" and the attendant doubling down, right before our eyes.

And what about those fossils? More "evidence for evolution"? How about those fossils that appear "as though they were planted there," as Richard Dawkins once admitted. One of those "planted" classes, the humble trilobites, had eyes that were perhaps the most complex ever produced by nature (1). One expert called them "an all-time feat of function optimization."

And even Shermer's go-to source, Wikipedia, admits ancestral forms, err, "do not seem to exist":

Early trilobites show all the features of the trilobite group as a whole; transitional or ancestral forms showing or combining the features of trilobites with other groups (e.g. early arthropods) do not seem to exist.
Likewise, even the evolutionist Niles Eldredge admitted (2) they didn't make sense in light of standard evolutionary theory:

If this theory were correct, then I should have found evidence of this smooth progression in the vast numbers of Bolivian fossil trilobites I studied. I should have found species gradually changing through time, with smoothly intermediate forms connecting descendant species to their ancestors.
Instead I found most of the various kinds, including some unique and advanced ones, present in the earliest known fossil beds. Species persisted for long periods of time without change. When they were replaced by similar, related (presumably descendant) species, I saw no gradual change in the older species that would have allowed me to predict the anatomical features of its younger relative.

And it just gets worse:

The story of anatomical change through time that I read in the Devonian trilobites of Gondwana is similar to the picture emerging elsewhere in the fossil record: long periods of little or no change, followed by the appearance of anatomically modified descendants, usually with no smoothly intergradational forms in evidence.
Any more facts, Michael Shermer?

Notes:

(1) Lisa J. Shawver, "Trilobite Eyes: An Impressive Feat of Early Evolution," Science News, Vol. 105, February 2, 1974, p. 72.


(2) Niles Eldredge, "An Extravagance of Species," Natural History, Vol. 89, No. 7, The American Museum of Natural History, 1980, p. 50.

Friday, 20 January 2017

The English language is doomed?: Pros and cons.

On the C-word.

Whatever You Do, Don't Say "Irreducible Complexity"
Evolution News & Views

While browsing through the articles forthcoming in the Journal of Molecular Evolution, we ran across the following sentence:

Since the subject of cellular emergence of life is unusually complicated (we avoid the term 'complex' because of its association with 'biocomplexity' or 'irreducible complexity'), it is unlikely that any overall theory of life's nature, emergence, and evolution can be fully formulated, quantified, and experimentally investigated.

Shhh! Don't say...well, just don't say THAT word. You know the one. The "c" word...ending in "x." Because people might think of...you know. The irreducible thing and that pest Michael Behe.

What are you doing -- you said his name! Don't do that!

Oh, and isn't BIO-Complexity the title of a peer-reviewed science journal open to examining ideas supportive of intelligent design? Yes. In that case, whatever you do, don't say "biocomplexity," either!

Say "complicated" instead. "Rather complicated." That's better. Fewer of those nasty associations.

Alas, trying desperately to avoid discussing a topic by policing your language or thought only calls attention, psychologically, to the very topic you seek to avoid. The phenomenon is called the "white bear problem."


An example might be Victorian ladies covering piano legs with skirts, although we understand that that's only an urban legend. The sentence above, however? All too real; from here.

Bring me my design filter?

Book Stumps Decoders: Design Filter, Please?
Evolution News & Views 

Here's a new book about an old book. The new book was designed for a purpose: to try to understand an old book that's a mystery. We know the author of the new book; the author of the old book, the Voynich manuscript, is unknown. Raymond Clemens explores the mystery in The Voynich Manuscript (Yale University Press), examining "a work that has long defied decoders." Reviewing Clemens's book for Nature, Andrew Robinson calls our attention to this "calligraphic conundrum," providing another opportunity to think about intelligent design theory.

In the past, we've explored two cases of intelligent design science in action: cryptology and archaeology. Both of them unite in this story that will intrigue puzzle aficionados. Clemens introduces the riddle of the Voynich manuscript:

In a Connecticut archive sits a manuscript justifiably called the most mysterious in the world. Since its rediscovery more than a century ago, the Voynich manuscript has been puzzled over by experts ranging from leading US military cryptographer William Friedman to cautious (and incautious) humanities scholars. Since 1969, it has been stored in Yale University's Beinecke Rare Book and Manuscript Library in New Haven.
The fine calligraphy of the 234-page 'MS 408', apparently alphabetic, has never been decoded. Copious illustrations of bathing women, semi-recognizable plants and apparent star maps remain undeciphered. No one knows who created it or where, and there is no reliable history of ownership. Its parchment was radiocarbon-dated in 2009 to between 1404 and 1438, with 95% probability. The manuscript could still be a forgery using medieval parchment, but most experts, including Yale's, are convinced it is genuine. [Emphasis added.]

It's like trying to read Egyptian hieroglyphics without a Rosetta Stone. Who wrote it? Why? What does it mean? The best minds in the world have not figured it out for six centuries. Want to try? You can view the whole thing online at the Beinecke digital library. Solve it and you'll be famous.

Giving his article some Indiana Jones mystique, Robinson describes the cloak-and-dagger route of the manuscript from a Jesuit archive, where it was sold "under condition of absolute secrecy" to a shady antiquities dealer named Wilfrid Voynich, then on to another dealer, and finally to its current home at Yale. While interesting, those facts don't concern our current discussion about the validity of the inference to intelligent design.

The story of the various failed attempts to decipher the script, told by Clemens and Renaissance scholar William Sherman, is particularly fascinating. It begins in the 1920s, when US philosopher William Newbold convinced himself that the text was meaningless, but that each letter concealed an ancient Greek shorthand readable under magnification. He further claimed that this 'finding' proved the authorship of [Roger] Bacon, who he claimed had invented a microscope centuries before Antonie van Leeuwenhoek. After Newbold's death, the 'shorthand' was revealed to be random cracks left by drying ink.
There's a hint of design inference right there: how does one tell the difference between intentional calligraphy and cracks left by drying ink? Newbold was mistaken. He committed a false positive, calling something designed when it was not designed. Proper use of the Design Filter would have prevented his mistake.

What hope is there of decoding the script? Not much at present, I fear. The Voynich manuscript reminds me of another uncracked script, on the Phaistos disc from Minoan Crete, discovered in 1908. The manuscript offers much more text to analyse than does the disc, but in each case there is only one sample to work with, and no reliable clue as to the underlying language -- no equivalent of the Rosetta Stone (A. Robinson Nature 483, 27-28; 2012). Professional cryptographers have been rightly wary of the Voynich manuscript ever since the disastrous self-delusion of Newbold. But inevitably, many sleuths will continue to attack the problem from various angles, aided by this excellent facsimile. Wide margins are deliberately provided for readers' notes on their own ideas. "Bonne chance!" writes Clemens. I'll second that.
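Robinson notes that professional cryptographers remain wary and that sleuths keep attacking the problem from various angles. Purely as an illustration of the kind of first-pass statistics such sleuths compute (not an analysis of MS 408 itself), here is a minimal Python sketch of two standard measures, Shannon entropy and the index of coincidence, used to gauge whether a symbol stream behaves more like natural language or like random noise; the `sample` string below is just a stand-in for a real transcription.

```python
from collections import Counter
from math import log2

def shannon_entropy(text):
    """Bits per symbol. Letter-level English sits roughly around 4 bits;
    uniformly random text over the same alphabet scores higher."""
    n = len(text)
    counts = Counter(text)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def index_of_coincidence(text):
    """Chance that two randomly drawn symbols match. English plaintext is
    near 0.066; uniformly random 26-letter text is near 1/26 ~ 0.038."""
    n = len(text)
    counts = Counter(text)
    return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

# Placeholder input: a real test would use a transcription of the manuscript's
# glyphs (e.g., the EVA transliteration), not this English filler text.
sample = "thequickbrownfoxjumpsoverthelazydog" * 20
print(f"entropy: {shannon_entropy(sample):.3f} bits/symbol")
print(f"index of coincidence: {index_of_coincidence(sample):.4f}")
```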
Before leaping from a clearly designed book to applying the design inference in reference to a living cell (you suspect that's where we are headed, right?), let's review some facts about design theory.

It's not necessary to know the identity of the designer.

It's not necessary to know the purpose or meaning of the design.

Design is evident from the arrangement of parts themselves when chance and natural law can be effectively ruled out.
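These three points can be read as a decision procedure. The sketch below is only a toy rendering of that logic, with caller-supplied probability estimates and an arbitrary illustrative cutoff; it is not a published formulation of the Design Filter. On this toy reading, Newbold's error is caught at the first step: ordinary drying ink readily produces such cracks, so the filter never reaches the design node.

```python
def design_filter(event, prob_under_law, prob_under_chance, chance_bound=1e-50):
    """Toy sketch of the filter's logic: attribute an event to natural law,
    to chance, or -- only when both are effectively ruled out AND the event
    matches an independently given pattern -- to design.

    The probabilities are caller-supplied estimates; chance_bound is an
    arbitrary illustrative cutoff, not a canonical value.
    """
    if prob_under_law > 0.5:
        return "natural law"        # a regularity readily produces the event
    if prob_under_chance > chance_bound:
        return "chance"             # not improbable enough to exclude chance
    if event.get("specified", False):
        return "design"             # improbable AND independently specified
    return "unattributed"           # improbable but not specified

# Newbold's ink cracks: ordinary drying readily produces them, so the filter
# stops at the first step and never reaches "design".
print(design_filter({"specified": True}, prob_under_law=0.9, prob_under_chance=0.9))
```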

Viewers may recall the scenes of Egyptian hieroglyphics in Unlocking the Mystery of Life, where the narrator says, "No one would attribute the shapes and arrangements of these symbols to natural causes, like sandstorms or erosion. Instead, we recognize them as the work of ancient scribes, intelligent human agents." Wind and erosion can create remarkable patterns, but not markings like those. We immediately recognize them as symbols, even though no one understood them until the Rosetta Stone enabled their decipherment.

The same is true with the Voynich manuscript. As Doug Axe argues in Undeniable, our universal design intuition immediately recognizes the difference between designed objects and the work of unguided processes. The theory of intelligent design formalizes our intuition in robust ways.

Now we can address the comparison of DNA to cryptic writing. One might object that the two cases are too different to compare. Imagine the response: Everybody knows that books and drawings of plants and bathing women are made by human beings. DNA is made of chemicals. It's called a genetic "code," but humans don't write that way. We just use the word "code" as a metaphor.

Oh? Remember Craig Venter? His team inscribed their names and other messages in the genome of their synthetic bacterium using DNA letters. Other bio-engineers have made molecular nanomachines out of DNA. Some are considering building DNA computers. Could an investigator unaware of these projects tell the difference between artificial DNA structures and living genomes? If not, the investigator would commit a false negative, calling something not designed when it is designed. If, on the other hand, the investigator did make a valid design inference for the artificial structures, why not use the same reasoning for the rest of the genome?
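To make the comparison concrete, here is a minimal sketch of one way a short message could be spelled out in DNA letters, using an arbitrary two-bits-per-base mapping. It is illustrative only; it is not the watermarking scheme Venter's team actually used, and the message is hypothetical.

```python
# Arbitrary 2-bit mapping: every byte of text becomes four DNA bases.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def text_to_dna(message):
    bits = "".join(f"{byte:08b}" for byte in message.encode("ascii"))
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_text(sequence):
    bits = "".join(BASE_TO_BITS[base] for base in sequence)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode("ascii")

watermark = text_to_dna("HELLO FROM THE AUTHORS")   # hypothetical message
print(watermark)               # a synthetic, specified base sequence
print(dna_to_text(watermark))  # decodes back to the original text
```

An investigator who recovered such a sequence and decoded it to readable text would have excellent grounds for a design inference, which is exactly the point at issue.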

Paul Davies, in fact, has considered the possibility that intelligent extraterrestrials might have left their mark in our DNA. Proving that would presuppose the ability to distinguish intelligent causes from natural causes. Natural laws are incapable of symbolic logic. Only minds can make symbols mean something or do something they would never naturally do. So it's not just ID advocates who look at DNA for evidence of intentional design. If DNA is indeed a code -- written in symbols that have meaning -- then a design inference is justified. For more on why speaking of a genetic "code" is more than just a metaphor, listen to Charles Thaxton on ID the Future.


The attempt to decipher the Voynich manuscript offers us another illustration of ID principles at work, right in the pages of Nature. One can't dismiss ID as unscientific while its reasoning is being applied to a scientific question in the world's leading science journal.

Thursday, 19 January 2017

Yet more on Darwinism's convenient convergences.

Sugar Gliders, Flying Squirrels, and How Evolutionists Explain Away Uncooperative Data
Cornelius Hunter

The scientific evidence contradicts evolutionary theory. Consider, for example, the problem of tracing out the mammalian evolutionary tree.

According to evolution, similar species should be neighbors on the evolutionary tree. For example, the flying squirrel and sugar glider certainly are similar -- they both sport distinctive "wings" stretching from arm to leg. Shouldn't they be neighboring species? The problem is that, while they have incredible similarities, they also have big differences. Most notably, the flying squirrel is a placental mammal and the sugar glider is a marsupial. So they must be placed far apart in the mammalian evolutionary tree. The problem in this example is that the different characters across the two species are not congruent. Here is how evolutionists rationalize the contradiction:

Flying squirrels and sugar gliders are only distantly related. So why do they look so similar then? Their gliding "wings" and big eyes are analogous structures. Natural selection independently adapted both lineages for similar lifestyles: leaping from treetops (hence, the gliding "wings") and foraging at night (hence, the big eyes). [Emphasis added.]
This is a good example of how contradictory evidence drives evolutionists to embrace irrational just-so stories. Natural selection cannot "adapt" anything. Natural selection kills off the bad designs. It cannot influence the random mutations that must, somehow, come up with such amazing designs. This is the hard reality, but in order to rationalize the evidence, evolutionists must resort to this sort of teleological language, personifying and endowing natural selection with impossible powers. As often happens, a distinctive grammatical form -- "for similar lifestyles" -- is a dead giveaway. Natural selection becomes a designer.

This example is by no means exceptional. In fact, this sort of incongruence is rampant in biology. Evolutionists have attempted to deny it in the past, but it is undeniable. It is the rule rather than the exception. As one recent paper, entitled "Mammal madness: is the mammal tree of life not yet resolved?" admitted:

Despite the keen interest in mammals, the evolutionary history of this clade has been and remains at the center of heated scientific debates. In part, these controversies stem from the widespread occurrence of convergent morphological characters in mammals.
In addition to the morphological characters, evolutionists make extensive use of molecular sequence data using the so-called molecular clock method. This method, however, has a long history of problems. You can see here and here how the molecular clock method has failed, but an entirely different problem is the non-scientific misuse of this approach. Consider how evolutionists have misused it in the mammalian evolutionary tree problem:

Two articles in this issue address one such node, the root of the tree of living placental mammals, and come to different conclusions. The timing of the splitting event -- approximately 100 Ma based on molecular clocks -- is not in debate, at least among molecular evolutionists. Rather the question is the branching order of the three major lineages: afrotherians (e.g., elephants, manatees, hyraxes, elephant shrews, aardvarks, and tenrecs), xenarthrans (sloths, anteaters, and armadillos), and boreoeutherians (all other placentals; fig. 1).
Such overly optimistic interpretation of the molecular clock results unfortunately has a long history. Dan Graur and William Martin have shown how such overconfidence became common in evolutionary studies. They write:

We will relate a dating saga of ballooning inapplicability and snowballing error through which molecular equivalents of the 23rd October 4004 BC date have been mass-produced in the most prestigious biology journals.
Graur and Martin chronicle how a massive uncertainty was converted to, err, zero, via a sequence of machinations, including the arbitrary filtering out of data simply because they do not fit the theory:

A solution to the single-calibration conundrum would be to use multiple primary calibrations because such practices yield better results than those obtained by relying on a single point. Indeed, it was stated that "the use of multiple calibration points from the fossil record would be desirable if they were all close to the actual time of divergence." However, because no calibrations other than the 310 +/- 0 MYA value were ever used in this saga, the authors must have concluded that none exists. This is not true. Moreover, deciding whether a certain fossil is "close to the actual time of divergence" presupposes a prior knowledge of the time of divergence, which in turn will make the fossil superfluous for dating purposes.
Not only are uncooperative data discarded, but tests are altogether dropped if they don't produce the right answer:

The results indicated that 25% of the homologous protein sets in birds and mammals failed the first part of the consistency test, that is, in one out of four cases the data yielded divergence times between rodents and primates that were older than those obtained for the divergence between synapsids and diapsids. One protein yielded the absurd estimate of 2333 MYA for the human-chicken divergence event, and as an extreme outlier was discarded. For the remaining proteins, the mean bird-mammalian divergence estimate was 393 MYA with a 95% confidence interval of 471-315 MYA. In other words, the 310 MYA landmark was not recovered. Because neither condition of the consistency test was met, it was concluded that the use of the secondary calibration is unjustified.
In one example, a monumental dating uncertainty, roughly equal to the age of the universe, is magically reduced by a factor of 40:

Were calibration and derivation uncertainties taken into proper consideration, the 95% confidence interval would have turned out to be at least 40 times larger (~14.2 billion years).
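The arithmetic behind Graur and Martin's complaint is easy to sketch. The toy calculation below uses illustrative numbers only (not the actual data from the studies they criticize): under a strict molecular clock, a divergence is dated as genetic distance divided by a rate pinned to a single fossil calibration, and treating that calibration as exact (310 +/- 0 MYA) hides the uncertainty that should be propagated into the answer.

```python
def divergence_estimate(dist_query, dist_calib, calib_time):
    """Strict-clock estimate: rate is fixed by one fossil calibration
    (rate = dist_calib / calib_time), then time = dist_query / rate."""
    rate = dist_calib / calib_time
    return dist_query / rate

# Illustrative numbers only, not the published data.
dist_calib = 0.80    # substitutions per site on the calibration comparison
dist_query = 0.26    # substitutions per site on the comparison being dated
calib_time = 310.0   # MYA, the single calibration treated as exact

print(f"point estimate: {divergence_estimate(dist_query, dist_calib, calib_time):.0f} MYA")

# If the calibration could reasonably lie anywhere in 290-340 MYA and each
# distance carries ~20% uncertainty, the naive interval widens substantially:
low  = divergence_estimate(dist_query * 0.8, dist_calib * 1.2, 290.0)
high = divergence_estimate(dist_query * 1.2, dist_calib * 0.8, 340.0)
print(f"rough propagated range: {low:.0f}-{high:.0f} MYA")
```

With these made-up inputs the point estimate lands near the 100 Ma figure quoted above, but the propagated range is several times wider; Graur and Martin document the same pattern at a far larger scale.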

Now of course there is little question that evolutionists will resolve their evolutionary tree problems. A combination of filtering the data, selecting the right method, and, of course, deciding there is nothing at all improbable about natural selection "adapting" designs in all manner of ways, can solve any problem. But at what cost? As the paper concludes, "Unfortunately, no matter how great our thirst for glimpses of the past might be, mirages contain no water."

On materialism's latest god.

How Physicists Learned to Love the Multiverse

Cornelius Hunter



Theoretical physicist Tasneem Zehra Husain has an excellent article on the multiverse in this month's Nautilus. In this age of the expert whom we must trust to give us the truth, Husain's transparent, clear explanation of some of the underlying philosophical concerns regarding the multiverse is refreshing. I only wish her writing were more aware of the historical plenitude tradition. Many of the philosophical concerns regarding the multiverse interact heavily with, or are even mandated by, plenitude thinking. Husain's article makes those concerns quite clear, and locating them in the historical matrix of plenitude traditions would further enrich and elucidate her explanation of the multiverse hypothesis.

Plenitude thinking holds that everything that can exist will exist. As Arthur Lovejoy observed, it has had an obvious influence on a range of thinkers since antiquity, from Bruno's infinity of worlds (read: extraterrestrials) to Leibniz's view that the species are "closely united" and "men are linked with the animals."

Though I don't suspect plenitude thinking had a direct influence on the initial development of the multiverse hypothesis, it doesn't take a physicist to see a fairly obvious connection. If everything that can exist will exist, then why should there be only one universe?

But a more interesting interaction comes in how physicists evaluate and justify the multiverse hypothesis, which, after all, isn't very satisfying. With the multiverse, difficult scientific questions are answered not with clever, enlightening solutions but with a sledgehammer. Things are the way they are because things are every possible way they could be. We are merely living in one particular universe, with one set of circumstances, so that is what we observe. But every possible set of circumstances exists out there in the multiverse. There is no profound explanation for our incredible world. No matter how complicated, no matter how unlikely, no matter how uncanny, our world is just another ho-hum universe. All outcomes exist, and all are equally likely. Nothing special here, move along.

As Princeton cosmologist Paul Steinhardt puts it, the multiverse is the "Theory of Anything," because it allows everything but explains nothing. Given this rather unsatisfying aspect of the multiverse, how can it be defended?

Enter plenitude thinking. An important theme in plenitude thinking is that there should be no arbitrary designs in nature. If everything that can exist will exist, then no particular design will be realized to the exclusion of others that are equally possible.

This has become a powerful element in evolutionary philosophies of science. As Leibniz explained, the entire, continuous, range of designs should be manifest in nature, rather than a particular, arbitrary design. That would be capricious.

This rule holds unless there is sufficient reason for it not to (Leibniz's PSR). If only one design can arise in the first place, due to some reason or technicality, then all is good -- the design is no longer viewed as arbitrary. The problem is, we can find no such reason or technicality for our universe. It seems any old universe could just as easily arise.

Plenitude thinking mandates that the designs we find in nature should fill the space of feasible designs. We should not find particular designs where others are possible. But this seems to be precisely what we find in our universe. It is a particular design where others are possible. Theoreticians have been unable to find any reason why this design should have occurred.

If we say the universe was designed, then it is a design that is arbitrary, and that violates the Principle of Plenitude. The solution to this conundrum is the multiverse.

This is how physicists can learn to love the multiverse. Yes, it is a sledgehammer approach, but it satisfies plenitude thinking. Our universe is no longer arbitrary. Instead, the full range of universes exists out there. Husain beautifully explains this, and here is the money passage:

For decades, scientists have looked for a physical reason why the [universe's] fundamental constants should take on the values they do, but none has thus far been found. ... But to invoke design isn't very popular either, because it entails an agency that supersedes natural law. That agency must exercise choice and judgment, which -- in the absence of a rigid, perfectly balanced, and tightly constrained structure, like that of general relativity -- is necessarily arbitrary. There is something distinctly unsatisfying about the idea of there being several logically possible universes, of which only one is realized. If that were the case, as cosmologist Dennis Sciama said, you would have to think "there's [someone] who looks at this list and says 'well we're not going to have that one, and we won't have that one. We'll have that one, only that one.' "
Personally speaking, that scenario, with all its connotations of what could have been, makes me sad. Floating in my mind is a faint collage of images: forlorn children in an orphanage in some forgotten movie when one from the group is adopted; the faces of people who feverishly chased a dream, but didn't make it; thoughts of first-trimester miscarriages. All these things that almost came to life, but didn't, rankle. Unless there's a theoretical constraint ruling out all possibilities but one, the choice seems harsh and unfair.

Clearly such an arbitrary design of the universe is unacceptable. (By the way, Husain also raises the problem of evil as an associated concern: if the universe was designed, then "how are we to explain needless suffering?")

The multiverse solves all this. True, the multiverse is an unsatisfactory, sledgehammer approach. But it saves plenitude, and that is the more important consideration.

Husain's article is a thoughtful, measured explanation of how physicists today are reckoning with the multiverse hypothesis. But make no mistake, religion does the heavy lifting. The centuries-old plenitude thinking is a major theme, running all through the discourse. That, along with a sprinkling of the problem of evil, makes for decisive arguments.

The multiverse is another good example of how religion drives science in ways that are far more complex than is typically understood.