Search This Blog

Sunday 22 January 2017

Neville Chamberlain was right to seek peace in his time?: Pros and cons.

The original technologist continues to school humankind's johnny-come-latelies.

The World's Ideal Storage Medium Is "Beyond Silicon"
Evolution News & Views

The world is facing a data storage crisis. As information proliferates in everything from YouTube videos to astronomical images to emails, the need for storing that data is growing exponentially. If trends continue, data centers will have used up the world's microchip-grade silicon before 2040.

But there is another storage medium made of abundant atoms of carbon, hydrogen, oxygen, nitrogen, and phosphorus. It's called DNA. And you wouldn't need much of it. The entire world's data could be stored in just one kilogram of the stuff. So says Andy Extance in an intriguing article in Nature, "How DNA could store all the world's data."

For Nick Goldman, the idea of encoding data in DNA started out as a joke.
It was Wednesday 16 February 2011, and Goldman was at a hotel in Hamburg, Germany, talking with some of his fellow bioinformaticists about how they could afford to store the reams of genome sequences and other data the world was throwing at them. He remembers the scientists getting so frustrated by the expense and limitations of conventional computing technology that they started kidding about sci-fi alternatives. "We thought, 'What's to stop us using DNA to store information?'"

Then the laughter stopped. "It was a lightbulb moment," says Goldman, a group leader at the European Bioinformatics Institute (EBI) in Hinxton, UK. [Emphasis added.]

Since that day, several companies have begun turning this "joke" into serious business. The Semiconductor Research Corporation (SRC) is backing it. IBM is getting on board. And the Defense Department has hosted workshops with major corporations, which is sure to lead to funding. The UK is already funding research into next-generation approaches to DNA storage.

When you look at Extance's chart, it's easy to see why DNA is "one of the strongest candidates yet" to replace silicon as the storage medium of the future. The read-write speed is about 30 times faster than your computer's hard drive. The expected data retention is 10 times longer. The power usage is ridiculously low, almost a billion times less than flash memory. And the data density is an astonishing 10^19 bits per cubic centimeter, a thousand times more than flash memory and a million times more than a hard disk. At that density, the entire world's data could fit in one kilogram of DNA.

As with any new technology, baby steps are slow. Technicians face challenges of designing DNA strands to encode data, searching for it, and reading it back out reliably. How does one translate the binary bits in silicon into the A, C, T, and G of nucleic acids? Can DNA strands be manufactured cheaply enough? How can designers proofread the input?
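
To make the translation question concrete, here is a minimal sketch in Python, purely our own illustration rather than any lab's actual pipeline, of the most direct mapping: two bits per nucleotide.

```python
# Hypothetical two-bits-per-base mapping; any assignment of the four
# bases to the four two-bit values works, as long as both sides agree.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {v: k for k, v in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Translate bytes into a DNA string, two bits per nucleotide."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    """Recover the original bytes from the DNA string."""
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

assert decode(encode(b"Sonnet 18")) == b"Sonnet 18"
```

Real schemes are more elaborate, partly because long runs of the same base are hard to synthesize and sequence accurately, which is one motivation for the "trit" approach described below.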

Living things, though, have already solved these issues. After all, "a whole human genome fits into a cell that is invisible to the naked eye," Extance says. As for speed, DNA is accessed by numerous molecular machines simultaneously throughout the nucleus that know exactly where to start and stop reading. Genomic machinery in the cell proofreads errors to one typo per hundred billion bases, as Dr. Lee Spetner notes in his book Not by Chance! That's equivalent, he says, to the lifetime output of about 100 professional typists.

Life shows that it is possible in principle to overcome these challenges. That gives hope to the engineers on the cutting edge of DNA storage. Already, several experimenters have succeeded in encoding information in DNA. By 2013, EBI had encoded Shakespeare's sonnets and Martin Luther King's "I have a dream" speech. IBM and Microsoft topped that 739-kilobyte effort shortly after with 200 megabytes of storage. As far back as 2010, Craig Venter's lab encoded text within the genome of his synthetic bacterium, as Casey Luskin reported here. Everything alive demonstrates that DNA is already the world's most flexible and useful storage medium. We just need to learn how to harness the technology.

Goldman's EBI lab and other labs are thinking of ways to ensure accuracy. One method converts bits into "trits" (combinations of 0, 1, and 2) in an error-correcting scheme. Engineers are sure to think of robust solutions, just like the pioneers of digital computers did with parity bits and other mechanisms to guarantee accurate transmission over wired and wireless communications.
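
For a flavor of how such a scheme works, here is a simplified Python sketch of trit encoding. It is an assumed toy version, not the EBI group's exact method (which, as published, also used a Huffman code for density), but it shows the key trick: because each trit selects a base different from the previous one, the strand never contains a repeated letter, avoiding error-prone homopolymer runs.

```python
# Each base-3 digit ("trit") picks one of the three bases that differ
# from the previously written base, so no base ever repeats.
NEXT_BASE = {
    None: "ACG",                      # arbitrary choice for the first base
    "A": "CGT", "C": "GTA", "G": "TAC", "T": "ACG",
}

def bytes_to_trits(data: bytes) -> list:
    """Re-express each byte as six base-3 digits (3**6 = 729 >= 256)."""
    trits = []
    for b in data:
        for _ in range(6):
            trits.append(b % 3)
            b //= 3
    return trits

def trits_to_dna(trits) -> str:
    prev, out = None, []
    for t in trits:
        prev = NEXT_BASE[prev][t]     # always differs from the last base
        out.append(prev)
    return "".join(out)

def dna_to_bytes(dna: str) -> bytes:
    prev, trits = None, []
    for base in dna:
        trits.append(NEXT_BASE[prev].index(base))
        prev = base
    return bytes(
        sum(d * 3 ** j for j, d in enumerate(trits[i:i + 6]))
        for i in range(0, len(trits), 6)
    )

message = b"To be, or not to be"
dna = trits_to_dna(bytes_to_trits(message))
assert dna_to_bytes(dna) == message
assert all(a != b for a, b in zip(dna, dna[1:]))  # no repeated bases
```

Parity bits and other redundancy can be layered on top of an encoding like this in the same spirit.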

How long could DNA storage last? That's another potential advantage -- better than existing technology by orders of magnitude:

...these results convinced Goldman that DNA had potential as a cheap, long-term data repository that would require little energy to store. As a measure of just how long-term, he points to the 2013 announcement of a horse genome decoded from a bone trapped in permafrost for 700,000 years. "In data centres, no one trusts a hard disk after three years," he says. "No one trusts a tape after at most ten years. Where you want a copy safe for more than that, once we can get those written on DNA, you can stick it in a cave and forget about it until you want to read it."
With these advantages of density, stability, and durability, DNA is creating a burgeoning field of research. Worries about random access are already being overcome. With techniques like PCR and CRISPR/Cas9, we can expect that any remaining challenges will be solved. Look at what our neighbors at the University of Washington recently achieved:

As a demonstration, the Microsoft-University of Washington researchers stored 151 kB of images, some encoded using the EBI method and some using their new approach, in a single pool of strings. They extracted three -- a cat, the Sydney opera house and a cartoon monkey -- using the EBI-like method, getting one read error that they had to correct manually. They also read the Sydney Opera House image using their new method, without any mistakes.
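
How does one pull a single file out of a pool of molecules? As usually described, each stored strand carries a short "address" sequence that a PCR primer can latch onto, so only matching strands get amplified and read. Here is a toy in-silico model; the addresses and payloads are invented for illustration, not taken from the researchers' work.

```python
# Toy model of PCR-based random access: every strand carries a short
# "address" sequence, and retrieval keeps only strands whose address
# matches the chosen primer.
pool = [
    ("ATTACG", "payload: cat image"),
    ("GGCATA", "payload: Sydney Opera House image"),
    ("TTGGCC", "payload: cartoon monkey image"),
]

def pcr_select(pool, primer):
    """In-silico stand-in for selective PCR amplification."""
    return [payload for address, payload in pool if address == primer]

print(pcr_select(pool, "GGCATA"))  # -> ['payload: Sydney Opera House image']
```
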
Market forces drive innovation. The promise of DNA storage is so attractive, funding and capital are sure to follow. DNA synthesizing machines will come. Random-access machines with efficient search algorithms will be invented. Successes and new products will drive down prices. As with Moore's Law for silicon, the race for better DNA storage products will accelerate once it moves from lab to market. Extance concludes:

Goldman is confident that this is just a taste of things to come. "Our estimate is that we need 100,000-fold improvements to make the technology sing, and we think that's very credible," he says. "While past performance is no guarantee, there are new reading technologies coming onstream every year or two. Six orders of magnitude is no big deal in genomics. You just wait a bit."
So, here we have the best minds in information technology urgently trying to catch up to storage technologies that have been in use since life began. They're only a few billion years late to the party. The implications are as profound as they are intuitive.

Speaking of intuition, Douglas Axe in his recent book Undeniable: How Biology Confirms Our Intuition That Life Is Designed defines a quality he calls functional coherence: "the hierarchical arrangement of parts needed for anything to produce a high-level function -- each part contributing in a coordinated way to the whole." He writes:

No high-level function is ever accomplished without someone thinking up a special arrangement of things and circumstances for that very purpose and then putting those thoughts into action. The hallmark of all these special arrangements is high-level functional coherence, which we now know comes only by insight -- never by coincidence.

Scientists are seeking to match the same level of functional coherence that can be observed every second in the cells of our own bodies, and of the simplest microbes. The conclusion to draw from this hardly needs to be stated.

We feel your pain, Mike.

Irony Alert: Michael Shermer on "When Facts Fail"
Cornelius Hunter

When an evolutionist, such as Michael Shermer in this case, warns readers that people don't change their minds even when presented with the facts, the irony should be savored. Shermer writes in Scientific American ("How to Convince Someone When Facts Fail"):

Have you ever noticed that when you present people with facts that are contrary to their deepest held beliefs they always change their minds? Me neither. In fact, people seem to double down on their beliefs in the teeth of overwhelming evidence against them. The reason is related to the worldview perceived to be under threat by the conflicting data. [Emphasis added.]
Yes, there certainly are conflicting data. It gets worse:

Creationists, for example, dispute the evidence for evolution in fossils and DNA because they are concerned about secular forces encroaching on religious faith.
Evidence for evolution in DNA? What exactly would that be? Ultra-conserved elements, orphans, replication, duplication, the universal DNA code, protein synthesis, protein coding genes, genetic regulation, recurrent evolution, convergence, cascades of convergence, and... well, you get the idea. This evolutionist is demonstrating some of those "facts that fail" and the attendant doubling down, right before our eyes.

And what about those fossils? More "evidence for evolution"? How about those fossils that appear "as though they were planted there," as Richard Dawkins once admitted. One of those "planted" classes, the humble trilobites, had eyes that were perhaps the most complex ever produced by nature. (1) One expert called them "an all-time feat of function optimization."

And even Shermer's go-to source, Wikipedia, admits ancestral forms, err, "do not seem to exist":

Early trilobites show all the features of the trilobite group as a whole; transitional or ancestral forms showing or combining the features of trilobites with other groups (e.g. early arthropods) do not seem to exist.
Likewise, even the evolutionist Niles Eldredge admitted (2) they didn't make sense in light of standard evolutionary theory:

If this theory were correct, then I should have found evidence of this smooth progression in the vast numbers of Bolivian fossil trilobites I studied. I should have found species gradually changing through time, with smoothly intermediate forms connecting descendant species to their ancestors.
Instead I found most of the various kinds, including some unique and advanced ones, present in the earliest known fossil beds. Species persisted for long periods of time without change. When they were replaced by similar, related (presumably descendant) species, I saw no gradual change in the older species that would have allowed me to predict the anatomical features of its younger relative.

And it just gets worse:

The story of anatomical change through time that I read in the Devonian trilobites of Gondwana is similar to the picture emerging elsewhere in the fossil record: long periods of little or no change, followed by the appearance of anatomically modified descendants, usually with no smoothly intergradational forms in evidence.
Any more facts, Michael Shermer?

Notes:

(1) Lisa J. Shawver, "Trilobite Eyes: An Impressive Feat of Early Evolution," Science News, Vol. 105, February 2, 1974, p. 72.


(2) Niles Eldredge, "An Extravagance of Species," Natural History, Vol. 89, No. 7, The American Museum of Natural History, 1980, p. 50.

Friday 20 January 2017

The English language is doomed?: Pros and cons.

On the C-word.

Whatever You Do, Don't Say "Irreducible Complexity"
Evolution News & Views

While browsing through the articles forthcoming in the Journal of Molecular Evolution, we ran across the following sentence:

Since the subject of cellular emergence of life is unusually complicated (we avoid the term 'complex' because of its association with 'biocomplexity' or 'irreducible complexity'), it is unlikely that any overall theory of life's nature, emergence, and evolution can be fully formulated, quantified, and experimentally investigated.

Shhh! Don't say...well, just don't say THAT word. You know the one. The "c" word...ending in "x." Because people might think of...you know. The irreducible thing and that pest Michael Behe.

What are you doing -- you said his name! Don't do that!

Oh, and isn't BIO-Complexity the title of a peer-reviewed science journal open to examining ideas supportive of intelligent design? Yes. In that case, whatever you do, don't say "biocomplexity," either!

Say "complicated" instead. "Rather complicated." That's better. Fewer of those nasty associations.

Alas, trying desperately to avoid a topic by policing one's language or thought only draws attention to the very topic one seeks to avoid. Psychologists call the phenomenon the "white bear problem."


An example might be Victorian ladies covering piano legs with skirts, although we understand that's only an urban legend. The sentence above, however? All too real; it's from here.

Bring me my design filter?

Book Stumps Decoders: Design Filter, Please?
Evolution News & Views 

Here's a new book about an old book. The new book was designed for a purpose: to try to understand an old book that's a mystery. We know the author of the new book; the author of the old book, the Voynich manuscript, is unknown. Raymond Clemens explores the mystery in The Voynich Manuscript (Yale University Press), examining "a work that has long defied decoders." Reviewing Clemens's book for Nature, Andrew Robinson calls our attention to this "calligraphic conundrum," providing another opportunity to think about intelligent design theory.

In the past, we've explored two cases of intelligent design science in action: cryptology and archaeology. Both of them unite in this story that will intrigue puzzle aficionados. Clemens introduces the riddle of the Voynich manuscript:

In a Connecticut archive sits a manuscript justifiably called the most mysterious in the world. Since its rediscovery more than a century ago, the Voynich manuscript has been puzzled over by experts ranging from leading US military cryptographer William Friedman to cautious (and incautious) humanities scholars. Since 1969, it has been stored in Yale University's Beinecke Rare Book and Manuscript Library in New Haven.
The fine calligraphy of the 234-page 'MS 408', apparently alphabetic, has never been decoded. Copious illustrations of bathing women, semi-recognizable plants and apparent star maps remain undeciphered. No one knows who created it or where, and there is no reliable history of ownership. Its parchment was radiocarbon-dated in 2009 to between 1404 and 1438, with 95% probability. The manuscript could still be a forgery using medieval parchment, but most experts, including Yale's, are convinced it is genuine. [Emphasis added.]

It's like trying to read Egyptian hieroglyphics without a Rosetta Stone. Who wrote it? Why? What does it mean? The best minds in the world have not figured it out for six centuries. Want to try? You can view the whole thing online at the Beinecke digital library. Solve it and you'll be famous.

Giving his article some Indiana Jones mystique, Robinson describes the cloak-and-dagger route of the manuscript: sold out of a Jesuit archive "under condition of absolute secrecy" to the antiquities dealer Wilfrid Voynich, then passing to another dealer, and finally arriving at its current home at Yale. While interesting, those facts don't concern our current discussion about the validity of the inference to intelligent design.

The story of the various failed attempts to decipher the script, told by Clemens and Renaissance scholar William Sherman, is particularly fascinating. It begins in the 1920s, when US philosopher William Newbold convinced himself that the text was meaningless, but that each letter concealed an ancient Greek shorthand readable under magnification. He further claimed that this 'finding' proved the authorship of [Roger] Bacon, who he claimed had invented a microscope centuries before Antonie van Leeuwenhoek. After Newbold's death, the 'shorthand' was revealed to be random cracks left by drying ink.
There's a hint of design inference right there: how does one tell the difference between intentional calligraphy and cracks left by drying ink? Newbold was mistaken. He found a false positive, something that wasn't designed that he thought was designed. Proper use of the Design Filter would have prevented his mistake.

What hope is there of decoding the script? Not much at present, I fear. The Voynich manuscript reminds me of another uncracked script, on the Phaistos disc from Minoan Crete, discovered in 1908. The manuscript offers much more text to analyse than does the disc, but in each case there is only one sample to work with, and no reliable clue as to the underlying language -- no equivalent of the Rosetta Stone (A. Robinson Nature 483, 27-28; 2012). Professional cryptographers have been rightly wary of the Voynich manuscript ever since the disastrous self-delusion of Newbold. But inevitably, many sleuths will continue to attack the problem from various angles, aided by this excellent facsimile. Wide margins are deliberately provided for readers' notes on their own ideas. "Bonne chance!" writes Clemens. I'll second that.
Before leaping from a clearly designed book to applying the design inference in reference to a living cell (you suspect that's where we are headed, right?), let's review some facts about design theory.

It's not necessary to know the identity of the designer.

It's not necessary to know the purpose or meaning of the design.

Design is evident from the arrangement of parts themselves when chance and natural law can be effectively ruled out.

Viewers may recall the scenes of Egyptian hieroglyphics in Unlocking the Mystery of Life, where the narrator says, "No one would attribute the shapes and arrangements of these symbols to natural causes, like sandstorms or erosion. Instead, we recognize them as the work of ancient scribes, intelligent human agents." Wind and erosion can create remarkable patterns, but not markings like those. We immediately recognize them as symbols, even if no one understood them until the Rosetta Stone was deciphered.

The same is true with the Voynich manuscript. As Doug Axe argues in Undeniable, our universal design intuition immediately recognizes the difference between designed objects and the work of unguided processes. The theory of intelligent design formalizes our intuition in robust ways.

Now we can address the comparison of DNA to cryptic writing. One might think the situation is too different to do so. Imagine the response: Everybody knows that books and drawings of plants and bathing women are made by human beings. DNA is made of chemicals. It's called a genetic "code," but humans don't write that way. We just use the word "code" as a metaphor.

Oh? Remember Craig Venter? His team inscribed their names and other messages in the genome of their synthetic bacterium using DNA letters. Other bio-engineers have made molecular nanomachines out of DNA. Some are considering building DNA computers. Could an investigator unaware of these projects tell the difference between artificial DNA structures and living genomes? If not, the investigator would commit a false negative, calling something not designed when it is designed. If, on the other hand, the investigator did make a valid design inference for the artificial structures, why not use the same reasoning for the rest of the genome?

Paul Davies, in fact, has considered the possibility that intelligent extraterrestrials might have left their mark in our DNA. Proving that would presuppose the ability to distinguish intelligent causes from natural causes. Natural laws are incapable of symbolic logic. Only minds can make symbols mean something or do something they would never naturally do. So it's not just ID advocates who look at DNA for evidence of intentional design. If DNA is indeed a code -- written in symbols that have meaning -- then a design inference is justified. For more on why speaking of a genetic "code" is more than just a metaphor, listen to Charles Thaxton on ID the Future.


The attempt to decipher the Voynich manuscript offers us another illustration of ID principles at work, right in the pages of Nature. One can't exclude ID as a scientific theory and then apply it to a scientific question in the world's leading science journal.

Thursday 19 January 2017

Yet more on Darwinism's convenient convergences.

Sugar Gliders, Flying Squirrels, and How Evolutionists Explain Away Uncooperative Data
Cornelius Hunter

The scientific evidence contradicts evolutionary theory. Consider, for example, the problem of tracing out the mammalian evolutionary tree.

According to evolution, similar species should be neighbors on the evolutionary tree. For example, the flying squirrel and sugar glider certainly are similar -- they both sport distinctive "wings" stretching from arm to leg. Shouldn't they be neighboring species? The problem is that, while they have incredible similarities, they also have big differences. Most notably, the flying squirrel is a placental and the sugar glider is a marsupial. So they must be placed far apart in the mammalian evolutionary tree. The problem in this example is that different characters, across the two species, are not congruent. Here is how evolutionists rationalize the contradiction:

Flying squirrels and sugar gliders are only distantly related. So why do they look so similar then? Their gliding "wings" and big eyes are analogous structures. Natural selection independently adapted both lineages for similar lifestyles: leaping from treetops (hence, the gliding "wings") and foraging at night (hence, the big eyes). [Emphasis added.]
This is a good example of how contradictory evidence drives evolutionists to embrace irrational just-so stories. Natural selection cannot "adapt" anything. Natural selection kills off the bad designs. It cannot influence the random mutations that must, somehow, come up with such amazing designs. This is the hard reality, but in order to rationalize the evidence, evolutionists must resort to this sort of teleological language, personifying and endowing natural selection with impossible powers. As often happens, a distinctive grammatical form -- "for similar lifestyles" -- is a dead giveaway. Natural selection becomes a designer.

This example is by no means exceptional. In fact, this sort of incongruence is rampant in biology. Evolutionists have attempted to deny it in the past, but it is undeniable. It is the rule rather than the exception. As one recent paper, entitled "Mammal madness: is the mammal tree of life not yet resolved?" admitted:

Despite the keen interest in mammals, the evolutionary history of this clade has been and remains at the center of heated scientific debates. In part, these controversies stem from the widespread occurrence of convergent morphological characters in mammals.
In addition to the morphological characters, evolutionists make extensive use of molecular sequence data using the so-called molecular clock method. This method, however, has a long history of problems. You can see here and here how the molecular clock method has failed, but an entirely different problem is the non-scientific misuse of this approach. Consider how evolutionists have misused it in the mammalian evolutionary tree problem:

Two articles in this issue address one such node, the root of the tree of living placental mammals, and come to different conclusions. The timing of the splitting event -- approximately 100 Ma based on molecular clocks -- is not in debate, at least among molecular evolutionists. Rather the question is the branching order of the three major lineages: afrotherians (e.g., elephants, manatees, hyraxes, elephant shrews, aardvarks, and tenrecs), xenarthrans (sloths, anteaters, and armadillos), and boreoeutherians (all other placentals; fig. 1).
Such overly optimistic interpretation of the molecular clock results unfortunately has a long history. Dan Graur and William Martin have shown how such overconfidence became common in evolutionary studies. They write:

We will relate a dating saga of ballooning inapplicability and snowballing error through which molecular equivalents of the 23rd October 4004 BC date have been mass-produced in the most prestigious biology journals.
Graur and Martin chronicle how a massive uncertainty was converted to, err, zero, via a sequence of machinations, including the arbitrary filtering out of data simply because they do not fit the theory:

A solution to the single-calibration conundrum would be to use multiple primary calibrations because such practices yield better results than those obtained by relying on a single point. Indeed, it was stated that "the use of multiple calibration points from the fossil record would be desirable if they were all close to the actual time of divergence." However, because no calibrations other than the 310 +/- 0 MYA value were ever used in this saga, the authors must have concluded that none exists. This is not true. Moreover, deciding whether a certain fossil is "close to the actual time of divergence" presupposes a prior knowledge of the time of divergence, which in turn will make the fossil superfluous for dating purposes.
Not only are uncooperative data discarded, but tests are altogether dropped if they don't produce the right answer:

The results indicated that 25% of the homologous protein sets in birds and mammals failed the first part of the consistency test, that is, in one out of four cases the data yielded divergence times between rodents and primates that were older than those obtained for the divergence between synapsids and diapsids. One protein yielded the absurd estimate of 2333 MYA for the human-chicken divergence event, and as an extreme outlier was discarded. For the remaining proteins, the mean bird-mammalian divergence estimate was 393 MYA with a 95% confidence interval of 471-315 MYA. In other words, the 310 MYA landmark was not recovered. Because neither condition of the consistency test was met, it was concluded that the use of the secondary calibration is unjustified.
In one example, a monumental dating uncertainty, roughly equal to the age of the universe, is magically reduced by a factor of 40:

Were calibration and derivation uncertainties taken into proper consideration, the 95% confidence interval would have turned out to be at least 40 times larger (~14.2 billion years).

Now of course there is little question that evolutionists will resolve their evolutionary tree problems. A combination of filtering the data, selecting the right method, and, of course, deciding there is nothing at all improbable about natural selection "adapting" designs in all manner of ways, can solve any problem. But at what cost? As the paper concludes, "Unfortunately, no matter how great our thirst for glimpses of the past might be, mirages contain no water."

On materialism's latest god.

How Physicists Learned to Love the Multiverse

Cornelius Hunter



Theoretical physicist Tasneem Zehra Husain has an excellent article on the multiverse in this month's Nautilus. In this age of the expert whom we must trust to give us the truth, Husain's transparent and clear explanation of some of the underlying philosophical concerns regarding the multiverse is refreshing. I only wish that her writing were more aware of the historical plenitude traditions. Many of the philosophical concerns regarding the multiverse interact heavily with, or even are mandated by, plenitude thinking. Husain makes this quite clear, and locating this thinking in the historical matrix of plenitude traditions would further enrich and elucidate her explanation of the multiverse hypothesis.

Plenitude thinking holds that everything that can exist will exist. As Arthur Lovejoy observed, it had an obvious influence on a range of thinkers since antiquity, including Bruno's infinity of worlds (read extra-terrestrials) and Leibniz's view that the species are "closely united," and "men are linked with the animals."

Though I don't suspect plenitude thinking had a direct influence on the initial development of the multiverse hypothesis, it doesn't take a physicist to see a fairly obvious connection. If everything that can exist will exist, then why should there be only one universe?

But a more interesting interaction comes in how physicists evaluate and justify the multiverse hypothesis, which, after all, isn't very satisfying. With the multiverse, difficult scientific questions are answered not with clever, enlightening solutions but with a sledgehammer. Things are the way they are because things are every possible way they could be. We are merely living in one particular universe, with one set of circumstances, so that is what we observe. But every possible set of circumstances exists out there in the multiverse. There is no profound explanation for our incredible world. No matter how complicated, no matter how unlikely, no matter how uncanny, our world is just another ho-hum universe. All outcomes exist, and all are equally likely. Nothing special here, move along.

As Princeton cosmologist Paul Steinhardt puts it, the multiverse is the "Theory of Anything," because it allows everything but explains nothing. Given this rather unsatisfying aspect of the multiverse, how can it be defended?

Enter plenitude thinking. An important theme in plenitude thinking is that there should be no arbitrary designs in nature. If everything that can exist will exist, then no particular designs will exist where others are also possible.

This has become a powerful element in evolutionary philosophies of science. As Leibniz explained, the entire, continuous, range of designs should be manifest in nature, rather than a particular, arbitrary design. That would be capricious.

This rule holds unless there is sufficient reason for it not to (Leibniz's Principle of Sufficient Reason). If only one design can arise in the first place, due to some reason or technicality, then all is good -- the design is no longer viewed as arbitrary. The problem is, we can find no such reason or technicality for our universe. It seems any old universe could just as easily arise.

Plenitude thinking mandates that the designs we find in nature should fill the space of feasible designs. We should not find particular designs where others are possible. But this seems to be precisely what we find in our universe. It is a particular design where others are possible. Theoreticians have been unable to find any reason for why this design should have occurred.

If we say the universe was designed, then it is a design that is arbitrary, and that violates the Principle of Plenitude. The solution to this conundrum is the multiverse.

This is how physicists can learn to love the multiverse. Yes, it is a sledgehammer approach, but it satisfies plenitude thinking. Our universe is no longer arbitrary. Instead, the full range of universes exists out there. Husain beautifully explains this, and here is the money passage:

For decades, scientists have looked for a physical reason why the [universe's] fundamental constants should take on the values they do, but none has thus far been found. ... But to invoke design isn't very popular either, because it entails an agency that supersedes natural law. That agency must exercise choice and judgment, which -- in the absence of a rigid, perfectly balanced, and tightly constrained structure, like that of general relativity -- is necessarily arbitrary. There is something distinctly unsatisfying about the idea of there being several logically possible universes, of which only one is realized. If that were the case, as cosmologist Dennis Sciama said, you would have to think "there's [someone] who looks at this list and says 'well we're not going to have that one, and we won't have that one. We'll have that one, only that one.' "
Personally speaking, that scenario, with all its connotations of what could have been, makes me sad. Floating in my mind is a faint collage of images: forlorn children in an orphanage in some forgotten movie when one from the group is adopted; the faces of people who feverishly chased a dream, but didn't make it; thoughts of first-trimester miscarriages. All these things that almost came to life, but didn't, rankle. Unless there's a theoretical constraint ruling out all possibilities but one, the choice seems harsh and unfair.

Clearly such an arbitrary design of the universe is unacceptable. (By the way, Husain also adds the problem of evil as an associated problem: If the universe was designed, then "how are we to explain needless suffering?")

The multiverse solves all this. True, the multiverse is an unsatisfactory, sledgehammer approach. But it saves plenitude, and that is the more important consideration.

Husain's article is a thoughtful, measured explanation of how physicists today are reckoning with the multiverse hypothesis. But make no mistake, religion does the heavy lifting. The centuries-old plenitude thinking is a major theme, running all through the discourse. That, along with a sprinkling of the problem of evil, makes for decisive arguments.

The multiverse is another good example of how religion drives science in ways that are far more complex than is typically understood.

Post-human nanny?

Turn Over Child-Raising to a Computer?
Brendan Dixon

When our kids were small, my wife and I used a baby monitor. It was quite basic: We could hear our kids (when we left the volume up) and watch a jerky jiggle of red lights as noises ebbed and flowed. But the monitor kept us engaged with our kids. We knew better than to answer their every squeak and cry, but were concerned (and what young, new parents are not?) to know if some real danger or problem arose. The monitor covered the distance between our actual and imagined fears.

Our baby monitor made us better, not poorer, parents. If the kids needed a nap, we could, without risk, lay them down and go on to other tasks. It freed us to garden, wash the cars, do the laundry, or whatever, and remain connected to our kids. In a sense, it extended our abilities. We remained the parents. We were still "in charge." The monitor was a tool to help us do better that which we already had to do, that which was ours to do.

That describes the proper use of technology: Good technology extends us so we can be better humans. Tools that ease work, enhance connections, augment vision, and so forth enlarge us. They enable us to do better that which we need to do and that which we ought to do. Technology, however, designed to squeeze humans out by replacing our unique skills makes us less than we are. Granted, we can use some tools either way, to extend or to reduce us. But raising children is uniquely human and best not replaced by machines.

So, while we owned and benefited from a baby monitor, Mattel's new "smart baby monitor," their digital nanny dubbed Aristotle, leaves me flummoxed. Mattel, and their partner Microsoft, introduced the "Echo for Kids" at this year's Consumer Electronics Show. (Let me be clear, before going further, that, while I work for Microsoft, these are my opinions. I, in no way, represent Microsoft nor am I expressing corporate opinions on such matters.) To calm parents, Mattel and Microsoft assure them that they take security and privacy seriously, following and exceeding the relevant government standards (such as COPPA) with additional protections (since no parent wants their digital nanny to answer an innocent request with pornography).

Sidestepping Aristotle's "perky, 25-year-old kindergarten teacher" female voice, Microsoft and Mattel engineered the nanny to "read kids bedtime stories, help teens with their homework, and auto-soothe babies when they wake up wailing." Three different Artificial Intelligence engines power Aristotle. They can identify each child by voice, instruct them, and change behavior as the child grows from toddler to early teen. Parents can also configure Aristotle; for example, to withhold that bedtime story if a child fails to say "please." As one reviewer wryly comments, "never will you have to touch your child again."

Neil Postman noted that we cannot measure all things or reduce them to numbers without loss (what would it mean, for example, to say I am 31.6 percent less handsome than Bill Gates?). Neither can we reduce all things to automation without twisting the tasks into something else. Automation, even trendy, AI-driven automation, entails algorithms. Algorithms, and this includes those guiding "unsupervised" Machine Learning, encode decisions and perspectives. Someone, somewhere, somehow told the machine that this thing matters and that thing does not. Information does not arise spontaneously from matter. Digitizing divides, into zeros and ones, the flow and flux of the world. Something will get missed. Something will get cut out. Something will be valued over another thing. Neutral software does not exist.

Raising children is not a task we can or should automate. Raising a child entails training them to live fully into that which they are: a person with gifts and abilities. Parents are responsible to inculcate values in their children. Parents, through example and training, teach children how to move through and contribute to society. And, as every parent who regrets uttering an inappropriate phrase knows after the fact, children learn at least as much by observation as by instruction. My wife and I, more than once, wondered whether instruction was a waste of breath and our children only learned by watching.

What would a child learn from a digital, even if AI-empowered, nanny? Certainly not how to be fully human. Certainly not how to behave within society. Certainly not those values and traditions and choices that make a family unique. Handing off our children to these tools reduces, not expands, our humanity. We, and our children, end up as less than we should be.


I cannot predict the future. But, I suspect, Aristotle will eventually go the way of Sony's AI dog, Aibo. Machines can be stunning, helpful tools. But, even "Deep Learning" Artificially Intelligent machines with "convolutional neural networks" are pitiful replacements for human beings. Good technology amplifies the best of us, inhibits our faults, and promotes the flourishing of the planet. Technology that replaces humans, devalues our unique gifts, and spoils where we live is not technology we should pursue.

Monday 16 January 2017

On the historicity of the biblical Moses: The Watchtower Society's commentary.

Moses—Man or Myth?

MOSES was born under the shadow of death. His people were a group of nomadic families who had settled in Egypt with their father Jacob, or Israel, to escape starvation. For decades they had coexisted peacefully with their Egyptian neighbors. But then came an ominous change. A respected historical report says: “There arose over Egypt a new king . . . And he proceeded to say to his people: ‘Look! The people of the sons of Israel are more numerous and mightier than we are. Come on! Let us deal shrewdly with them, for fear they may multiply.’” The plan? To control the Israelite population by making them “slave under tyranny” and then by ordering the Hebrew midwives to kill any male children that they delivered. (Exodus 1:8-10, 13, 14) Because of the courage of their midwives who refused to obey the order, the Israelites prospered nevertheless. Hence, Egypt’s king decreed: “Every newborn son you are to throw into the river Nile.”—Exodus 1:22.

One Israelite couple, Amram and Jochebed, “did not fear the order of the king.” (Hebrews 11:23) Jochebed gave birth to a son who would later be described as “divinely beautiful.” (Acts 7:20) Perhaps they somehow discerned that this child was favored by God. In any event, they refused to give their child up for execution. At the risk of their own lives, they decided to conceal him.


After three months, Moses’ parents could no longer hide him. Running out of options, they took action. Jochebed placed the infant in a papyrus vessel and set him afloat on the Nile River. Unwittingly, she was launching him into history!—Exodus 2:3, 4

Credible Events?

Many scholars today dismiss these events as fiction. “The fact is,” says Christianity Today, “that not one shred of direct archaeological evidence has been found for [the years] the children of Israel sojourned in Egypt.” While direct physical proof may be lacking, there is considerable indirect evidence that the Bible account is credible. In his book Israel in Egypt, Egyptologist James K. Hoffmeier says: “Archaeological data clearly demonstrates  that Egypt was frequented by the peoples of the Levant [countries bordering on the eastern Mediterranean], especially as a result of climatic problems that resulted in drought . . . Thus, for a period roughly from 1800 to 1540 B.C., Egypt was an attractive place for the Semitic-speaking people of western Asia to migrate.”

Furthermore, it has long been acknowledged that the Bible’s description of Egyptian slavery is accurate. The book Moses—A Life reports: “The biblical account of the oppression of the Israelites appears to be corroborated in one often-reproduced tomb painting from ancient Egypt in which the making of mud bricks by a gang of slaves is depicted in explicit detail.”

 The Bible’s description of the tiny ark Jochebed used likewise rings true. The Bible says that it was made of papyrus, which, according to Cook’s Commentary, “was commonly used by the Egyptians for light and swift boats.”


Still, is it not hard to believe that a national leader would order the cold-blooded murder of infants? Scholar George Rawlinson reminds us: “Infanticide . . . has prevailed widely at different times and places, and been regarded as a trivial matter.” Indeed, one need not look far to find equally chilling examples of mass murder in modern times. The Bible account may be disturbing, but it is all too credible.

Moses’ Rescue—A Pagan Legend?

Critics say that Moses’ rescue from the Nile River sounds suspiciously similar to the ancient legend of King Sargon of Akkad—a story that some say predates the story of Moses. It also tells of an infant in a basket who was rescued from a river.

However, history is full of coincidences. And placing an infant in a river may not have been as unusual as it might seem. Observes Biblical Archaeology Review: “We should note that Babylonia and Egypt are both riverine cultures and that putting the baby in a waterproof basket might be a slightly more satisfactory way to dispose of an infant than throwing it on the rubbish heap, which was more usual. . . . The story of the foundling rising to eminence may be a motif of folklore, but that is surely because it is a story that occurs repeatedly in real life.”


In his book Exploring Exodus, Nahum M. Sarna observes that while there are some similarities, the story of Moses’ birth departs from “The Legend of Sargon” in “many significant respects.” Claims that the Bible account was derived from a pagan legend thus ring hollow.

Adopted Into Pharaoh’s Household

The fate of Jochebed’s infant was not left to chance. She “put [the ark] among the reeds by the bank of the river Nile.” This was likely a spot where she hoped it might be discovered. Here Pharaoh’s daughter came to bathe, perhaps regularly.—Exodus 2:2-4.

The tiny ark was quickly spotted. “When [Pharaoh’s daughter] opened it she got to see the child, and here the boy was weeping. At that she felt compassion for him, although she said: ‘This is one of the children of the Hebrews.’” The Egyptian princess thus decided to adopt him. Whatever name his parents had originally called him is long forgotten. Today he is known the world over by the name his adoptive mother gave him—Moses.—Exodus 2:5-10.

Is it not farfetched, though, to believe that an Egyptian princess would take in such a child? No, for Egyptian religion taught that kind deeds were a requisite for entrance into heaven. As for the adoption itself, archaeologist Joyce Tyldesley observes: “Egyptian women achieved parity with Egyptian men. They enjoyed the same legal and economic rights, at least in theory, and . . . women could make adoptions.” The ancient Adoption Papyrus actually documents one Egyptian woman’s adoption of her slaves. As for the hiring of Moses’ mother as a wet nurse, The Anchor Bible Dictionary says: “The payment of Moses’ natural mother to nurse him . . . echoes identical arrangements in Mesopotamian adoption contracts.”

Now that he had been adopted, would Moses’ Hebrew heritage be kept from him as a dark secret? Some Hollywood films have made it appear that way. The Scriptures indicate otherwise. His sister, Miriam, cleverly arranged for Moses to be nursed by his own mother, Jochebed. Surely this godly woman would not have concealed the truth from her son! And since children in ancient times were often breast-fed for several years, Jochebed had ample opportunity to teach Moses about ‘the God of Abraham, Isaac, and Jacob.’ (Exodus 3:6) Such a spiritual foundation served Moses well, for after being handed over to Pharaoh’s daughter, “Moses was instructed in all the wisdom of the Egyptians.” The claim of historian Josephus that Moses rose to the rank of general in a war with Ethiopia cannot be verified. However, the Bible does say that Moses “was powerful in his words and deeds.”—Acts 7:22.


By the age of 40, Moses was likely poised to become a prominent Egyptian leader. Power and wealth could be his if he remained  in Pharaoh’s household. Then an event took place that changed his life.

Exile in Midian

One day Moses “caught sight of a certain Egyptian striking a certain Hebrew of his brothers.” For years, Moses had enjoyed the best of both the Hebrew and Egyptian worlds. But seeing a fellow Israelite beaten—perhaps in a life-threatening manner—moved Moses to make a dramatic choice. (Exodus 2:11) He “refused to be called the son of the daughter of Pharaoh, choosing to be ill-treated with the people of God.”—Hebrews 11:24, 25.

Moses took swift and irrevocable action: “He struck the Egyptian down and hid him in the sand.” (Exodus 2:12) This was not the act of someone “given to sudden outbursts of anger,” as one critic alleged. It was likely an act of faith—albeit misguided—in God’s promise that Israel would be delivered from Egypt. (Genesis 15:13, 14) Perhaps Moses naively believed that his actions would spur his people on to revolt. (Acts 7:25) To his chagrin, though, his fellow Israelites refused to acknowledge his leadership. When news of the killing reached Pharaoh, Moses was forced to flee into exile. He settled in Midian, marrying a woman named Zipporah, the daughter of a nomadic chieftain named Jethro.


For 40 long years, Moses lived as a simple shepherd, his hope of being a deliverer shattered. One day, though, he drove Jethro’s flocks to a spot near Mount Horeb. There, Jehovah’s angel appeared to Moses in a burning bush. Picture the scene: “Bring my people the sons of Israel out of Egypt,” God commands. But the Moses who replies is hesitant, diffident, unsure of himself. “Who am I,” he pleads, “that I should go to Pharaoh and that I have to bring the sons of Israel out of Egypt?” He even reveals a personal flaw that some moviemakers have obscured: He evidently has a speech impediment. How different Moses is from the heroes of ancient myths and legends! His 40 years of shepherding have humbled and mellowed this man. Although Moses is unsure of himself, God is confident that he is suited for leadership!—Exodus 3:1–4:20.

Deliverance From Egypt

Moses leaves Midian and appears before Pharaoh, demanding that God’s people be freed. When the stubborn monarch refuses, ten devastating plagues are unleashed. The tenth plague results in the death of the  firstborn of Egypt, and a broken Pharaoh finally sets the Israelites free.—Exodus, chapters 5-13.

These events are well-known to most readers. But are any of them historical? Some argue that since the Pharaoh is not named, the account must be fiction. However, Hoffmeier, quoted earlier, notes that Egyptian scribes often deliberately omitted the names of Pharaoh’s enemies. He argues: “Surely historians would not dismiss the historicity of Thutmose III’s Megiddo campaign because the names of the kings of Kadesh and Megiddo are not recorded.” Hoffmeier suggests that Pharaoh is unnamed for “good theological reasons.” For one thing, by leaving Pharaoh unnamed, the account draws attention to God, not Pharaoh.

Even so, critics balk at the notion of a large-scale exodus of Jews from Egypt. Scholar Homer W. Smith argued that such a mass movement “would certainly have resounded loudly in Egyptian or Syrian history . . . It is more likely that the legend of the exodus is a garbled and fanciful account of the flight from Egypt to Palestine of a relatively few members.”

 True, no Egyptian record of this event has been found. But the Egyptians were not above altering historical records when the truth proved to be embarrassing or went against their political interests. When Thutmose III came to power, he tried to obliterate the memory of his predecessor, Hatshepsut. Says Egyptologist John Ray: “Her inscriptions were erased, her obelisks surrounded by a wall, and her monuments forgotten. Her name does not appear in later annals.” Similar attempts to alter or conceal embarrassing facts have even taken place in modern times.

As for the lack of archaeological evidence for the wilderness sojourn, we must remember that the Jews were nomads. They built no cities; they planted no crops. Presumably, they left behind little more than footprints. Still, convincing evidence of that sojourn can be found within the Bible itself. Reference is made to it throughout that sacred book. (1 Samuel 4:8; Psalm 78; Psalm 95; Psalm 106; 1 Corinthians 10:1-5) Significantly, Jesus Christ also testified that the wilderness events took place.—John 3:14.


Unquestionably, then, the Bible’s account of Moses is credible, truthful. Even so, he lived a long time ago. What impact can Moses have on your life today?

Who Wrote the “Books of Moses”?

Traditionally, Moses has been credited with being the author of the first five books of the Bible, called the Pentateuch. Moses may have drawn some of his information from earlier historical sources. Many critics believe, though, that Moses did not write the Pentateuch at all. “It is thus clearer than the sun at noonday that the Pentateuch was not written by Moses,” asserted the 17th-century philosopher Spinoza. In the latter half of the 19th century, the German scholar Julius Wellhausen popularized the “documentary” theory—that the books of Moses are an amalgam of the works of several authors or teams of authors.

Wellhausen said that one author consistently used the personal name of God, Jehovah, and is thus called J. Another, dubbed E, called God “Elohim.” Another, P, supposedly wrote the priestly code in Leviticus, and yet another, called D, wrote Deuteronomy. Though some scholars have embraced this theory for decades, the book The Pentateuch, by Joseph Blenkinsopp, calls Wellhausen’s hypothesis a theory “in crisis.”

The book Introduction to the Bible, by John Laux, explains: “The Documentary Theory is built up on assertions which are either arbitrary or absolutely false. . . . If the extreme Documentary Theory were true, the Israelites would have been the victims of a clumsy deception when they permitted the heavy burden of the Law to be imposed upon them. It would have been the greatest hoax ever perpetrated in the history of the world.”

Another argument is that stylistic differences in the Pentateuch are evidence of multiple authors. However, K. A. Kitchen notes in his book Ancient Orient and Old Testament: “Stylistic differences are meaningless, and reflect the differences in detailed subject-matter.” Similar style variations can also be found “in ancient texts whose literary unity is beyond all doubt.”

The argument that the use of different names and titles for God is evidence of multiple authorship is particularly weak. In just one small portion of the book of Genesis, God is called “the Most High God,” “Producer of heaven and earth,” “Sovereign Lord Jehovah,” “God of sight,” “God Almighty,” “God,” “the true God,” and “the Judge of all the earth.” (Genesis 14:18, 19; 15:2; 16:13; 17:1, 3, 18; 18:25) Did different authors write each of these Bible texts? Or what about Genesis 28:13, where the terms “Elohim” (God) and “Jehovah” are used together? Did two authors collaborate to write that one verse?

The weakness of this line of reasoning becomes particularly evident when applied to a contemporary piece of writing. In one recent book about World War II, the chancellor of Germany is termed “Führer,” “Adolf Hitler,” and simply “Hitler” in the course of just a few pages. Would anyone dare claim that this is evidence of three different authors?

Nevertheless, variations on Wellhausen’s theories continue to proliferate. Among them is the theory propounded by two scholars regarding the so-called J author. They not only deny that it was Moses but also proclaim that “J was a woman.”

Yet more on using I.D. to debunk I.D.

Robert Wright Asks: Can Evolution Have a Higher Purpose?
Brian Miller

Writing for the New York Times philosophy forum "The Stone," journalist Robert Wright asks a good question: "Can Evolution Have a 'Higher Purpose'?" He describes an interview with evolutionist William Hamilton, who developed the theory of kin selection. Hamilton postulated that some type of "ultimate good, which is of a religious nature," could exist, and to understand it we have to "look beyond what the evolutionary theory tells us" to some higher source. If so, life could have some transcendent higher purpose.

Hamilton goes on to describe the higher source not as God or any other nonmaterial entity but as aliens who set up earth as a type of zoo. These visitors could have introduced self-replicating molecules, which evolved over time through natural processes. The aliens could also have on rare occasions intervened to prevent such undesirable consequences as humans driving themselves to extinction. Their interventions might even explain religious stories of miracles. Similarly, Richard Dawkins allows for the possibility of design in life, so long as that design was generated by aliens, who themselves were the product of purely materialistic causes. Both scientists, in other words, open the door to considering design by aliens, but not design by any other sort of intelligence.

Wright reassures readers that any understanding of purpose in evolution must remain entirely within a purely materialist framework. He dispels what he considers four common myths:

" To say that there's in some sense a 'higher purpose' means there are 'spooky forces' at work."

"To say that evolution has a purpose is to say that it is driven by something other than natural selection."

"Evolution couldn't have a purpose, because it doesn't have a direction."

"If evolution has a purpose, the purpose must have been imbued by an intelligent being."

This is the standard evolutionary doxology that the appearance of purpose in nature is simply a product of natural selection directing species toward certain outcomes, which only mimic design. For instance, the vertebrate eye appears to have been designed by some intelligence for the purpose of advanced high-resolution vision, but it actually developed through successive evolutionary steps which each provided some immediate benefit. In this familiar view, the steps were not directed toward any end goal. The ultimate form of the eye is simply a happy byproduct of blind mutations and selection.

However, this simple description of evolution producing the appearance of purpose is increasingly questioned by scientists. Even some who are determined to operate within a purely materialistic framework, such as proponents of the Extended Evolutionary Synthesis, doubt the creative power of natural selection to operate as the sole source of innovation in nature.

Still more significant, scientists studying evolutionary algorithms have come to recognize that no search process (e.g., natural selection) is capable of generating a biological novelty (e.g., an eye) unless information is provided about the desired outcome in advance. So, natural selection might be capable of slightly improving a population of moths by making them lighter or darker, or it might be able to increase the size of finch beaks. But it could never make such innovative changes as morphing the arms of a dinosaur into the wings of a bird. To generate the information to find such a target, intelligence is required.

This conclusion is further supported by experimental evidence that even a single typical protein could not come about by natural selection. And even if evolutionary pathways were direct and simple, the required timescales are far greater than what the fossil record would allow. (See here, here, and here.) As a result, the ubiquitous appearance of purpose in nature cannot be explained away by natural processes with occasional tweaking by aliens. Instead, it requires continuous guidance, often dramatic, throughout the history of life.

In addition, the evidence that the laws of physics were fine-tuned for life indicates that the designer, unlike aliens, had to exist outside of our universe. Wright actually addresses this argument with some very creative responses. He describes the attempt by physicist Lee Smolin to apply evolution to entire universes:

Smolin thinks our universe may itself be a product of a kind of evolution: maybe universes can replicate themselves via black holes, so over time -- over a lot of time -- you get universes whose physical laws are more and more conducive to replication.

Several theories like this have been proposed, seeking to explain how a universe-generating mechanism could produce a multiverse, so that a life-permitting universe could come about purely by chance. However, no such scenario escapes the need for a designer, since each requires that the underlying laws and initial conditions be finely tuned for the generator to work properly.

Wright goes on to describe an even more extraordinary theory:

That said, one interesting feature of current discourse is a growing openness among some scientifically minded people to the possibility that our world has a purpose that was imparted by an intelligent being. I'm referring to "simulation" scenarios, which hold that our seemingly tangible world is actually a kind of projection emanating from some sort of mind-blowingly powerful computer; and the history of our universe, including evolution on this planet, is the unfolding of a computer algorithm whose author must be pretty bright.

That's a stretch, but Wright's or Smolin's way of viewing the world demands something like it. As evolutionary biologist Richard Lewontin famously acknowledged, materialism is "absolute, for we cannot allow a Divine Foot in the door."


The honest observation of nature is a constant reminder that an intelligent agent must have directed the formation of the cosmos and the design of life. Materialism, however, is a demanding master, forcing its followers to embrace any theory, regardless of how implausible, in order to deny that this appearance of design and purpose is real.

21st-century alchemists find no place for the facts in their elixirs.

At Denver Museum of Nature & Science, Patron Reports Errors in Displays, Gets Brushed Off
Sarah Chaffee

Misrepresentations of the scientific evidence on evolution are everywhere. Check out the displays at your local science museum, for example, and you can't help tripping over them.


We received a note from a friend of ours who visited the Denver Museum of Nature & Science. In addition to exploring the new robotics exhibit with his grandchildren, Discovery Institute supporter and intelligent design enthusiast Jim Campbell decided to visit the origins-of-life section. Two of the displays, on the formation of cells and the Miller-Urey experiment, were scientifically inaccurate. He sent a letter to the museum, pointing this out. On the "Recipe for Life" display:

...According to the display, the recipe just requires a few ingredients: carbon, sulphur, nitrogen, hydrogen, oxygen, and phosphorus. Then follow these steps:

Mix together in a warm environment,

Dry out occasionally,

Add time and energy, and

Allow to combine in orderly, patterned ways.

That's it! Just mix up a few chemicals, add some time and energy and life magically appears. To make it clear how easy it must have been, your exhibit shows a mixing bowl as though creating life was little more than making a loaf of bread or a pot of chicken soup.

For an informed view on the subject of life's origins, consider what Dr. James Tour has to say about it. Dr. Tour is the T. T. and W. F. Chao Professor of Chemistry at Rice University. He also teaches Computer Science, Materials Science, and Nano-Engineering. Dr. Tour is one of the world's leading experts in synthetic chemistry -- the science of designing complex molecules. These quotes are taken from his Pascal Lecture at the University of Waterloo in 2016:

Abiogenesis is the prebiotic process wherein life, such as a cell, arises from non-living simple organic compounds: carbohydrates, nucleic acids, lipids and proteins (polymers of amino acids). All this is needed before evolution can begin...

(Collective Cluelessness) We have no idea how the molecules that compose living systems could have been devised such that they would work in concert to fulfill biology's functions. We have no idea how the basic set of molecules, carbohydrates, nucleic acids, lipids, and proteins, were made and how they could have coupled in proper sequences, and then transformed into ordered assemblies until there was the construction of a complex biological system, and eventually to that first cell. Nobody has any idea on how this was done when using our commonly understood mechanisms of chemical science. Those that say that they understand are generally wholly uninformed regarding chemical synthesis...

Those that say this is all worked out, they know nothing: nothing about chemical synthesis. Nothing!

(Further Cluelessness) From a synthetic chemical perspective, neither I nor any of my colleagues can fathom a prebiotic molecular route to construction of a complex system. We cannot even figure out the prebiotic routes to the basic building blocks of life: carbohydrates, nucleic acids, lipids, and proteins. Chemists are collectively bewildered. Hence I say that no chemist understands prebiotic synthesis of the requisite building blocks let alone assembly into a complex system.

I've asked all my colleagues, National Academy members, Nobel Prize winners. I sit with them in offices. Nobody understands this. So if your professors say, "It's all worked out" -- your teachers say, "It's all worked out," they don't know what they are talking about. It is not worked out. You cannot just refer this to somebody else. They don't know what they are talking about.

The "Recipe for Life" display is, at best, misleading and, at worst, blatant propaganda. It is an embarrassment to the museum and should be removed.

And on the Miller-Urey exhibit, Mr. Campbell commented:

The second offensive museum display involves the Miller-Urey experiment. The experiment was certainly important and informative at the time it was conducted, although there are now valid questions concerning whether the atmosphere simulated in the experiment was representative of the intended primitive atmosphere. However, the primary issue with this display concerns its caption, "Replicating Life in the Lab?"

The caption, presented in the form of a question to avoid being technically incorrect, is clearly intended to mislead impressionable people into believing that life has been created in a lab. I'm sure that the people responsible for setting up this display know full well that life has not been created in a laboratory -- not even close. Yet the display seems designed to mislead people into believing just the opposite.

Again, this display is not worthy of the museum and the caption should be at least modified to more honestly represent the experiment.

How did the Denver Museum of Nature & Science respond? In a September letter, they noted:

You shared your criticisms on the origins of life section of Prehistoric Journey. I have passed your recommendations to the multi-disciplinary team -- including curators, designers, and educators -- who collectively oversee our Prehistoric Journey exhibition. Due to travel schedules, they are not due to meet for several weeks, but are going to review your input, and will get back to you after that discussion.

Campbell followed up after receiving the letter, and then again a few months later, but has received no response.

It's great to see people using their scientific knowledge to point out flaws in Darwinian dogma. But when it comes to evolution, a museum brushing off a patron sadly doesn't come as a big surprise.

Origin-of-life (OOL) theorists continue to ask the wrong questions.

University of Wisconsin Geoscience Postdoc: Bury Carbon, Get Animals
Evolution News & Views 

Jon Husson, a geoscience postdoc at the University of Wisconsin-Madison, sits by an outcrop of black shale in Nova Scotia, where he gets a bright idea. His insight, according to an announcement from UW-Madison:

For the development of animals, nothing -- with the exception of DNA -- may be more important than oxygen in the atmosphere.

Oxygen enables the chemical reactions that animals use to get energy from stored carbohydrates -- from food. So it may be no coincidence that animals appeared and evolved during the "Cambrian explosion," which coincided with a spike in atmospheric oxygen roughly 500 million years ago.

It was during the Cambrian explosion that most of the current animal designs appeared. [Emphasis added.]

We all know that correlation is not causation. Does Husson assert that oxygen caused animals to appear? Well, he notes the distinction:

"It's a correlation, but our argument is that there are mechanistic connections between geology and the history of atmospheric oxygen," Husson says. "When you store sediment, it contains organic matter that was formed by photosynthesis, which converted carbon dioxide into biomass and released oxygen into the atmosphere. Burial removes the carbon from Earth's surface, preventing it from bonding molecular oxygen pulled from the atmosphere."
So far so good. One can buy the argument up to this point. But what about those animals? Professor Shanan Peters comes forward.

"Burying the sediments that became fossil fuels was the key to advanced animal life on Earth," Peters says, noting that multicellular life is largely a creation of the Cambrian.
A "creation"? But who or what used the key to open a door? Who is the "creator," and how was multicellular life "created"? Shanan holds a trilobite in his hand. Instead of asking those questions, he asks, "Why is there oxygen in the atmosphere?"

It's a fair question, but Tenenbaum, the announcement's author, spoke of "the development of animals." He mentioned the Cambrian explosion. He said animals "appeared and evolved." By implication, he seems to be saying: bury carbon, release oxygen, and...well, what do you know? Animals, created by the Cambrian explosion.


It would be good to hear a little more about the DNA referenced in the first sentence. As far as we know, oxygen does not "create" DNA or the coded information it famously contains.