
Saturday, 23 December 2017

Vox populi, vox Dei? Pros and cons.

In search of high quality ignorance?

Unanswered Questions: New York Times Highlights the Benefits of Teaching "Ignorance" in Science
Sarah Chaffee September 4, 2015 12:11 PM 

Concerned that his students thought they now understood the brain after studying the course's 1400+ page textbook, Dr. Stuart Firestein, neuroscientist and chairman of the Department of Biological Sciences at Columbia University, wrote Ignorance: How it Drives Science. He was afraid his students might come away with the idea that science has all the answers. His book takes a more realistic view, describing scientific discovery as "feeling around in dark rooms, bumping into unidentifiable things, looking for barely perceptible phantoms."

In a recent op-ed in the New York Times, Jamie Holmes, author of the forthcoming book Nonsense: The Power of Not Knowing, shared Firestein's story to emphasize the role of ignorance in education. He explains that ignorance can catalyze curiosity and prompt questions in fields from science to business to education:

As [Firestein] argued in his 2012 book "Ignorance: How It Drives Science," many scientific facts simply aren't solid and immutable, but are instead destined to be vigorously challenged and revised by successive generations. ...
Presenting ignorance as less extensive than it is, knowledge as more solid and more stable, and discovery as neater also leads students to misunderstand the interplay between answers and questions.

People tend to think of not knowing as something to be wiped out or overcome, as if ignorance were simply the absence of knowledge. But answers don't merely resolve questions; they provoke new ones...

But giving due emphasis to unknowns, highlighting case studies that illustrate the fertile interplay between questions and answers, and exploring the psychology of ambiguity are essential. Educators should also devote time to the relationship between ignorance and creativity and the strategic manufacturing of uncertainty.

... Our students will be more curious -- and more intelligently so -- if, in addition to facts, they were equipped with theories of ignorance as well as theories of knowledge.

It's encouraging to find a discussion like this in what might seem an unlikely place. At Discovery Institute, we support critical analysis of ideas about evolution and the origin of life precisely because those are issues where many answers remain as yet unknown. Teaching students about issues where there are more questions than answers fosters high-level learning.

The science of the past two centuries has dramatically expanded our knowledge, from the invention of computers and the Internet to making open-heart surgery possible. But there are still many mysteries, and not just at the margins. Teaching only about our positive scientific knowledge is not enough. Quality science education informs students about areas of certainty and about those where inquiry is ongoing.

Alluding to Thomas Kuhn, Holmes notes that acknowledging ignorance causes us to confront our preconceptions. In The Structure of Scientific Revolutions, Kuhn stated that when faced with an anomaly, a theory's defenders "will devise numerous articulations and ad hoc modifications of their theory in order to eliminate any apparent conflict." But eventually, given enough anomalies, the old theory will be replaced. Confronting unknowns is an essential part of scientific progress.

Chemical evolution -- the development of the first cell -- is clouded with mystery. 2007 Priestley Medalist George M. Whitesides wrote, "Most chemists believe, as do I, that life emerged spontaneously from mixtures of molecules in the prebiotic Earth. How? I have no idea." Similarly, leading molecular biologist Eugene Koonin has noted:

The origin of life is one of the hardest problems in all of science, but it is also one of the most important. Origin-of-life research has evolved into a lively, inter-disciplinary field, but other scientists often view it with skepticism and even derision. This attitude is understandable and, in a sense, perhaps justified, given the "dirty" rarely mentioned secret: Despite many interesting results to its credit, when judged by the straightforward criterion of reaching (or even approaching) the ultimate goal, the origin-of-life field is a failure -- we still do not have even a plausible coherent model, let alone a validated scenario, for the emergence of life on Earth. Certainly, this is due not to a lack of experimental and theoretical effort, but to the extraordinary intrinsic difficulty and complexity of the problem. A succession of exceedingly unlikely steps is essential for the origin of life, from the synthesis and accumulation of nucleotides to the origin of translation; through the multiplication of probabilities, these make the final outcome seem almost like a miracle.
Koonin acknowledges that some progress has been made, but falls back on the controversial multiverse theory to explain how life sprang into existence against all odds.

The enigma of biological origins offers an ideal opportunity for students to learn about a field of persistent scientific uncertainty, instead of simply being spoon-fed "facts." Our Science Education Policy states:

Instead of mandating intelligent design, Discovery Institute seeks to increase the coverage of evolution in textbooks. It believes that evolution should be fully and completely presented to students, and they should learn more about evolutionary theory, including its unresolved issues. In other words, evolution should be taught as a scientific theory that is open to critical scrutiny, not as a sacred dogma that can't be questioned.
Indeed, the sense of mystery has driven some of the very greatest scientists. Isaac Newton put it well when he said, "I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me."

Learning based on active engagement and critical thinking promotes understanding and excitement. As Holmes writes, "Questions don't give way to answers so much as the two proliferate together. Answers breed questions. Curiosity isn't merely a static disposition but rather a passion of the mind that is ceaselessly earned and nurtured."

Firestein now teaches a class on scientific ignorance. Hoping to acquaint his students with the world of inquiry, he invites scientists from diverse fields to come and lecture -- not about their discoveries, but about what they don't know. Our stance on academic freedom merely recognizes that his beneficial pedagogical philosophy should extend to the teaching of evolution, no less than any other area of study.

The alliance is dead? Pros and cons.

Vladimir Putin has made Russia great again? Pros and cons.

On the making of a bomb-thrower II

Why the designer is King of the mountain and not God of the gaps.

Stephen Meyer Debunks the “God of the Gaps” Objection
David Klinghoffer | @d_klinghoffer


 A vacuous yet often heard objection to intelligent design decries ID as a “God of the gaps” argument. Here, Stephen Meyer requires less than three minutes to show that the complaint rests on a failure to understand a basic feature of the theory of ID.
Yet the point comes up again and again. The group BioLogos, for one, in promoting theistic evolution, starts this way in answer to what they call a “common question,” “Are gaps in scientific knowledge evidence for God?”


God-of-the-gaps arguments use gaps in scientific explanation as indicators, or even proof, of God’s action and therefore of God’s existence. Such arguments propose divine acts in place of natural, scientific causes for phenomena that science cannot yet explain. The assumption is that if science cannot explain how something happened, then God must be the explanation. But the danger of using a God-of-the-gaps argument for the action or existence of God is that it lacks the foresight of future scientific discoveries. With the continuing advancement of science, God-of-the-gaps explanations often get replaced by natural mechanisms. Therefore, when such arguments are used as apologetic tools, scientific research can unnecessarily be placed at odds with belief in God.1 The recent Intelligent Design (ID) movement highlights this problem. 
No, that is wrong, as Meyer explains. It’s a topic elaborated, among many others, in the vast yet sprightly new volume, Theistic Evolution: A Scientific, Philosophical, and Theological Critique, of which Dr. Meyer served as one of the editors.

May I add a further observation? If you don’t mind, ID is not an “apologetic tool,” as BioLogos puts it. It’s not a “tool” at all, except in the scientific sense, as a heuristic, a methodology for getting at the truth about life and about nature. It’s not a “tool” to use on people, to keep the kids from getting rowdy and disbelieving what their teacher says in class. This misunderstanding is common to theistic and atheistic evolutionists: they think too much in terms of winning recruits or keeping troops in line, rather than finding out what’s true, wherever the quest may take you.

Please, refrain from using a “tool” on me. I don’t care for that approach in a religious or a scientific context, and I can’t help but think many others, of whatever spiritual community or of none, must likewise find it patronizing.

It’s the difference between being treated as a child and being treated as an adult. That may be ID’s greatest strength — it speaks to us as adults — and one of the biggest turnoffs of theistic evolution.

On Darwinism and the magic kingdom.

Listen: Walt Disney on Evolution…Yes, Evolution
David Klinghoffer | @d_klinghoffer

For many of us, encounters with the imagination of Walt Disney were key influences in childhood, and beyond. When I was growing up in Southern California, visits to Disneyland were a touchstone ritual for me — and so too for our colleague John West, the CSC’s associate director. In a fascinating and, frankly, thoroughly charming new episode of ID the Future, he recalls a first visit at the age of five, and an ongoing fascination thereafter.


Part of the interest here lies in the less-than-obvious question of how Disney shaped young people’s understanding of evolution. Evolution? Yes, that surprised me too when I first heard it. But consider. Dr. West examines theme park and World’s Fair attractions and, above all, the enigmatic Fantasia with its iconic sequence “The Rite of Spring.” The author of Walt Disney and Live Action, among his other books, West shows that Walt Disney had not only a long-standing curiosity about evolution but some startlingly subtle, even contemporary points to make about it. You’ll enjoy listening.

Sunday, 17 December 2017

Using design to disprove design?

The Origin of Life: Dangers of Taking Research Claims at Face Value
Brian Miller

In an article here yesterday, I wrote about philosopher Vincent Torley’s critique of my posts related to the origin of life, and I corrected his errors on thermodynamics. Today, I will correct the errors related to the state of origins research. As a general overview, origin-of-life research falls into two categories. The first is experiments that attempt to accurately model the conditions on the early Earth. The classic example is the Stanley Miller experiment, which started with a combination of reducing gases (methane, ammonia, and hydrogen) then believed to have existed, and applied electrical discharges to the mixture. The resulting reactions produced several amino acids, a result heralded as a major breakthrough.

Unfortunately, scientists later recognized that the early atmosphere was likely not so reducing. Instead, it contained a different combination of gases, including carbon dioxide. All subsequent experiments conducted with more realistic starting ingredients failed to produce the building blocks of life (amino acids, carbohydrates, nucleotides, and lipids) in significant quantities. An additional challenge for all such experiments, including Miller’s, was that they produced other byproducts that would have caused deleterious cross-reactions. Such conditions would have prevented any subsequent stages leading to life. All roads led to dead ends.

The consistent failure of realistic experiments led to a second class, which does not attempt to model actual conditions on the early Earth. Instead, these experiments follow what is termed prebiotic synthesis. Origins expert Robert Shapiro outlined the typical process used for RNA in his analysis of origin-of-life research. Such experiments involve a long series of highly orchestrated steps, including purifying desired products, removing unwanted byproducts, changing physical and chemical conditions, adding unrealistically high concentrations of assisting substances, and other interventions to ensure that the target molecules are achieved.

Attempting to relate such research to actual events on the early Earth leads to an almost comical series of dozens of highly improbable events. Various proposed origins scenarios over the years have involved meteorite showers, volcanoes, poisonous gas, and other phenomena, coupled to the precise transportation of lucky molecules through a series of subsequent environments while always passing through the perfect intermediate conditions. Torley actually describes just such a fanciful scenario proposed by Sutherland. As an amusing side note, a friend reviewed origins research, and she was not sure if she was reading about scientific theories or the synopsis of the next Michael Bay natural-disaster movie. Ironically, such synthesis experiments actually bolster the design argument by demonstrating that the origin of the building blocks of life and their subsequent assembly require substantial intelligent direction.

My previous article described how two of the major obstacles to the origin of life are overcoming free-energy barriers and producing the fantastically improbable configurations of atoms associated with life. The synthetic experiments bypass these challenges through intelligent intervention. As an illustration, the origin of complex molecules such as RNA and lipids must start with high free-energy solutions of reactants. However, the abundance of such sets of molecules under natural conditions drops exponentially with their free energy. Researchers overcome this challenge by starting with highly concentrated solutions of the ideal combination of pure chemicals. Highly concentrating the chemicals artificially increases their effective free energies, so reactions are driven in the desired direction.
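For readers who want the textbook bookkeeping behind that statement, here is a rough sketch (standard equilibrium thermodynamics, nothing specific to these particular experiments): the effective free energy of each species rises with its activity (roughly, its concentration), and the free-energy change of a reaction depends on the reaction quotient Q.

$$ \mu_i = \mu_i^{\circ} + RT\,\ln a_i, \qquad \Delta G_{\mathrm{rxn}} = \Delta G^{\circ} + RT\,\ln Q $$

Concentrating the reactants lowers Q, making the RT ln Q term more negative, so a reaction that would stall at realistic dilutions can be pushed forward in the laboratory; at prebiotic concentrations the same term pushes the other way.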

In reality, many of the proposed starting molecules for origins theories would have quickly reacted on the early Earth with other molecules in the environment, preventing substantial buildup (see The Mystery of Life’s Origin, Ch. 4). This challenge also holds true for the origination of any autocatalytic system of reactions, which is another essential component of life’s origin. The dilemma is similar to that of an entrepreneur who wishes to start a business to generate a profit, but starting it requires a million dollars of initial investment. Unfortunately, the entrepreneur is destitute and has no credit for borrowing the needed capital. As a result, he has no way to take even the first step.

The configurational challenge relates to the fact that vast numbers of chemical reactions could take place on the early Earth. However, life’s origin requires that only specific ones proceed while other, far more likely ones are blocked. This hurdle relates both to the origin of the building blocks and to the origin of cellular metabolism. In addition, the atoms in large molecules can take on numerous configurations, and the right ones are exceptionally unlikely. Shapiro described how the atoms in RNA could form hundreds of thousands to millions of other stable organic molecules. Researchers overcome this challenge by forcing the atoms into the desired arrangements through tightly controlled reaction steps. Such constraining of outcomes parallels the role of information in constraining messages in information theory. The relationship between information and precise causal control in biology was made explicit in the talk by Paul Griffiths at the Royal Society meeting on New Trends in Evolutionary Biology.
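As a rough illustration of that parallel (my own back-of-the-envelope framing, not a calculation taken from the sources cited): if N outcomes are possible a priori and the experimenter's interventions restrict the system to a functional subset of n of them, the constraint supplied corresponds to

$$ I = \log_2 \frac{N}{n} \ \text{bits}. $$

Narrowing, say, a million equally likely products down to a single target corresponds to about log2(10^6), or roughly 20 bits, contributed by the experimenter's choices rather than by the chemistry itself.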

To summarize, researchers have shown how the origin of life might proceed through intelligent design, not blind processes. Shapiro illustrates this point beautifully in analyzing the experiments of John Sutherland, but his comments relate to all such experiments.

Reviewing Sutherland’s proposed route, Shapiro noted that it resembled a golfer, having played an 18-hole course, claiming that he had shown that the golf ball could have, through some combination of wind, rain, heating, cooling, dehydration, and ultraviolet irradiation, played itself around the course without the golfer’s presence.

In his article, Torley references several prebiotic synthesis experiments, but he fails to appreciate their irrelevance to the origins problem for the reasons outlined above. For instance, he describes how Sutherland and other researchers used ultraviolet light to help promote reactions leading to life. What Torley missed was that these experiments used a very specific wavelength of light (e.g., 240 nanometers) at the ideal intensity for the optimal amount of time to drive the desired reactions. If the experiments had used light mimicking the sunlight hitting the early Earth, they would have failed, since other wavelengths would have destroyed the target molecules. The difference between the use of light in the experiments and the actual sun parallels the difference between the fire from a blowtorch used by a skilled craftsman and an open fire burning down a building.

Torley also describes how different researchers were able to drive key reactions even in the presence of contaminants. For instance, Sutherland included a phosphate at the beginning of his experiments designed to create nucleotides. Similarly, Jack Szostak’s group created vesicles (containers) out of two fatty acids which could house an RNA enzyme (ribozyme), and he added Mg2+, which under other conditions would have prevented the vesicles from forming. However, the relevance of these experiments was greatly exaggerated.

The use of such terms as “contaminant” and “messy” is highly misleading. Phosphate is an essential component of the target nucleotide molecules, and Mg2+ was essential for activating the ribozymes. They were able to include these molecules because the experiments were meticulously designed to ensure they would produce the desired outcomes. If molecules were added which would have been abundant on the early Earth (true contaminants), the experiments would have failed. As an analogy, the researchers resemble car owners boasting about how their car engines could function even in the presence of such “contaminants” as gasoline and motor oil. However, if sand and glue were added, the engines would have fared far less well.

Torley mentions one additional class of studies, which use simulations to attempt to address origin-of-life challenges. Specifically, he references Nigel Goldenfeld’s research on the homochirality problem — many building blocks of life can come in either a right-handed or a left-handed form, but life requires only one handedness (homochirality). The results of simulation experiments are generally treated with great caution, since such simulations can be designed to model any imaginable conditions and to proceed according to any desired rules.

As a case in point, Goldenfeld’s study is based on an abstract mathematical model and numerical simulations that center on an achiral molecule (one identical to its mirror image) interacting with the right- and left-handed versions (enantiomers) of a chiral molecule to yield another copy of the latter. For instance, the “autocatalytic” reaction could start with one left-handed amino acid and end with two left-handed amino acids. The dynamics of the simulated reactions were set so as eventually to lead to a pure mixture of one enantiomer.
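To make the kind of dynamics being described concrete, here is a minimal sketch, in Python, of a Frank-type autocatalysis-with-mutual-antagonism model. It is not Goldenfeld's actual model; the function name simulate, the rate constants k_auto and k_antag, the constant achiral feedstock A, and the tiny initial bias toward the left-handed form are all arbitrary assumptions of my own, chosen simply to show how such models amplify a small initial excess.

# Illustrative sketch only: a minimal Frank-type model of chiral amplification
# (autocatalysis plus mutual antagonism). NOT Goldenfeld's actual model; the
# rate constants, feedstock level, and initial enantiomeric bias are arbitrary
# assumptions chosen to show how a tiny excess of one enantiomer grows.

def simulate(L=1.001, D=1.000, A=1.0, k_auto=1.0, k_antag=1.0,
             dt=0.001, steps=10000):
    """Euler-integrate dL/dt = k_auto*A*L - k_antag*L*D, and the symmetric
    equation for D. A is an achiral feedstock held constant; autocatalysis
    copies each enantiomer from A, while mutual antagonism removes L and D
    in pairs, so whichever enantiomer starts ahead wins."""
    for _ in range(steps):
        dL = (k_auto * A * L - k_antag * L * D) * dt
        dD = (k_auto * A * D - k_antag * L * D) * dt
        L, D = L + dL, D + dD
    return L, D

if __name__ == "__main__":
    L, D = simulate()
    ee = (L - D) / (L + D)  # enantiomeric excess: 0 = racemic, 1 = pure L
    print(f"L = {L:.4g}, D = {D:.4g}, enantiomeric excess = {ee:.4f}")

Run as a script, it reports an enantiomeric excess close to 1, that is, near-homochirality. The point made below is that this amplification comes entirely from the reaction rules the modeler assumes, not from any demonstrated prebiotic chemistry.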


The main challenge with these results is that the underlying model is completely unrealistic. No chiral building block of life (e.g. right-handed ribose) has been shown to interact with any substance to self-replicate. On the contrary, in all realistic environments mixtures with a bias of one enantiomer tend toward mixtures of equal percentages of both left-handed and right-handed versions. Goldenfeld “solved” the homochirality problem by creating an artificial world that eliminated all real-world obstacles. All simulations that purport to be breakthroughs in origins problems follow this same pattern. Conditions are created that remove the numerous practical challenges, and the underlying models are biased toward achieving the desired results.

The skilled trades; still the smart choice. III

Saturday, 16 December 2017

On Christ's loyal and wise steward: The Watchtower Society's commentary.

FAITHFUL AND DISCREET SLAVE

When answering the apostles’ question concerning his future presence and the conclusion of the existing system of things, Jesus Christ included a parable, or illustration, dealing with a “faithful and discreet slave.” The faithful slave’s master appointed him over his domestics, or household servants, to provide them their food. If approved at his master’s coming (evidently from some trip), the slave would be rewarded by being placed over all the master’s belongings.—Mt 24:3, 45-51.

In the parallel illustration at Luke 12:42-48, the slave is called a steward, that is, a house manager or administrator, one placed over servants, though he is himself a servant. Such a position was often filled in ancient times by a faithful slave. (Compare Ge 24:2; also the case of Joseph at Ge 39:1-6.) In Jesus’ illustration the steward is first assigned only to the supervision and timely dispensation of the food supplies to the master’s body of attendants, or servants, and later, because of his faithful and discreet handling of this ministry, his assignment is widened out to embrace supervision of all the master’s holdings. Regarding the identification of the “master” (Gr., kyʹri·os, also rendered “lord”), Jesus had already shown that he himself occupied such a position toward his disciples, and they addressed him as such on occasion. (Mt 10:24, 25; 18:21; 24:42; Joh 13:6, 13) The question remains concerning the application of the figure of the faithful and discreet slave, or steward, and what his dispensing food to the domestics represents.

“Slave” is in the singular. This, however, does not require that the “slave” prefigure only one particular person who would be so privileged. The Scriptures contain examples of the use of a singular noun to refer to a collective group, such as when Jehovah addressed the collective group of the Israelite nation and told them: “You are my witnesses [plural], . . . even my servant [singular] whom I have chosen.” (Isa 43:10) The “antichrist” is shown to be a collective group made up of individual antichrists. (1Jo 2:18; 2Jo 7) Similarly, the “slave” is composite. It was to be appointed in the time of the end as a channel to give out spiritual “food at the proper time.” (Mt 24:3, 45; Lu 12:42) In the first century, Jesus set a pattern for how spiritual food would be dispensed in the Christian congregation. Just as he had distributed literal food to the crowds through the hands of a few disciples, spiritual food was to be provided through the hands of a few. (Mt 14:19; Mk 6:41; Lu 9:16) Jesus trained the apostles for the role they would have after Pentecost 33 C.E. as a channel in dispensing spiritual food. They were later joined by other elders to serve as a governing body in order to settle issues and to direct the preaching and teaching of the Kingdom good news. (Ac 2:42; 8:14; 15:1, 2, 6-29) After the death of the apostles, a great apostasy set in. But in the time of the end—in keeping with the pattern he set in the first century of feeding many through the hands of a few—Jesus selected a small group of spirit-anointed men to serve as “the faithful and discreet slave,” to prepare and dispense spiritual food during his presence.


The domestics are all those who belong to the Christian congregation, both the anointed and the “other sheep,” who are fed spiritual food. (Joh 10:16) This includes the individual members making up “the faithful and discreet slave,” since they too are recipients of the food dispensed. Those who make up the faithful slave will receive expanded responsibility if they are found faithful at the master’s promised coming. When they receive their heavenly reward and become corulers with Christ, he will appoint them over “all his belongings.” Along with the rest of the 144,000, they will share Christ’s vast heavenly authority.—Mt 24:46, 47; Lu 12:43, 44.

The left is right? Pros and cons.

A clash of Titans. LXV

File under "Well said" LVII

That it is better 100 guilty Persons should escape than that one innocent Person should suffer, is a Maxim that has been long and generally approved.
Benjamin Franklin.

It's official, Wikipedia has become the Borg.

Wikipedia Co-Founder Blasts “Appallingly Biased” Wikipedia Entry on Intelligent Design
David Klinghoffer | @d_klinghoffer

When it comes to intelligent design, Wikipedia and its axe-grinding editors are ridiculously biased and unfair. And guess what? Even Wikipedia co-founder Larry Sanger agrees. He wrote as much last week on the Talk page for the Wiki article on ID, under the heading, “My $0.02 on the issue of bias”:

As the originator of and the first person to elaborate Wikipedia’s neutrality policy, and as an agnostic who believes intelligent design to be completely wrong, I just have to say that this article is appallingly biased. It simply cannot be defended as neutral. If you want to understand why, read this. I’m not here to argue the point, as I completely despair of persuading Wikipedians of the error of their ways. I’m just officially registering my protest. —Larry Sanger (talk)  05:30, 8 December 2017 (UTC)

A philosophy PhD, Dr. Sanger worked with Jimmy Wales to found Wikipedia in 2001. He is a self-described “zealot for neutrality,” and reasonably concludes that Wikipedia’s content on intelligent design is anything but neutral. This is the man who came up with the name “Wikipedia.” He further introduces himself on his Talk page:

I’m no longer associated with Wikipedia, which I co-founded. (I named it, crafted much of the policy that now guides the project, and led the project for its first year. As Jimmy Wales declared on March 25, 2002, a week before I resigned, I was “the final arbiter of what the consensus is” on Wikipedia.)

A thoughtful reader discovered Sanger’s candid comment after he (the reader) sought to edit the entry on ID. He says he corrected the absurdly biased opening sentence, only to find his edits almost instantly reversed, “within one minute.” The first sentence of the entry reads:

Intelligent design (ID) is a religious argument for the existence of God, presented by its proponents as “an evidence-based scientific theory about life’s origins”,[1][2] though it has been found to be pseudoscience.[3][4][5]

This matters for an obvious reason: countless people curious about ID receive their introduction to the subject via a Web search that starts, thanks to Google, with a visit to the Wikipedia article. Many will stop right there. Many science reporters and others in the media — heck, many professional scientists — seem to have informed themselves on the topic by going no further than Wikipedia. You don’t have to be a neutrality “zealot” to understand that evidence of design in nature (not the “existence of God”) poses a question of huge, urgent interest, that serious scientific (not religious, or pseudoscientific) arguments are made for ID, and that it does a terrible disservice to public awareness to so grossly mislead readers. (And not only readers. Don’t forget anyone who uses Amazon’s Alexa.) That is the case even if ID is ultimately wrong, or “completely wrong,” as Sanger puts it.

In a long and carefully argued essay, “Why Neutrality?”, he laments, “There’s a great latent demand for neutral content, and the demand is unmet.” And that is no doubt true. However, at Wikipedia, a masked mob of pseudonymous trolls has taken over, and the public’s “latent demand” is permanently blocked from being satisfied. As I’ve pointed out, many editors hardly bother to hide their ideological bias.

An interesting article at the news site Vice gives the background on Sanger’s involvement with Wikipedia.

It was Sanger, then, who synthesized emerging “wiki” technology with Nupedia’s original vision. Sanger came up with the name “Wikipedia,” wrote its founding documents, and spent the next 14 months as the site’s sole paid editor and philosophical leader. But as word about the project spread throughout the web, Wikipedia and Sanger were inundated with new users, some of them trolls, who plagued Sanger with “edit wars” and resisted input from experts. In 2002, Sanger left Wikipedia and became an outspoken critic of the site, criticizing its quality and the disregard many users displayed for experts.

Indeed. We’ve already recounted how distinguished paleo-entomologist Günter Bechly, after coming out for intelligent design, found his entry deleted. This followed a surreal online editorial discussion led by an editor going by the pseudonym Jo-Jo Eumerus. Jo-Jo is a self-described 23-year-old “boy” from Switzerland with a dual online identity as a 500-year-old wizard. Under this other identity, the wizard Septimus Heap, Jo-Jo explains of himself that, having been “diagnosed with Asperger syndrome,” he “sometimes [has] problems with society due to this.” Certainly he had a problem with Günter Bechly. The editors claimed the move to delete the entry was the result of their sudden realization that Bechly isn’t “notable” enough for Wikipedia. The notability argument is a joke, and even Darwinists conceded that Bechly was deleted for his support of ID.

It was Jo-Jo who made the final decision to permanently pull the plug on Dr. Bechly’s entry. The disparity in expertise — wizard versus paleo-entomologist — is blindingly obvious. Bechly changed his views on evolution and ID while serving as a curator at the State Museum of Natural History in Stuttgart, Germany, where he amassed an extremely impressive scholarly record studying the evolution of dragonflies over tens of millions of years. As Jo-Jo says of his own daily activities, “Nowadays, I mostly spend my time with World Building projects and seeing a bit forward with life.”


For more on Bechly’s turn to ID, see here:



Another ID scholar, Walter Bradley at Baylor University, suffered comparable treatment at the hands of the fantastical pseudonyms editing Wikipedia. Manhandled by entities including Freakshownerd, Apollo The Logician, and Theroadislong, Dr. Bradley was not erased, but he did see his entry disemboweled, reduced to nearly nothing.

You can’t fight back because people like Jo-Jo, Freakshownerd, etc. seem to have unlimited time at their disposal to revert edits they don’t like, over and over and over, at lightning speed. The sociology is interesting, but so is the psychology. As Larry Sanger recounts his experiences, Wikipedia from the start attracted not only trolls as editors, but trolls with, in some cases, mental problems.

There was one guy called 24, but I suspect that he was literally insane. He wrote some really wacked-out stuff. And there’s another one called LIR. That person was… abrasive is not the right word, and [them] being confrontational wasn’t the problem. It was them doing so needlessly, for no good purpose other than to stir the pot. Because [Wikipedia] was wide open, and anybody could participate, there were people who would spend a lot of their time wasting everyone else’s time. I doubt that many of those people are just “bad,” they might just be abrasive, confused… “mentally unhinged,” in a few cases.

Having all that leisure to volunteer in “editing” online encyclopedia articles might correlate with being retired, or a dedicated hobbyist, or it could correlate with being on the margins, someone with “problems with society,” “confused,” “wacked-out,” “unhinged,” even “insane.” I apologize if this sounds unkind. But high-functioning people — employed or with other serious responsibilities, with friends, families, community commitments, and more — are not ideally suited to be Wikipedia editors or to engage in the endless editing wars that go along with it.

And this, again, is how a large segment of the public is introduced to the subject of intelligent design. The page received 30,494 views in the past 30 days alone. It’s not only the ID entry and related articles that are twisted by bias and inaccuracy, of course. But design, as I said, poses an ultimate question that scientists and philosophers have been discussing for millennia, and will go on discussing. That is not true of many other controversial subjects on Wikipedia.

It’s a real shame. As Larry Sanger says, we “despair of persuading Wikipedians of the error of their ways.” Sadly, there’s not much you can do about it — other than to warn your friends, family, and other contacts to be wary and  consult other sources. And that I certainly urge you to do.

We know more, we understand less?

Are Scientists Smarter Now, or Dumber?
David Klinghoffer | @d_klinghoffer

A conversation with a friend of our oldest son elicited, if I understood correctly, the observation that people, including students, know more and are better educated than in previous generations, thanks to things like the Internet. This is a very bright and curious young man, but I was dumbfounded by his statement.

He pointed to the fact that we, as a culture, “know more” than ever before. That is true in a limited sense, but acquisition of data is a long way from having the wisdom to understand and interpret it, which I think is what we mean when we talk about the kind of smarts that really matters. It’s what you do with what you know.

On the gathering specifically of scientific knowledge, our paleontologist colleague Günter Bechly nails it in a comment on Facebook:

My theory is: Scientists nowadays are far dumber than scientists centuries ago, which is a consequence of over-specialization and lack of philosophical education in natural science university curricula. The only reason why we know so much more than centuries ago is time, much larger number of scientists, and much more resources pumped into science, which resulted in an explosion of knowledge acquired by dumber scientists.

This might explain the unthinking dismissal of an idea like intelligent design not just by media people with a tendency to shallowness, but by scientists. I mentioned here the other day that even professionals in the sciences often seem to have gleaned the little they understand about ID from skimming the main Wikipedia article.


ID is a quintessential multidisciplinary field of study, asking us to consider not only biology but chemistry, cosmology, philosophy, and more. As Dr. Bechly points out, the trend toward ever greater specialization, combined with philosophical illiteracy, goes a long way toward explaining the condition of our “dumb” scientists.

Why OOL science remains design opponents' weakest point II

The Origin of Life: The Information Challenge
Brian Miller

I previously responded to an article by Vincent Torley on the origin of life by correcting the errors in his understanding of thermodynamics and in the state of origins research. Today, I will correct mistakes related to information theory, and I will identify the fundamentally different approaches by ID advocates and critics toward assessing evidence.

Semantic Information
The first issue relates to the comparison of the sequencing of amino acids in proteins to the letters in a sentence. This analogy is generally disliked by design critics since it so clearly reveals the powerful evidence for intelligence from the information contained in life. It also helps lay audiences see past the technobabble and misdirection often used to mislead the public, albeit unintentionally.

Torley’s criticism centers on the claim that sequences of amino acids in life demonstrate functional but not semantic information.

Dr. Miller, like Dr. Axe, is confusing functional information (which is found in living things) with the semantic information found in a message…functional information is much easier to generate than semantic information, because it doesn’t have to form words, conform to the rules of syntax, or make sense at the semantic level.

Unfortunately, this assertion completely contradicts the opinion of experts in the field such as Shen and Tuszynski.

Protein primary structures have the same language structure as human languages, especially English, French, and German. They are both composed of several basic symbols as building blocks. For example, English is composed of 26 letters, while proteins are composed of 20 common amino acids. A protein sequence can be considered to represent a sentence or a paragraph, and the set of all proteins can be considered to represent the whole language. Therefore, the semantic structure is similar to a language structure which goes from “letters” to “words,” then to “sentences,” to “chapters,” “books,” and finally to a “language library.”


The goals of semantic analysis for protein primary structure and that for human languages are basically the same. That is, to find the basic words they are composed of, the meanings of these words and the role they play in the whole language system. It then goes on to the analysis of the structure of grammar, syntax and semantics.

In the same way letters combine to form meaningful sentences, the amino acids in proteins form sequences that cause chains to fold into specific 3D shapes which achieve such functional goals as forming the machinery of a cell or driving chemical reactions. And sentences combine to form a book in the same way multiple proteins work in concert to form the highly integrated cellular structures and to maintain the cellular metabolism. The comparison is nearly exact.

Sequence Rarity
A second issue Torley raises is the question of the rarity of functional protein sequences. In particular, he argues that the research of Doug Axe, which demonstrated extreme rarity, was invalid. Criticisms of Axe’s work have been addressed in the past, but the probability challenge is so great that such a response is unnecessary. The most essential early enzymes would have needed to connect the breakdown of some high-energy molecule, such as ATP, with a metabolic reaction that moves energetically uphill. One experiment examined the likelihood of a random amino acid sequence binding to ATP, and the results indicated that the chance was on the order of one in a trillion. Already, the odds against finding such a functional sequence on the early Earth strain credibility. However, a useful protein would have required at least one other binding site, which alone squares the improbability, and an active site which properly oriented target molecules and created the right chemical environment to drive and interconnect two reactions — the breakdown of ATP and the target metabolic one. The odds of a random sequence stumbling on such an enzyme would have been far worse than 1 in a trillion trillion, clearly beyond the reach of chance.
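Spelling out the arithmetic, and assuming (as the argument implicitly does) that the two binding sites must be found independently:

$$ p_{\text{one site}} \sim 10^{-12}, \qquad p_{\text{two sites}} \sim \left(10^{-12}\right)^{2} = 10^{-24}, $$

which is the "one in a trillion trillion" figure; the further requirement of a properly configured active site only drives the probability lower.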

The challenge for nucleotide-based enzymes (ribozymes) is equally daunting. Stumbling across a random sequence that could perform even one of the most basic reactions also requires a search library in the trillions. So any multistage process would also be beyond the reach of chance. A glimmer of hope was offered by Jack Szostak when he published a paper that purported to show RNA could self-replicate without the aid of any enzyme. Unaided self-replication would have greatly aided the search process. However, he later retracted the paper after the results could not be reproduced.

The problem has since been shown to be even worse. In particular, Eugene Koonin determined that the probability of an RNA-to-protein translation system forming through random arrangements of nucleotides is less than 1 in 10^1000, which would equate to an impossibility in our universe. His solution to this mathematical nightmare was to propose a probabilistic deus ex machina. He actually argued for the existence of a multiverse, which would contain a virtually infinite number of Earth-like planets. We just happen to reside in a lucky universe on the right planet where life won a vast series of lotteries.
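To give a sense of scale (a rough, deliberately generous bound of my own, not a figure taken from Koonin): suppose every one of the roughly 10^80 atoms in the observable universe attempted a new chemical configuration every Planck time (about 10^-43 seconds) for the entire age of the universe (about 10^17 seconds). The total number of trials would still be only about

$$ 10^{80} \times 10^{43} \times 10^{17} = 10^{140}, $$

vastly short of the roughly 10^1000 trials one would need before an event of probability 1 in 10^1000 became expected.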

Genetic Code
The next issue relates to the problem of explaining how a protein sequence came to be encoded in RNA or DNA using a genetic code in which each amino acid corresponds to sets of three nucleotides known as codons. The key challenge is finding a causal process for the encoding when no physical or chemical connection exists between a given amino acid and its corresponding codons. Torley argues that a connection does exist. He quotes Dennis Venema, who stated that certain codons bind directly to their amino acids. Unfortunately, this claim is false. Venema was referencing the research of Michael Yarus, but he misinterpreted it. Yarus states that no direct physical connection exists between individual amino acids and individual codons. He instead argues for correlations between amino acids and codons residing in the chains of nucleotides (aptamers) to which those amino acids bind. However, Koonin argued that such correlations exist only for a handful of amino acids, and they were the least likely ones to have formed on the early Earth.

Torley references the article where Koonin dismisses Yarus’s model, but he misinterprets him by implying that the code could be partly explained by some chemical connection. Koonin does reference the possibility that the evolution of the modern translation system was aided by chemical attractions between amino acids and pockets in tRNA. But he states that the sequences in those pockets would have been “arbitrary,” so they would not relate to the actual code. As a result, no physical explanation exists for the encoding of amino acid sequences into codons, nor can the decoding process be explained or directly linked to the encoding process. Such a linkage is crucial, since the encoding and decoding must use the same code. However, without any physical connection, the code must have preexisted the cell, particularly since both processes would have had to be instantiated at around the same time. The only place a code can exist outside of physical space is in a mind.

Examining Assumptions
In my responses to Torley I have addressed several problems with his interpretation of specific experiments. However, a more fundamental issue is the difference between our overall approaches to evaluating evidence, which I will illustrate with an analogy. Imagine that a boxing match is scheduled between Daniel Radcliffe, the actor who played Harry Potter, and Manny Pacquiao, a former world boxing champion. You learn that the fight will take place in three days and that Radcliffe recently broke a leg and both arms in a skiing accident. You tell your friend that you are certain Pacquiao will win. Your friend then says that you are mistaken, since Radcliffe will simply heal his body with a flick of his magic wand and then turn Pacquiao into a rat. You suddenly realize that your friend is conceiving of the fight in the imaginary world of Hogwarts from the fantasy series.

The same difference in perspectives exists between ID proponents and materialist scientists. The former wish to focus on experiments that attempt to accurately model conditions on the early Earth and on actual physical processes that have been demonstrated. In contrast, the latter wish to focus on highly orchestrated experiments which have no connection to realistic early conditions and on physical processes that reside only in the imaginations of researchers or in artificial worlds created through simulations. For instance, Torley references an article that proposes hydrogen peroxide could have assisted in generating homochiral mixtures of nucleotides, but the author fully acknowledges that his ideas are purely speculative. Likewise, Koonin describes a scenario of how the protein translation system could have evolved, but nearly every step is plausible only if intelligently guided. In other words, he is constantly smuggling in design without giving due credit. To accept any of these theories requires blind faith in materialist philosophical assumptions.

At the end of his article, Torley navigates out of the stormy seas of scientific analysis into the calmer waters of philosophical discourse, which is his specialty. He argues that one can never prove design. On this point he is correct, if by prove one means demonstrating with mathematical certainty. The ID program does not claim to offer the type of absolute proof a mathematician would use to demonstrate the truth of the Pythagorean Theorem. Instead, we are arguing that the identification of design is an inference to the best explanation, which can be made with the same confidence one would have in identifying design in the pattern of faces on Mount Rushmore or in a signal from space containing the schematics of a spaceship.


The skeptic could always argue that some materialistic explanation might eventually be found to explain those patterns, so design cannot be proven. Yet, the identification of design is still eminently reasonable. The evidence for design in the simplest cell is unambiguous since it contains energy conversion technology, advanced information processing, and automated assembly of all of its components, to name just a few features. The real issue is not the evidence but whether people’s philosophical assumptions would allow them to deny the preposterous and embrace the obvious.