Thursday 23 November 2023

Yet more on why ID is already mainstream.

Design: A Scientific Proxy for Intelligence


Paul Nelson speaks to the pervasive use of design detection in how we infer intelligent agency in various fields of investigation, from archaeology to arson:

What turned my head around about ID — in a good way — in 1991, well before I met Bill Dembski, was a single paragraph in one of his early papers. He pointed out that design detection, far from being an esoteric and inscrutable inference, lay in fact at the center of many normal human inquiries and activities.1

Fine-tuning, in which a low-probability event also matches an independently derived pattern or condition, can serve as a proxy for intelligent design.2 The inference of intelligent agency stems from our experience with generating designed systems, and this applies across multiple fields of inquiry.

Michael Egnor writes,

How could we discern design from non-design? It’s an issue central to archaeology, and obviously would be central to space archaeology. It would be great science to sort out criteria for detecting intelligent agency in an object in nature, especially in a situation in which we have no idea about the nature of the designer.

Yet we find design everywhere in living things, on an immense scale. There’s a breathtaking lack of self-awareness in the scientific community about intelligent design.3

Researchers use proxies to trace the existence and progression of a target parameter in cases where the historical values of the parameter are no longer directly accessible.

A Common Procedure in Science

Using proxies to infer information about the past is a common procedure in science. Here is how it works: to determine the trend of a parameter over Earth's geological history, scientists identify a measurable artifact that has a strong positive correlation with that parameter. If the artifact is recorded chronologically and can be accessed today (such as by taking a core sample), then the historical trend in the desired parameter (for example, atmospheric temperature) can be reconstructed.

The correlation between the proxy and the target parameter needs to be properly calibrated to avoid systematic errors. Furthermore, researchers need to establish a strong negative correlation between the proxy and other possible effects that might confound its association with the target parameter. For example:

Paleomagnetic records from several sources (volcanics, archeological artifacts, stalagmites, and sedimentary materials) that serve as proxy magnetometers provide access to geomagnetic field evolution before the age of systematic ground and satellite measurements or historical observations of Earth’s magnetic field.4

Triple oxygen isotope measurements of shales have been used as a proxy for the abundance of continental landmasses.5

Paleoclimatology is the study of past climates. Since it is not possible to go back in time to see what climates were like, scientists use imprints created during past climate, known as proxies, to interpret paleoclimate. Organisms, such as diatoms, forams, and coral serve as useful climate proxies. Other proxies include ice cores, tree rings, and sediment cores.

Ice core records: deep ice cores, such as those from Lake Vostok, Antarctica, the Greenland Ice Sheet Project, and North Greenland Ice Sheet Project can be analyzed for trapped gas, stable isotope ratios, and pollen trapped within the layers to infer past climate.6
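The calibration step described above can be sketched in a few lines of code. This is a minimal illustration, not a real paleoclimate workflow, and all the numbers are hypothetical: fit a linear relation between a measurable proxy (say, an isotope ratio) and the target parameter (say, temperature) on samples where both are known, then apply that relation to proxy-only historical records.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical calibration data: proxy values measured alongside
# directly observed temperatures.
calib_proxy = [1.0, 2.0, 3.0, 4.0]
calib_temp = [10.0, 12.0, 14.0, 16.0]
slope, intercept = fit_line(calib_proxy, calib_temp)

# Reconstruct the parameter from a proxy-only historical record.
historical_proxy = [2.5, 3.5]
reconstructed = [slope * x + intercept for x in historical_proxy]
```

The negative-correlation requirement in the text corresponds to checking that no other variable predicts the proxy as well as the target parameter does; otherwise the calibration is confounded.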

Intelligent design as a causative explanation can be inferred from proxies that consist of artifacts that contain a level of specified complexity or complex functionality and are known to be associated with intelligence. The negative correlation stems from the complete lack of any examples of such artifacts arising from non-intelligent sources. A negative correlation between design artifacts and natural processes also arises from our knowledge that natural processes systematically destroy information-rich systems with the passage of time.7

From the Caves of Qumran

The Dead Sea Scrolls are an example of a design artifact for which intelligence is inferred as the source.8 These scroll fragments, found in caves near Qumran in 1947-1956, are artifacts of a type known to be produced by intelligent humans, providing a strong positive correlation between the artifacts and intelligent human agency. Further, our comprehensive experience gives a negative correlation between scrolls of this type and any other source besides intelligence. Consequently, the scroll artifacts serve as an example of a robust proxy for human intelligence operating in the historical era to which the scrolls are dated.

When biological systems are examined, the level of specified complexity and complex functionality found in the molecular biochemistry of the cell, and in the irreducibly complex systems of living organisms, far exceeds anything ever designed and constructed by human intelligence. As a result, the mainstream scientific consensus is that this pervasive and profound complexity arose over time by non-intelligent forces of nature acting on fundamental particles. 

One reason that the origin of the specified complexity and complex functionality found in the molecular biochemistry of the cell is attributed to the fundamental forces of nature acting on elementary particles is that a strong positive correlation exists between random mixtures of particles and the subsequent formation of functional biochemistry. (Sarcasm alert!)

Elementary particles, primarily fermions interacting according to the strong force and the electromagnetic force, are known to form atoms which can combine into simple molecules such as water (H2O) and carbon dioxide (CO2). Slightly more complex molecules, including sugars and amino acids, have also been found to occur naturally. The natural formation of complex bio-polymers becomes problematic, however.

Attempting to get those amino acids to join into any sort of complex molecules has been one long study in failure.9

Suitable Agents for Producing Living Systems?

Is there any warrant to draw a positive correlation between random, natural processes and the artifacts of living systems manifesting profound, functional complexity? Can it be legitimately claimed that living systems serve as a valid proxy for the action of unguided forces of nature? Have unguided forces of nature shown themselves as suitable agents for producing living systems? Only if the forces of nature are acting within a living system to begin with. 

One of biology’s “universal laws” (attributed to Rudolf Virchow) states, “Every cell comes from a preexistent cell.” 

CANCELED SCIENCE, P. 212

Apart from biological reproduction, natural processes have never been known to produce life. So, living organisms and their fossilized remains lack any positive correlation with the forces of nature. Therefore, the existence of life in the history of Earth does not serve as a valid proxy for the actions of natural forces as the agency for producing such life. With the complete absence of a positive correlation between living systems and natural processes, there is no need to establish a negative correlation between the proxy of life and the purported agency of nature.

Despite these arguments, the mainstream scientific community may nonetheless dismiss artifacts of living systems as proxies pointing to intelligent agency. Perhaps a different proxy for intelligent agency would be more convincing. For this, I suggest that the ultimate proxy for intelligent agency is intelligence. “Artifacts” of intelligent minds are available for investigation on Earth today. If researchers are unconvinced that artifacts of biochemistry belonging to living organisms are sufficient proxies for intelligence, perhaps careful scrutiny of their own minds would suggest otherwise. Or, perhaps not.10

Just another revolution devouring its children?

 

The maths of ID

Bayesian Probability and Intelligent Design: A Beginner’s Guide


If the phrase “Bayesian calculus” makes you want to run for the hills, you’re not alone! Bayesian logic can sound intimidating at first, but if you give it a little time, you’ll understand how useful it can be for evaluating the evidence for design in the natural world. On a new episode of ID the Future, biologist Jonathan McLatchie gives us a beginner’s guide to Bayesian thinking and teaches us how it can be used to build a strong cumulative case for intelligent design, as well as how we can use it in our everyday lives.

It is one of the most important formulas in all of probability, and it has been central to scientific discovery for the last two centuries. At its heart, Bayes’s theorem, first developed by the 18th-century English statistician, philosopher, and minister Thomas Bayes, is a method for quantifying the confidence one should have in a particular belief or hypothesis. The process weighs how likely the evidence is if the hypothesis is true against how likely it is if the hypothesis is false, and updates one’s confidence in the hypothesis accordingly. Here, Dr. McLatchie explains what the theorem is, the components that comprise it, when it would typically be used, and some useful examples of Bayesian reasoning in action. 
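The update the theorem performs is easiest to see in odds form: posterior odds equal prior odds multiplied by the likelihood ratio. Here is a minimal sketch, with purely hypothetical numbers chosen for illustration:

```python
def posterior_probability(prior, p_e_given_h, p_e_given_not_h):
    """Bayes's theorem in odds form:
    posterior odds = prior odds * likelihood ratio,
    then convert the odds back to a probability."""
    prior_odds = prior / (1 - prior)
    likelihood_ratio = p_e_given_h / p_e_given_not_h
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Hypothetical numbers: a modest prior of 0.2, and evidence ten
# times more likely if the hypothesis is true than if it is false.
p = posterior_probability(0.2, 0.5, 0.05)
```

With these inputs the likelihood ratio is 10, so a prior of 0.2 (odds 1:4) becomes posterior odds of 2.5:1, i.e. a probability of about 0.71.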

Dr. McLatchie shows how Bayesian probability can be applied to the evidence for design in nature. First, he argues that the prior probability — the intrinsic plausibility of the hypothesis being true given the background information alone — for the design hypothesis is not low:

In the case of intelligent design and our inferences to design in biology, we have independent reasons, I would contend, to already think that a mind is involved in the origin of our cosmos, including the fine-tuning of the laws and constants of our universe…and the prior environmental fitness of nature.

Secondly, when you add in the evidence we’ve discovered of the complexity of living cells, the infusions of new biological information into the biosphere over time, the evidence for the Big Bang, and more, the cumulative case for intelligent design grows stronger. “If we suppose that a mind is involved,” says McLatchie,

then it’s not hugely improbable that we’d find information content in the cell, and that we’d have information processing systems and that we’d have irreducibly complex machines. But, on the other hand, it is overwhelmingly improbable, I would argue, that such information-rich systems and irreducibly complex machinery would exist on the falsity of the design hypothesis. And so you have this overwhelmingly top-heavy likelihood ratio.
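The “cumulative case” McLatchie describes corresponds, in Bayesian terms, to multiplying the likelihood ratios of independent pieces of evidence before converting back to a probability. A minimal sketch, with entirely hypothetical likelihood ratios:

```python
def cumulative_posterior(prior, likelihood_ratios):
    """Update prior odds by the product of the likelihood ratios
    of independent pieces of evidence (Bayes's theorem in odds
    form), then convert the result back to a probability."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Hypothetical ratios for three independent lines of evidence:
# even a modest prior of 0.1 grows quickly under their product.
p = cumulative_posterior(0.1, [5.0, 4.0, 3.0])
```

The multiplication is only valid when the pieces of evidence are probabilistically independent given the hypothesis; correlated evidence must not be double-counted.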

Download the podcast or listen to it here.

The iron fist of the emperor