
Friday, 1 December 2023

The design filter tells us when the dice are loaded?

 Defending Douglas Axe on the Rarity of Protein Folds


In 2000 and 2004, writing in the Journal of Molecular Biology, current Discovery Institute Senior Fellow Douglas Axe published seminal papers on the rarity of protein folds. Axe studied the beta-lactamase enzyme in E. coli and found that the likelihood of a chance sequence of 153 amino acids generating the stable, functional fold needed for the larger domain in that enzyme was as low as 1 in 10^77. Axe conducted this research while a post-doc at the Centre for Protein Engineering (CPE) in Cambridge. In his book Unbelievable: How Biology Confirms Our Intuition that Life Was Designed, Axe explains that this research had serious consequences for him.
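For a rough sense of the scale involved, here is a back-of-the-envelope sketch (ours, not part of Axe's published analysis): a 153-residue chain drawn from the 20 standard amino acids has 20^153 possible sequences, about 10^199 of them.

```python
import math

# Back-of-the-envelope scale check on Axe's 1-in-10^77 estimate.
# A 153-residue chain built from the 20 standard amino acids has
# 20^153 possible sequences.
chain_length = 153
alphabet = 20

log10_space = chain_length * math.log10(alphabet)
print(f"sequence space: about 10^{log10_space:.0f}")  # about 10^199

# At Axe's estimated functional fraction of 1 in 10^77, it is this
# *fraction* -- not the absolute count of folded sequences -- that a
# blind search through sequence space must contend with.
log10_functional_fraction = -77
print(f"functional fraction: 10^{log10_functional_fraction}")
```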

His critics have not failed to notice — among them theologian Rope Kojonen, to whom we’ll return shortly. In a series here, we have been responding to Kojonen’s conception of design in nature. The following examination and defense of Axe serves as a direct, empirical test of Kojonen’s design hypothesis.

A Confrontation with Alan Fersht

In a suspenseful passage from his book, Dr. Axe describes what happened to him when his post-doc advisor, Alan Fersht, confronted him about his affinities for intelligent design:

I was the first person in the lab one morning in February of 2002. Alan usually made his rounds through the labs later in the day when work was in full swing, but on this morning he dropped in early to have a word with me. He seemed tense. He approached me as if there were a pressing matter he needed to discuss, yet he seemed unable to initiate the discussion. 

After mentioning that he had just listened to a BBC radio program discussing intelligent design, Alan put a few questions to me, somewhat awkwardly.

“You know this William Dembski fellow, don’t you?”

“Yes.”

“And you know about his intelligent design theory.”

“Yes.”

“Tell me, then, who is the designer?”

A Sign of Trouble

Axe goes on to note that “Alan’s questioning didn’t seem to lead anywhere on that February morning.” But his advisor interrogating him about his personal religious beliefs was definitely a sign of trouble, and it also exposed the confusion and innate bias that many people have about intelligent design. Axe continues, discussing what was really going on behind the scenes:

Years later, an article in New Scientist magazine about Biologic Institute (titled “The God Lab”) revealed that one of my fellow scientists at the CPE had been pressing Alan to dismiss me because of my connection to ID. The article says Alan refused to do so, quoting him as saying, “I have always been fairly easy-going about people working in the lab. I said I was not going to throw him out. What he was doing was asking legitimate questions about how a protein folded.” According to the article, I left the CPE after “Axe and Fersht were in dispute with each other over the implications of work going on in Fersht’s lab.”

The truth is that Alan did, in the end, give in to the internal whistle-blower who wanted me removed, though I certainly accept his account of having resisted this for some time. When he did finally act, I interpreted the awkwardness of his action as an indication of his reluctance. There was no heart-to-heart conversation or even a word spoken face-to-face. When everyone gathered in the customary way to bid me farewell, Alan was conspicuously absent. All I received was an e-mail from Alan’s assistant on the eleventh of March 2002, succinctly stating that the CPE was “very short of [lab] bench space” and declaring Alan’s solution: “Please vacate as soon as possible and by the end of March latest.”

Scientific Objections to Axe’s Research 

So, Axe’s research on protein sequence rarity seems to have gotten him expelled from the Centre for Protein Engineering in Cambridge. But this was only the first occasion on which his results drew hostility. Quite a few critics have raised scientific objections to Axe’s research over the years. In our recent paper “On the Relationship between Design and Evolution,” reviewing Rope Kojonen’s book The Compatibility of Evolution and Design, we and our co-authors (Stephen Dilley and Emily Reeves) assess what those critics have said and why they got things wrong. As readers of this series will know, we critique Kojonen’s thoughtful attempt to harmonize mainstream evolutionary theory with his particular version of design. Here’s the relevant section from our paper:

Several studies demonstrate that, for many proteins, functional sequences occupy an exceedingly small proportion of physically possible amino acid sequences. For example, Axe (2000, 2004)’s work on the larger beta-lactamase protein domain indicates that only 1 in 10^77 sequences are functional — astonishingly rare indeed. Such rarity presents prima facie evidence that many proteins are very difficult to evolve by a blind evolutionary process of random mutation and natural selection.

Of course, a common rejoinder to this data is to claim that ‘protein rarity’ is only true for select proteins; many others are not so rare. That is, many proteins might have sequences with functions that are more common in sequence space and are thereby easier to evolve. As Kojonen (2021, p. 119) puts it, “others argue that functional proteins are much more common”. He specifically cites Tian and Best (2017) as a rebuttal to Axe (2004) on this point. Similarly, Venema (2018) objects to Axe (2004)’s research because he believes “functional proteins are not rare within sequence space”. Importantly, Kojonen is correct that some proteins are easier to evolve than others, and this point is pressed by some scientists — but nonetheless, a very large proportion of proteins seems beyond the reach of mutation and selection.

Indeed, Tian and Best (2017) present much data that actually support Axe’s general thesis for protein rarity. They reported that the functional probabilities for ten protein domains range from 1 in 10^24 to 1 in 10^126. Yet even if we grant generous assumptions towards evolution, additional research indicates that only three of the ten domains studied by Tian and Best could have possibly emerged through an undirected evolutionary search of sequence space. Specifically, Chatterjee et al. (2014) calculated that there are at most 10^38 trials available over the entire history of life on Earth to evolve a new protein. Therefore, if a protein domain has a probability of less than 10^−38, then it is unlikely to emerge via a process of random mutation and natural selection. Seven of the ten domains studied by Tian and Best (2017) had probabilities below 10^−38. Thus, even though Kojonen (2021, p. 119) cites Tian and Best (2017) to argue that the “specificity required for achieving a functional amino acid sequence” may be less for some proteins, their research provides strong empirical evidence that many proteins have functional sequences that are so rare as to be beyond the reach of standard evolutionary mechanisms.

Kojonen (2021, p. 119) also cites Taylor et al. (2001) to counter (or mitigate) Axe’s results on protein rarity. Taylor et al. (2001) reported that the probability of evolving a chorismate mutase enzyme is 1 in 10^23, which Kojonen (2021) takes to suggest that functional protein sequences can be “more common than in the case of the protein studied by Axe”. Yet the fact that chorismate mutase represents less rare sequences is unsurprising given that its function requires a simpler fold than typical enzymes such as beta-lactamase studied by Axe (2004). Could chorismate mutase evolve? If it could, this still does not demonstrate the feasibility of Kojonen’s thesis: the possibility that some simpler proteins could evolve does not mean that all (or even most) more complex proteins could evolve. But for [Kojonen’s model] to succeed, evolutionary mechanisms must be up to the task in all cases, not just some.

The possibility of evolving relatively simpler proteins, however, raises another objection. Hunt (2007) asks: If a simple protein could evolve in the first place, might it also evolve further into a more complex protein? More specifically, if one assumes that a comparatively simple protein such as chorismate mutase could evolve, why could it not also evolve into a more elaborate protein, including one with a functional sequence that is as rare as those studied by Axe?
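The threshold reasoning quoted above can be sketched numerically. The 10^38 trial bound is Chatterjee et al. (2014)’s figure; the ten per-domain probabilities below are illustrative placeholders spanning the 1 in 10^24 to 1 in 10^126 range reported by Tian and Best (2017), not their published per-domain values.

```python
# A domain is plausibly "reachable" by an undirected search only if its
# functional probability exceeds the estimated upper bound on evolutionary
# trials over the history of life (~10^38, per Chatterjee et al. 2014).

MAX_TRIALS = 1e38  # upper bound on trials over the history of life on Earth

# Hypothetical probabilities for ten domains, spanning the 10^-24..10^-126
# range reported by Tian and Best (2017). Illustrative values only.
domain_probabilities = [1e-24, 1e-30, 1e-35, 1e-40, 1e-50,
                        1e-60, 1e-75, 1e-90, 1e-110, 1e-126]

def reachable(p, trials=MAX_TRIALS):
    """Reachable if the expected number of successes (p * trials) is >= 1."""
    return p * trials >= 1

within_reach = [p for p in domain_probabilities if reachable(p)]
beyond_reach = [p for p in domain_probabilities if not reachable(p)]

print(f"within reach: {len(within_reach)} domains")  # probability >= 10^-38
print(f"beyond reach: {len(beyond_reach)} domains")  # probability <  10^-38
```

With placeholders in that range, three of the ten domains clear the 10^−38 line and seven fall below it, matching the tally described in the excerpt.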

Writing here recently, Brian Miller used an easy-to-grasp analogy to illustrate why a simple protein could not evolve into a more complex protein of even modest rarity. See, “Proteins Are Rare and Isolated — And Thus, Cannot Evolve.”

Responding to Dennis Venema

One of Axe’s critics, Dennis Venema, discusses intrinsically disordered proteins. We respond:

For example, Venema (2018) cites intrinsically disordered proteins (IDPs), noting they “do not need to be stably folded in order to function” and therefore represent a type of protein with sequences that are less tightly constrained and are presumably therefore easier to evolve. Yet IDPs fulfill fundamentally different types of roles (e.g., binding to multiple protein surfaces) compared to the proteins with well-defined structures that Axe (2004) studied (e.g., crucial enzymes involved in catalyzing specific reactions). Axe (2018) also responds by noting that Venema (2018) understates the complexity of IDPs. Axe (2018) points out that IDPs are not entirely unfolded, and “a better term” would be to call them “conditionally folded proteins”. Axe (2018) further notes that a major review paper on IDPs cited by Venema (2018) shows that IDPs are capable of folding — they can undergo “coupled folding and binding”; there is a “mechanism by which disordered interaction motifs associate with and fold upon binding to their targets” (Wright and Dyson 2015). That paper further notes that IDPs often do not perform their functions properly after experiencing mutations, suggesting they have sequences that are specifically tailored to their functions: “mutations in [IDPs] or changes in their cellular abundance are associated with disease” (Wright and Dyson 2015). In light of the complexity of IDPs, Axe (2018) concludes:

“If Venema (2018) pictures these conditional folders as being easy evolutionary onramps for mutation and selection to make unconditionally folded proteins, he’s badly mistaken. Both kinds of proteins are at work in cells in a highly orchestrated way, both requiring just the right amino-acid sequences to perform their component functions, each of which serves the high-level function of the whole organism. (Axe 2018)”

Venema (2018) also argues that functional proteins are easy to evolve. He cites Neme et al. (2017), a team that genetically engineered E. coli to produce a ∼500 nucleotide RNA (150 of which are random) that encodes a 62 amino-acid protein (50 of which are random). The investigators reported that 25% of the randomized sequences enhance the cell’s growth rate. Unfortunately, they misinterpreted their results — a fact pointed out by Weisman and Eddy (2017), who raised “reservations about the correctness of the conclusion of Neme et al. that 25% of their random sequences have beneficial effects”. Here is why they held those reservations: the investigators in Neme et al. (2017) did not compare the growth of cells containing inserted genetic code with normal bacteria but rather with cells that carry a “zero vector” — a stretch of DNA that generates a fixed 350 nucleotide RNA (the randomized 150 nucleotides are excluded from this RNA). Weisman and Eddy (2017) explain how the zero vector “is neither empty nor innocuous”, since it produces “a 38 amino-acid open reading frame at high levels” of expression. Yet since this “zero vector” and its transcripts provide no benefit to the bacterium, its high expression wastes cellular resources, which, as Weisman and Eddy (2017) note, “is detrimental to the E. coli host”. The reason the randomized peptide sometimes provided a relative benefit to the E. coli bacteria is that, in some cases (25%), it was probably interfering with production of the “zero vector” transcript and/or protein, thus sparing the E. coli host from wasting resources. As Weisman and Eddy (2017) put it, it is “easy to imagine a highly expressed random RNA or protein sequence gumming up the works somehow, by aggregation or otherwise interfering with some cellular component”. Axe (2018) responds to Neme et al. (2017) this way:

“Any junk that slows the process of making more junk by gumming up the works a bit would provide a selective benefit. Such sequences are “good” only in this highly artificial context, much as shoving a stick into an electric fan is “good” if you need to stop the blades in a hurry.”

In other words, at the molecular level, this random protein was not performing some complex new function but rather was probably interfering with its own RNA transcription and/or translation — a “devolutionary” hypothesis consistent with Michael Behe’s thesis that evolutionarily advantageous features often destroy or diminish function at the molecular level (Behe 2019). In any case, what Neme et al. (2017) showed is that a quarter of the randomized sequences were capable of inhibiting E. coli from expressing this “zero vector”, but they provided no demonstrated benefit to unmodified normal bacteria.

Finally, Venema (2018) cites Cai et al. (2008) to argue for the de novo origin of a yeast protein, BSC4, purportedly showing that “new genes that code for novel, functional proteins can pop into existence from sequences that did not previously encode a protein”. However, the paper provides no calculations about the rarity of the protein’s sequence nor its ability to evolve by mutation and selection. Rather, the evidence for this claim is entirely inferred, indirect, and based primarily upon the limited taxonomic range of the gene, which led the authors to infer it was newly evolved. Axe (2018) offers an alternative interpretation:

“The observable facts are what they are: brewers’ yeast has a gene that isn’t found intact in similar yeast species and appears to play a back-up role of some kind. The question is how to interpret these facts. And this is where Venema and I take different approaches. … Other interpretations of the facts surrounding BSC4 present themselves, one being that similar yeast species used to carry a similar gene which has now been lost. The fact that the version of this gene in brewers’ yeast is interrupted by a stop codon that reduces full-length expression to about 9 percent of what it would otherwise be seems to fit better with a gene on its way out than a gene on its way in.”

A Counterexample to Axe’s Research 

In our paper, we elaborate on the enzyme chorismate mutase, which was cited by Kojonen as a counterexample to Axe’s research. We first explain that the functional complexity of chorismate mutase really is not comparable to the beta-lactamase enzyme studied by Axe:

The function of chorismate mutase is to catalyze the conversion of chorismate to prephenate through amino acid side chains in its active site, thereby restricting chorismate’s conformational degrees of freedom. Essentially, it is merely providing a chamber or cavity that holds a particular molecule captive, thereby limiting that molecule’s ability to change. In contrast, beta-lactamase requires the precise positioning and orientation of amino acid side chains from separate domains that contribute to hydrolyzing the peptide bond of the characteristic four-membered beta-lactam ring. This function requires a more complex fold compared to chorismate mutase. Axe (2004) specifically compares beta-lactamase to chorismate mutase and notes that the beta-lactamase fold “is made more complex by its larger size, and by the number of structural components (loops, helices, and strands) and the degree to which formation of these components is intrinsically coupled to the formation of tertiary structure (as is generally the case for strands and loops, but not for helices)”.

We then elaborate on why Kojonen’s attempts to invoke special “fine-tuning” to allow the evolution of proteins like chorismate mutase could actually cause problems for the evolvability of other proteins. “No Free Lunch” theorems suggest that it’s very difficult to imagine a fine-tuning scenario that would globally assist in the evolution of all types of proteins. That is because biasing to allow the evolvability of one type of protein would likely make it more difficult to evolve other types of proteins:

Kojonen tries to overcome this problem by arguing that the physical properties of proteins are “finely-tuned” to bias the clustering of functional sequences such that a very narrow path could extend to complex proteins with rare functional sequences. The biasing would result in the prevalence of functional sequences along a path to a new protein being much higher than in other regions of sequence space. But such biasing could not possibly assist the evolution of most proteins. Biasing in the distribution of functional sequences in sequence space due to physical laws is arguably subject to the same constraints as the biasing in play in the algorithms employed by evolutionary search programs. Consequently, protein evolution falls under “No Free Lunch” theorems that state that no algorithm will in general find targets (e.g., novel proteins) any faster than a random search. An algorithm might assist in finding one target (e.g., specific protein), but it would just as likely hinder finding another (Miller 2017; Footnote 12). Thus, although Kojonen acknowledges that proteins are sometimes too rare to have directly emerged from a random search, he fails to appreciate the extent to which rarity necessitates isolation and why this must often pose a barrier to further protein evolution. Different proteins have completely different compositions of amino acids, physical properties, conformational dynamics, and functions. Any biasing that might assist in the evolution of one protein would almost certainly oppose the evolution of another. In other words, the probability of a continuous path leading to some proteins would be even less likely than if the distribution of functional sequences were random.

We consider this to be one of the most comprehensive collections of responses to Axe’s critics published to date and we hope our paper is useful in that regard.

The Bigger Picture

Stepping back, it may be helpful to say a brief word about how our defense of Axe fits into the overall argument in our Religions article. A key feature of Kojonen’s model is his claim that, in order for evolution to successfully produce biological complexity, it must rely on “fine-tuned” preconditions (and smooth fitness landscapes). These preconditions (and landscapes) are part of the “design” aspect of Kojonen’s model: in his view, God designed the laws of nature, which gave rise to fine-tuned preconditions and landscapes that in turn allow evolution to succeed.

If, as Kojonen claims, there really are fine-tuned preconditions and smooth fitness landscapes, then they should be empirically detectable. One can analyze, for example, whether functional protein folds can evolve into different functional protein folds by means of natural processes such as the mutation-selection mechanism. Douglas Axe’s work — along with the work of other scientists — shows that this is implausible. Proteins cannot evolve in this way. Kojonen’s empirical claim is false. Thus, his specific claims about design are false. The universe does not have the fine-tuned preconditions and smooth landscapes that his model says arose from the activity of a Designer. 

Thus, our main point is not to criticize evolution per se. Yet because of the way Kojonen frames the issue, it turns out that the same evidence that poses problems for his understanding of design also raises problems for mainstream evolutionary theory. In a sense, two birds fall with one stone.

Thursday, 30 November 2023

Vestigial science?

 

Human Vestigial Organs: Some Contradictions in Darwinian Thinking

Wolf-Ekkehard Lönnig


In my recent article on vestigial organs in man I discuss two key points: first, one of the most egregious contradictions within the present theory of evolution; and second, the recently “discovered” non-existence of a rudimentary organ that has been hailed over the last 140 years in most embryology textbooks and papers as a proof of the origin of humans from lower vertebrates.

Let’s take those here in reverse order. Start with the second point: The definition of vestigial (in the original evolutionary sense) is: “Of a body part or organ: remaining in a form that is small or imperfectly developed and not able to function.” Or according to Darwin and Haeckel, a vestigial organ is a rudimentary structure that, “although morphologically present, nevertheless does not exist physiologically, in that it does not carry out any corresponding functions” (Haeckel 1866, p. 268, similarly Darwin 1872, p. 131). (For all references, see my paper.)

An Outstanding Illustration

Among these organs, the pronephros was, at least until recently, taken as an outstanding illustration for the assertion that man is “a veritable walking museum of antiquities” (Horatio Hackett Newman 1925). Contemporary Darwinians such as Donald R. Prothero (2020) heartily agree.

What is the pronephros?

Mammalian kidneys develop in three successive stages, generating three distinct excretory structures known as the pronephros, the mesonephros, and the metanephros (Fig. 1.2). The pronephros and mesonephros are vestigial structures in mammals and degenerate before birth; the metanephros is the definitive mammalian kidney. (Scott et al. 2019)

However, directly after these sentences, we read that the early stages of kidney development are required for further developmental processes (pp. 3-4):

The early stages of kidney development are required for the development of the adrenal glands and gonads that also form within the urogenital ridge. Furthermore, many of the signaling pathways and genes that play important roles in the metanephric kidney appear to play parallel roles during the development of the pronephros and mesonephros.

Nevertheless, Scott et al. assert again (in their explanation for their Fig. 1.2):

The pronephros and mesonephros are vestigial structures in mice and humans and are regressed by the time the metanephros is well developed.

Meanwhile, we read in Wikipedia (2023) about the pronephros:

The organ is active in adult forms of some primitive fish, like lampreys or hagfish. It is present at the embryo of more advanced fish and at the larval stage of amphibians where it plays an essential role in osmoregulation. In human beings, it is rudimentary, appears at the end of the third week (day 20) and is replaced by the mesonephros after 3.5 weeks.

Nevertheless, the article continues:

Despite this transient appearance in mammals, the pronephros is essential for the development of the adult kidneys. The duct of the mesonephros forms the Wolffian duct and ureter of the adult kidney. The embryonic kidney and its derivatives also produce the inductive signals that trigger formation of the adult kidney.

Here are several marked contradictions. The human pronephros is “vestigial,” “rudimentary,” yet “essential”? One wonders if the pronephros and mesonephros are really vestigial structures at all — in the sense of “an atavistic formation which, like a ruin, would only be of interest as a monument.” Or rather, do they in fact have important functions?

Larsen’s Human Embryology (6th Edition 2021, p. 369) states:

During embryonic development, three sets of nephric systems develop in craniocaudal succession from the intermediate mesoderm. These are called pronephros, mesonephros, and metanephros (or definitive kidneys). Formation of the pronephric kidney (i.e., pronephros) lays the foundation for induction of the metanephros. Hence, formation of a pronephros is really the start of a developmental cascade leading to the formation of the definitive kidney.

Thus, by having vital roles as inducers, the pronephros and mesonephros are crucial to the developmental cascade that leads to the formation of the permanent kidneys. They are definitely not “useless rudiments of once-functional systems.” It seems they are unquestionably not vestigial or atavistic formations, comparable to ruins in mammalian ontogeny.

In Today’s News

But wait. There is this “breaking news” on kidney development: The pronephros does not even exist in mammals: “A recent detailed analysis of human embryos concluded there is in fact no pronephric kidney even present in humans, or any mammal, and they are present and functional only in animals that have an aquatic life phase” (Peter D. Vize 2023, p. 23).

So much for this vestigial organ in man.

As to the first point, one of the most egregious contradictions within the modern theory of evolution, I would like to encourage the reader to check the following point: The evolutionary molecular biologist and Nobel laureate François Jacob emphasized that:

In the genetic program … is written the result of all past reproductions, the collection of successes, since all traces of failures have disappeared. The genetic message, the program of the present-day organism, therefore resembles a text without an author, that a proof-reader has been correcting for more than two billion years, continually improving, refining and completing it, gradually eliminating all imperfections.

Now, can Darwinians really have both — omnipotent natural selection eliminating all imperfections and, at the same time, human beings full of superfluous rudimentary organs constituting “a veritable walking museum of antiquities”?

Let the reader decide. 




The design inference is science's security officer?

 Mendel’s Peas and More: Inferring Data Falsification in Science


Editor’s note: We are delighted to welcome the new and greatly expanded second edition of The Design Inference, by William Dembski and Winston Ewert. The following is excerpted from Chapter 2, “A Sampler of Design Inferences.”

Drawing design inferences is not an obscure or rare occurrence — it happens daily. We distinguish between a neatly folded pile of clothes and a random heap, between accidental physical contact and a deliberate nudge, between doodling and a work of art. Furthermore, we make important decisions based on this distinction. This chapter examines a variety of areas where we apply the design inference. In each case, what triggers a design inference is a specified event of small probability. 

The eminent statistician Ronald Aylmer Fisher uncovered a classic case of data falsification when he analyzed Gregor Mendel’s data on peas. Fisher inferred that “Mendel’s data were fudged,” as one statistics text puts it, because the data matched Mendel’s theory too closely. Interestingly, the coincidence that elicited this charge of data falsification was a specified event whose probability was roughly 4 in 100,000, or 1 in 25,000. By everyday standards, this probability will seem small enough, but it is huge compared to many of the probabilities we will be encountering. In any case, Fisher saw this probability as small enough to draw a design inference, concluding that Mendel’s experiment was compromised and charging Mendel’s gardening assistant with deception. 
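Fisher’s “too good to be true” reasoning can be illustrated with a single-experiment version of his test, using the widely cited 6022:2001 yellow-to-green seed counts from Mendel’s F2 data. (This is a minimal sketch; Fisher’s headline figure came from aggregating chi-square across all of Mendel’s experiments, a step omitted here.)

```python
import math

def chi_square_1df_cdf(x):
    # For 1 degree of freedom, P(X <= x) = erf(sqrt(x/2)).
    return math.erf(math.sqrt(x / 2))

def chi_square_stat(observed, expected):
    """Pearson goodness-of-fit statistic: sum of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Mendel's F2 seed-color counts, tested against the expected 3:1 ratio.
observed = [6022, 2001]
n = sum(observed)
expected = [n * 3 / 4, n * 1 / 4]

x = chi_square_stat(observed, expected)        # about 0.015 -- a very close fit
p_too_good = chi_square_1df_cdf(x)             # chance of a fit at least this close
print(f"chi-square = {x:.4f}")
print(f"P(agreement at least this close, by chance) = {p_too_good:.3f}")
```

For one experiment the fit is close but unremarkable (a fit this good or better occurs roughly 10% of the time by chance); it is the cumulative closeness across all of Mendel’s experiments that becomes the specified event of small probability.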

Slutsky — Fast and Furious

For a more recent example of data falsification in science, consider the case of UCSD heart researcher Robert A. Slutsky. Slutsky was publishing fast and furiously. At his peak, he was publishing one new paper every ten days. Intent on increasing the number of publications in his curriculum vitae, he decided to lift a two-by-two table of summary statistics from one of his articles and insert it — unchanged — into another article. Data falsification was clearly implicated because of the vast improbability that data from two separate experiments should produce the same summary table of statistics. When forced to face a review board, Slutsky resigned his academic position rather than try to explain how this coincidence could have occurred without any fault on his part. The incriminating two-by-two table that appeared in both articles consisted of four blocks each containing a three-digit number. Given therefore a total of twelve digits in these blocks, the odds would have been roughly 1 in a trillion (= 10^12) that this same table might have appeared by chance twice in his research.

Why did Slutsky resign rather than defend a 1 in 10^12 improbability? Why not simply attribute the coincidence to chance? There were three reasons. First, at the review board Slutsky would have had to produce the experimental protocols for the two experiments that supposedly gave rise to the identical two-by-two tables. If he was guilty of data falsification, these protocols would have incriminated him. Second, even if the protocols were lost, the sheer improbability of producing so unlikely a match between the two papers would have been enough to impugn the researcher’s honesty. Once a specification is in place (the two-by-two table in one paper here specifying the table in the other) and the probabilities become too small, the burden of proof, at least within the scientific community, shifts to the experimenter suspected of data falsification. In lay terms, Slutsky was self-plagiarizing. And third, Slutsky knew that this case of fraud was merely the tip of the iceberg. He had been committing other acts of research fraud right along, and these were now destined all to come into the open.
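The 1 in 10^12 figure is simple digit counting, as this sketch shows. Treating each digit as an independent, uniform draw is a rough heuristic, not a rigorous model: real data digits are not uniformly distributed, and a table’s entries are constrained by the experiment that produced them.

```python
# A two-by-two table of four three-digit numbers exposes twelve digits.
# If each digit could independently take any of ten values, the chance
# that a second, independent experiment reproduces the identical table
# is about one in 10^12. (Heuristic only: real digits are not uniform.)

digits_per_entry = 3
entries = 2 * 2                               # a two-by-two table
total_digits = digits_per_entry * entries     # 12 digits in all

odds = 10 ** total_digits
print(f"roughly 1 in {odds:,}")               # roughly 1 in 1,000,000,000,000
```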

Moving from Medicine to Physics

Now consider the case of Jan Hendrik Schön, which parallels the Slutsky case almost point for point. On May 23, 2002, the New York Times reported on the work of “J. Hendrik Schön, 31, a Bell Labs physicist in Murray Hill, NJ, who has produced an extraordinary body of work in the last two and a half years, including seven articles each in Science and Nature, two of the most prestigious journals.” Despite this track record, his career was on the line. The New York Times reported further that Schön published “graphs that were nearly identical even though they appeared in different scientific papers and represented data from different devices. In some graphs, even the tiny squiggles that should arise from purely random fluctuations matched exactly.” (The identical graphs that did in Schön parallel the identical two-by-two tables that did in Slutsky.) Bell Labs therefore appointed an independent panel to determine whether Schön was guilty of “improperly manipulating data in research papers published in prestigious scientific journals.” The hammer fell in September 2002 when the panel concluded that Schön had indeed falsified his data, whereupon Bell Labs fired him.

Exactly how a design inference was drawn in the Schön case is illuminating. In determining whether Schön’s numbers were made up fraudulently, the panel noted, if only tacitly, that the first published graph provided a pattern identified independently of the second and thus constituted the type of pattern that, in the presence of improbability, could negate chance to underwrite design (i.e., the pattern was a specification). And indeed, the match between the two graphs in Schön’s articles was highly improbable assuming the graphs arose from random processes (which is how they would have had to arise if, as Schön claimed, they resulted from independent experiments). As with the matching two-by-two tables in the Slutsky example, the match between the two graphs of supposedly random fluctuations would have been too improbable to occur by chance. With specification and improbability both evident, a design inference followed.

But, as noted earlier, a design inference, by itself, does not implicate any particular intelligence. So how do we know that Schön was guilty? A design inference shows that Schön’s data were cooked. It cannot, without further evidence, show that Schön was the chef. To do that required a more detailed causal analysis — an analysis performed by Bell Labs’ independent panel. From that analysis, the panel concluded that Schön was indeed guilty of data falsification. Not only was he the first author on the problematic articles, but he alone among his co-authors had access to the experimental devices that produced the disturbingly coincident outcomes. Moreover, it was Schön’s responsibility to keep the experimental protocols for these research papers. Yet the protocols mysteriously vanished when the panel requested them for review. The circumstantial evidence connected with this case not only underwrote a design inference but established Schön as the designer responsible.

And from Physics to Parapsychology

As a final example of where data falsification becomes an issue facing science, consider efforts to debunk parapsychology. Parapsychological experiments attempt to show that parapsychological phenomena are real by producing a specified event of small probability. Persuaded that they’ve produced such an event, parapsychological researchers then explain it in terms of a quasi-design-like theoretical construct called psi (i.e., a non-chance factor or faculty supposedly responsible for such events). 

For instance, shuffle some cards and then have a human subject guess their order. Subjects rarely, if ever, guess the correct order with 100 percent accuracy. But to the degree that a subject guesses correctly, the improbability of this coincidence (which will then constitute a specified event of small probability) is regarded as evidence for psi. In attributing such coincidences to psi, the parapsychologist will draw a design inference. The debunker’s task, conversely, will then be to block the parapsychologist’s design inference. In practice, this will mean one of two things: either showing that sloppy experimental method was used that somehow signaled to the subject the order of the cards and thereby enabled the subject, perhaps inadvertently, to overcome chance; or else showing that the experimenter acted fraudulently, whether by making up the data or by otherwise massaging the data to provide evidence for psi. Note that the debunker is as much engaged in drawing a design inference as the parapsychologist — it’s just that one implicates the parapsychologist in fraud, the other implicates psi.
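The scale of improbability in such card-guessing experiments is easy to make concrete. As a minimal sketch (the function name and deck sizes here are illustrative, not from the text), the chance of guessing the exact order of a fairly shuffled deck of n distinct cards is 1 in n!:

```python
from math import factorial

def prob_perfect_guess(n):
    """Chance of guessing the exact order of n distinct,
    fairly shuffled cards in a single attempt: 1 in n!."""
    return 1 / factorial(n)

# Even a modest deck makes a perfect guess a specified event
# of vanishingly small probability.
print(prob_perfect_guess(5))   # 1 in 120
print(prob_perfect_guess(25))  # roughly 6e-26
```

Whether such an event, if observed, evidences psi or fraud is then precisely what the competing design inferences dispute.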

Keeping Science Honest

The takeaway is that science needs the design inference to keep itself honest. In the years since the first edition of this book was published, reports of fraud in science have continued to accumulate. The publish-or-perish mentality that incentivizes inflating the number of one’s publications regardless of quality has only gotten worse. That mentality moves easily from a haste-makes-waste sloppiness to self-serving fudginess to full-orbed fraudulence. Data falsification and other forms of scientific fraud, such as plagiarism, are far too common in science. What keeps scientific fraud in check is our ability to detect it, and it’s the design inference that does the detecting. 

We’ve now seen that the design inference makes design readily detectable in everyday life. Moreover, we’ve just seen that its ability to make design detectable in the data of science is central to keeping scientists honest. The grand ambition of this book is to show that the design inference makes design part of the very fabric of science. 

There are no good guys VI: Black September.

 

Wednesday, 29 November 2023

First Corinthians Chapter 6 New international version

 6.If any of you has a dispute with another, do you dare to take it before the ungodly for judgment instead of before the Lord’s people? 2Or do you not know that the Lord’s people will judge the world? And if you are to judge the world, are you not competent to judge trivial cases? 3Do you not know that we will judge angels? How much more the things of this life! 4Therefore, if you have disputes about such matters, do you ask for a ruling from those whose way of life is scorned in the church? 5I say this to shame you. Is it possible that there is nobody among you wise enough to judge a dispute between believers? 6But instead, one brother takes another to court—and this in front of unbelievers!


7The very fact that you have lawsuits among you means you have been completely defeated already. Why not rather be wronged? Why not rather be cheated? 8Instead, you yourselves cheat and do wrong, and you do this to your brothers and sisters. 9Or do you not know that wrongdoers will not inherit the kingdom of God? Do not be deceived: Neither the sexually immoral nor idolaters nor adulterers nor men who have sex with men 10nor thieves nor the greedy nor drunkards nor slanderers nor swindlers will inherit the kingdom of God. 11And that is what some of you were. But you were washed, you were sanctified, you were justified in the name of the Lord Jesus Christ and by the Spirit of our God.


Sexual Immorality


12“I have the right to do anything,” you say—but not everything is beneficial. “I have the right to do anything”—but I will not be mastered by anything. 13You say, “Food for the stomach and the stomach for food, and God will destroy them both.” The body, however, is not meant for sexual immorality but for the Lord, and the Lord for the body. 14By his power God raised the Lord from the dead, and he will raise us also. 15Do you not know that your bodies are members of Christ himself? Shall I then take the members of Christ and unite them with a prostitute? Never! 16Do you not know that he who unites himself with a prostitute is one with her in body? For it is said, “The two will become one flesh.” 17But whoever is united with the Lord is one with him in spirit.


18Flee from sexual immorality. All other sins a person commits are outside the body, but whoever sins sexually, sins against their own body. 19Do you not know that your bodies are temples of the Holy Spirit, who is in you, whom you have received from God? You are not your own; 20you were bought at a price. Therefore honor God with your bodies.

Getting the "juice" for a journey to the Jovian satellites

 

The cell is a city under siege?

 

On the universe as God

 A Philosopher Rejects the Multiverse but Embraces Mythology


The fine-tuning of our universe is, of course, widely seen as providing evidence for intelligent design. And rightly so, as Stephen Meyer shows in Return of the God Hypothesis. This, however, leaves many a scholar in a quandary. Thus, in a recent article at The Conversation, Durham University philosopher Phillip Goff presents his views on the fine-tuning of the physical parameters, the multiverse postulate, and cosmic purpose. 

Goff acknowledges the significance of the physical constants of the universe and their fine-tuning for life.

One of the most startling scientific discoveries of recent decades is that physics appears to be fine-tuned for life.

Physicist Paul Davies, an agnostic, also acknowledges the reality of fine-tuning and its significance for our lives.

If almost any of the basic features of the universe, from the properties of atoms to the distribution of the galaxies, were different, life would very probably have been impossible… On the face of it, the universe does look as if it had been designed by an intelligent creator.1

Goff’s example of fine-tuning echoes the Goldilocks analogy that Davies uses:

To allow for the possibility of life, the strength of dark energy had to be, like Goldilocks’s porridge, “just right.”

A God-Substitute 

As Goff seeks to interpret the evidence, he reveals the philosophical discomfort that fine-tuning evokes in those who prefer a naturalistic explanation for the universe. In principle, science usually proceeds along these lines: naturalistic explanations for the phenomena of our universe are indeed appropriately sought first. The multiverse scenario, however, has ballooned up to serve as a God-substitute for those with a worldview excluding the possibility of metaphysical causes.

Some physicists aren’t too bothered by the seemingly fine-tuned cosmos. Others have found comfort in the multiverse theory. If our universe is just one of many, some would, statistically speaking, end up looking just like ours.

Goff takes issue with the current default understanding of the multiverse. The attempted rebuttal to the fine-tuning evidence typically proceeds from the following assumption:

If there are enough universes, with different numbers in their physics, it becomes likely that some universe is going to have the right numbers for life.

Multiverse scenarios assume that a putative universe-generating mechanism endows its offspring with physical laws and parameters spanning a wide spectrum of possibilities. However, this notion flies in the face of what we observe as a general feature of the physical realm.

In nature, chance interactions do not necessarily lead to an unlimited variety of outcomes but tend to produce limited variation. For example, throughout the 13.8-billion-year history of the universe, only a finite number of elements (about 94) have ever formed by natural processes. This limitation is a result of constraints on nature due to the laws of physics. The limitations imposed by those laws will prevent the natural formation of elements with, say, 200 protons, or an isotope of carbon with 53 neutrons — no matter how long we might wait.


Unlimited Physical Outcomes?

In light of all we know of this universe, even if a multiverse of other universes exists, it’s unreasonable to suppose that a near-infinite variety of physical outcomes will result within those universes. 

Goff acknowledges that the idea of the multiverse is consistent with the physics of cosmic inflation, but he denies its utility as a valid explanation for the specific fine-tuning in our universe. 

The scientific theory of inflation — the idea that the early universe blew up hugely in size — supports the multiverse. If inflation can happen once, it is likely to be happening in different areas of space — creating universes in their own right. While this may give us tentative evidence for some kind of multiverse, there is no evidence that the different universes have different numbers in their local physics.

When this particular universe was created, as in a die throw, it still had a specific, low chance of getting the right numbers.

Limited variation in physical parameters is to be expected. Why, then, would this particular universe, the only observable one, have so many parameters fine-tuned to a razor’s edge in support of life?

Goff identifies an additional philosophical error committed by those who appeal to the multiverse.

Experts in the mathematics of probability have identified the inference from fine-tuning to a multiverse as an instance of fallacious reasoning…. Specifically, the charge is that multiverse theorists commit what’s called the inverse gambler’s fallacy.

They think: “Wow, how improbable that our universe has the right numbers for life; there must be many other universes out there with the wrong numbers!”

A low-probability event is not explained by postulating a fictitious multitude of other players in the game. The only game in town is our observable universe, and its highly specific suite of parameters, if naturally occurring, must be explained by what we know exists in nature, not by appealing to what cannot be observed in nature.
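The inverse gambler’s fallacy can be checked by simulation: the probability that our particular throw comes up double-six is unaffected by how many unrelated throws happen elsewhere. The following is a rough sketch with illustrative names and trial counts, not anything from Goff’s article:

```python
import random

def our_throw_is_double_six(num_other_throws):
    """Roll 'our' pair of dice once, alongside any number of unrelated
    pairs; only our own pair bears on whether our throw hit the target."""
    ours = (random.randint(1, 6), random.randint(1, 6))
    _others = [(random.randint(1, 6), random.randint(1, 6))
               for _ in range(num_other_throws)]
    return ours == (6, 6)

random.seed(1)
trials = 100_000
for others in (0, 50):
    hits = sum(our_throw_is_double_six(others) for _ in range(trials))
    print(others, hits / trials)  # both rates hover near 1/36
```

However many extra dice are in play, the observed frequency for our labeled throw stays near 1/36: postulating other players does not explain our result.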

The Anthropic Principle

Goff also addresses another common argument made by multiverse proponents:

At this point, multiverse theorists bring in the “anthropic principle” — that because we exist, we could not have observed a universe incompatible with life.

Logically, who can argue with this? If the universe’s parameters didn’t allow life, nobody would be here to discuss the issue, or read articles about it, either. However, this dismissal glosses over a fine point — the universe must be tuned to allow life, or else we wouldn’t be here, but no stretch of logic demands that it be exquisitely fine-tuned for life. Since the tuning for life is balanced on such a sharp knife-edge, intellectual curiosity leads us to legitimately suspect far more at work than an uninteresting axiomatic requirement for our existence. 

If we hold that the constants of our universe were shaped by probabilistic processes — as multiverse explanations suggest — then it is incredibly unlikely that this specific universe, as opposed to some other among millions, would be fine-tuned.

And Now, the God Hypothesis?
Granting that fine-tuning is real, but philosophically rejecting the multiverse as a cure-all for naturalism and its woes, is Goff led to accept “the God hypothesis”? No. He instead idolizes the universe itself, imagined as a sort of fertile incubator for life, pregnant with fine-tuning and the potential for vivification.

The conventional scientific wisdom is that these numbers have remained fixed from the Big Bang onwards. If this is correct, then we face a choice. Either it’s an incredible fluke that our universe happened to have the right numbers. Or the numbers are as they are because nature is somehow driven or directed to develop complexity and life by some invisible, inbuilt principle.

An important point from information theory, however, is that no degree of fine-tuning of physical parameters, so that life is allowed to exist, would by its nature drive life to develop. The material of the physical universe is influenced by just four fundamental forces of nature, and aside from the weak force (involved in radioactive decay), these forces do nothing more than exert an indiscriminate push or pull. 

Ascribing sentience or cosmic purpose to forces or the particles on which they act is to step out of the realm of science into the realm of myth-making. The purpose we observe in the universe is incompatible with inanimate particles, but it is totally consistent with an ultimate cause whose attributes transcend the highest categories of human characteristics. However, there is a long history of humans who, faced with the idea of a creator, would prefer to say to a rock, “You gave me birth.”2


The main event: James Tour vs. Lee Cronin

 

It's complicated VIII

 

Monday, 27 November 2023

Spaceship earth's nearest star demystified.

 

Yet more on our undeniably designed bodies

 

Chance as blind tinkerer.

 Natural Selection as the Great Designer Substitute


Editor’s note: We are delighted to welcome the new and greatly expanded second edition of The Design Inference, by William Dembski and Winston Ewert. The following is excerpted from the Introduction.

Darwinian critics, however much they were willing to permit design inferences in other contexts, reflexively ruled them out as soon as they impacted biology or cosmology or anyplace where a non-natural designer might be implicated. They thereby gutted the design inference of any larger worldview significance, ensuring that it could never be applied to humanity’s really big and important questions.

Early in The Blind Watchmaker, Richard Dawkins stated that life is special because it exhibits a “quality” that is “specifiable in advance” and “highly unlikely to have been acquired by random chance alone.” All the elements of specified complexity are there in Dawkins’s characterization of life. Yet Dawkins, along with fellow Darwinians, did not see in specified complexity a marker of actual design but rather the outworking of natural selection, naturalism’s great designer substitute. For Dawkins, natural selection removes the small probabilities needed to make the design inference work. As he remarks, the “belief, that Darwinian evolution is ‘random,’ is not merely false. It is the exact opposite of the truth. Chance is a minor ingredient in the Darwinian recipe, but the most important ingredient is cumulative [i.e., natural] selection, which is quintessentially nonrandom.” Nonrandom here means, in particular, not all that improbable.

Two Conditions for a Design Inference

For a design inference to properly infer design, two conditions must be met:

1. an observed outcome matches an independently identifiable pattern, or what we call a specification (what Dawkins means by “specifiable in advance”); and
2. the event corresponding to that pattern has small probability (think of the pattern as a target and an arrow landing anywhere in it as the corresponding event).

With these conditions satisfied, the design inference ascribes such an observed outcome to design. Dawkins finds no fault with this form of reasoning provided the probabilities are indeed small. He even admits that scientific theories are only “allowed to get away with” so much “sheer unadulterated miraculous luck” but no more. Dawkins is here expressing the widespread intuition that certain events are within the reach of chance but that others are not. He’s right that people widely embrace this intuition, and he’s right that this intuition applies to science.
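The two conditions lend themselves to a toy decision rule. The sketch below is illustrative only; the function name is ours, and the default bound echoes Dembski’s well-known 1-in-10^150 universal probability bound rather than anything derived here:

```python
def warrants_design_inference(matches_specification, probability,
                              chance_bound=1e-150):
    """Toy rule: infer design only when an outcome both matches an
    independently identifiable pattern (a specification) and is too
    improbable to be credited to chance."""
    return matches_specification and probability < chance_bound

# Both conditions must hold: a specified but probable event, or an
# improbable but unspecified one, does not warrant the inference.
print(warrants_design_inference(True, 1e-200))   # True
print(warrants_design_inference(True, 0.5))      # False
print(warrants_design_inference(False, 1e-200))  # False
```

The disagreement with Dawkins, on this framing, is not over the rule itself but over whether the probabilities in biology are ever actually small.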

Given his view that scientific theorizing can only permit a limited amount of luck (a view ID proponents share), Dawkins would be forced to concede that if randomness were operating in the evolution of life, the resulting probabilities would be small, and a design inference would be warranted. As evidence that Dawkins does indeed make this concession, consider the way he commends William Paley’s design argument. In a remarkable moment of candor, Dawkins writes, “I could not imagine being an atheist at any time before 1859, when Darwin’s Origin of Species was published… [A]lthough atheism might have been logically tenable before Darwin, Darwin made it possible to be an intellectually fulfilled atheist.” According to Dawkins, but for Darwin, we would be stuck with Paley and compelled to be theists.

Breaking Intelligent Design

Darwin, in positing natural selection as the driving force behind evolution, was thus seen as breaking the power of classical design arguments. Natural selection, with its ability to heap up small incremental improvements, would allow evolution to proceed gradually, baby step by baby step, overcoming all evolutionary obstacles. In proceeding by baby steps, Darwinian evolution is supposed to mitigate the vast improbabilities that might otherwise constitute insuperable obstacles to life’s evolution, substituting at each step probabilities that are eminently manageable (not too small).

Darwinian processes, by overcoming probabilistic hurdles in this way, are thus said to banish design inferences from biology. The actual small probabilities needed for a valid design inference, according to Dawkins and fellow Darwinian biologists, thus never arise. Indeed, that was Dawkins’s whole point in following up The Blind Watchmaker with Climbing Mount Improbable. Mount Improbable only seems improbable if you have to scale it in one giant leap, but if you can find a gradual winding path to the top (baby step by baby step), getting there is quite probable.

Dawkins never gets beyond such a broad-brush description of how vast improbabilities that might otherwise dog evolution can be mitigated. As it is, there are plenty of probabilistic situations in which each step is reasonably probable but the coordination of all these reasonably probable events contributes to an outcome that is highly improbable. Flip a coin a hundred times, and at each flip the coin is reasonably likely to land on heads. But getting a hundred heads in a row is highly improbable, and we should not expect it to happen by chance. Dawkins doesn’t just need reasonably sized probabilities at each step, but a kind of coordination or ratcheting that locks in prior benefits and keeps striving for and accumulating future benefits. Showing that natural selection possesses this power universally goes well beyond what he, or any other Darwinian biologist, ever established probabilistically.
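The coin-flip arithmetic is easy to verify. A minimal check, assuming a fair coin:

```python
# Each individual flip is reasonably likely to land heads.
p_single_heads = 0.5

# But the coordinated outcome of 100 heads in a row is the product
# of all 100 per-flip probabilities.
p_hundred_heads = p_single_heads ** 100
print(p_hundred_heads)  # about 7.9e-31
```

Each step is probable; the coordinated whole is not, which is exactly the gap a gradualist account must bridge.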

Confusing Apparent with Actual

In short, Darwinian critics of the design inference conflate apparent specified complexity with actual specified complexity. Darwinists like Dawkins grant that actual specified complexity warrants a design inference. But they view the Darwinian mechanism of natural selection as a probability amplifier, making otherwise improbable events probable and thus rendering them no longer complex. As a consequence, it does not matter that specified complexity, as a matter of statistical logic, warrants a design inference because, according to Darwinists, life does not actually exhibit specified complexity. Darwinists will, to be sure, claim that the Darwinian mechanism creates specified complexity. But what they really mean is that the Darwinian mechanism causes life to exhibit the illusion of specified complexity. Living systems only seem to be highly improbable, but they’re not once you understand how Darwinian evolution brings them about. In this way, the majority of evolutionary biologists, insofar as they understand the design inference at all, rationalize it away.

Since the publication of the first edition of this book, the debate over the design inference and its applicability to evolution has centered on whether such gradual winding paths exist and how their existence or non-existence would affect the probabilities by which Darwinian processes could originate living forms. Design theorists have identified a variety of biological systems that resist Darwinian explanations and argued that the probability of such systems evolving by Darwinian means is vanishingly small. They thus conclude that these systems are effectively unevolvable by Darwinian means and that their existence warrants a design inference. In this book, we recap that debate and contend that intelligent design has the stronger argument.

The amazing Randi vs. The paranormal

 

We reject the militant stupidity of the trinity

  Psalms ch.83:18KJV"That men may know that thou, whose name alone is JEHOVAH, art the most high over all the earth."


Thus no person or thing not numerically identical to JEHOVAH can be the most high God, inasmuch as JEHOVAH ALONE is the most high God; and no person or thing not numerically identical to the most high God can be JEHOVAH, inasmuch as the most high God is JEHOVAH.

Superlative as defined by Webster's: surpassing all (not most) others: SUPREME

Thus by definition no one can be coequal to the most high God.

Thus, inasmuch as the Jesus of Christendom is by common consent not superlative, he simply cannot be the JEHOVAH of scripture.

The equivalence principle demystified

 

How about we just stick to the plain reading of the text II

John Ch.14:28NKJV"You have heard Me say to you, ‘I am going away and coming back to you.’ If you loved Me, you would rejoice because I said, ‘I am going to the Father,’ for My Father is GREATER than I."

Hebrews Ch.6:13NKJV"For when God made a promise to Abraham, because He could swear by NO ONE greater, He swore by Himself,"

John Ch.17:3KJV"And this is life eternal, that they might know THEE (second person singular) the ONLY true God, AND Jesus Christ, whom thou hast sent."

John Ch.8:50KJV"And I seek NOT mine own glory: there is one that seeketh and judgeth."

John Ch.12:28NIV"FATHER, glorify YOUR(Not our) name!” Then a voice came from heaven, “I have glorified it, and will glorify it again.”"

Matthew Ch.20:23NIV"Jesus said to them, “You will indeed drink from my cup, but to sit at my right or left is not for me to grant. These places belong to those for whom they have been prepared by my FATHER.”"

Luke Ch.18:19NIV"“Why do you call me good?” Jesus answered. “No one is good—except God alone."





It's complicated VII

 

Sunday, 26 November 2023

Officially Junk no more?

 Newly Published Paper in BioEssays Recognizes Kuhnian “Paradigm Shift” Against Junk DNA


In September, I wrote about prolific functions discovered for short tandem repeats (STRs), formerly considered a type of “junk DNA.” Now a newly published paper in BioEssays has strongly rebuffed the idea of junk DNA — using the language of Kuhnian paradigm shifts. Before we go any further, let’s review just what a Kuhnian paradigm shift is.

The phrase comes from the work of a famous Harvard University historian and philosopher of science, Thomas Kuhn. In his influential book The Structure of Scientific Revolutions, he documented how new ideas in science typically take hold through what are called “paradigm shifts,” where the leading framework within a field (the “paradigm”) starts to accrue evidential problems (goes into “crisis”) until it finally gives way to a new idea that challenges the status quo. Kuhn further showed that most scientists spend most of their time doing “normal science” — basically solving scientific puzzles within the framework of the dominant paradigm. He observed that scientists of the old guard are “often intolerant” of the “new theories” proposed by those who challenge the reigning paradigm. A new theory “emerges first in the mind of one or a few individuals” but then it spreads because the field faces “crisis-provoking problems,” especially among scientists who are “so young or so new to the crisis-ridden field that practice has committed them less deeply than most of their contemporaries to the world view and rules determined by the old paradigm.”

A Junk DNA Paradigm Shift

This brings us to the article recently published in BioEssays, written by John Mattick, an Australian molecular biologist and Professor of RNA Biology at the University of New South Wales, Sydney. I have no evidence that Mattick has any affinities with intelligent design — but he’s a prime example of a bold scientist who has embraced new theories that challenge the reigning paradigm. Mattick has been indefatigable in following the evidence where it leads regarding function for “junk DNA.” In part because of his work, biology today has experienced a paradigm shift away from the concept of junk DNA. In fact, Mattick’s new BioEssays article, “A Kuhnian revolution in molecular biology: Most genes in complex organisms express regulatory RNAs,” frames the revolution in thinking over junk DNA precisely in “Kuhnian paradigm shift” terms. The paper has a nice video abstract, but here’s what it says in written form: 

Thomas Kuhn described the progress of science as comprising occasional paradigm shifts separated by interludes of ‘normal science’. The paradigm that has held sway since the inception of molecular biology is that genes (mainly) encode proteins. In parallel, theoreticians posited that mutation is random, inferred that most of the genome in complex organisms is non-functional, and asserted that somatic information is not communicated to the germline. However, many anomalies appeared, particularly in plants and animals: the strange genetic phenomena of paramutation and transvection; introns; repetitive sequences; a complex epigenome; lack of scaling of (protein-coding) genes and increase in ‘noncoding’ sequences with developmental complexity; genetic loci termed ‘enhancers’ that control spatiotemporal gene expression patterns during development; and a plethora of ‘intergenic’, overlapping, antisense and intronic transcripts. These observations suggest that the original conception of genetic information was deficient and that most genes in complex organisms specify regulatory RNAs, some of which convey intergenerational information.

Mattick describes the previously reigning “junk DNA” paradigm in biology as having come from “prevailing assumptions.” The assumptions hold that “‘genes’ encode proteins, that genetic information is transacted and regulated by proteins, and that there is no heritable communication between somatic and germ cells.” This view that genes encode proteins is a key part of the “central dogma” of biology. Of course, no one denies that genes encode proteins — Mattick’s point is that they can do much more than this. They can also encode RNAs and the evidence shows that many non-protein-coding sequences of DNA actually encode RNAs that perform many types of vital functions in the cell. 

Junk DNA and Evolution

So the central dogma of molecular biology is part of what is perpetuating the idea that if a stretch of DNA doesn’t encode a protein then it isn’t doing anything and is “junk.” But there’s another major driver of the failing junk DNA paradigm in biology — and it stems directly from evolutionary thinking. Mattick explains:  
          [T]heoretical biologists were integrating Mendelian genetics with Darwinian evolution, leading in 1942 to the so-called Modern Synthesis, which made two primary claims: mutations are random and somatic mutations are not inherited. … In 1968 Kimura proposed the neutral theory of molecular evolution, which posited that “an appreciable fraction” of the genome was evolving independently of natural selection. In 1969, Nei concluded that, given the “high probability of accumulating … lethal mutations in duplicated genomes … it is to be expected that higher organisms carry a considerable number of nonfunctional genes (nonsense DNA) in their genome”, leading Ohno to promote the concept of “junk DNA”, also arguing that “in order not to be burdened with an unbearable mutation load, the necessary increase in the number of regulatory systems had to be compensated by simplification of each regulatory system”. 

Against this backdrop — permeated with evolutionary thinking about the origin of the genome — the idea of junk DNA flourished and spread throughout the biology community. 



Jeremiah Chapter 14 American Standard Version.

1 The word of JEHOVAH that came to Jeremiah concerning the drought.

2 Judah mourneth, and the gates thereof languish, they sit in black upon the ground; and the cry of Jerusalem is gone up.

3 And their nobles send their little ones to the waters: they come to the cisterns, and find no water; they return with their vessels empty; they are put to shame and confounded, and cover their heads.

4 Because of the ground which is cracked, for that no rain hath been in the land, the plowmen are put to shame, they cover their heads.

5 Yea, the hind also in the field calveth, and forsaketh her young, because there is no grass.

6 And the wild asses stand on the bare heights, they pant for air like jackals; their eyes fail, because there is no herbage.

7 Though our iniquities testify against us, work thou for thy name's sake, O JEHOVAH; for our backslidings are many; we have sinned against thee.

8 O thou hope of Israel, the Saviour thereof in the time of trouble, why shouldest thou be as a sojourner in the land, and as a wayfaring man that turneth aside to tarry for a night?

9 Why shouldest thou be as a man affrighted, as a mighty man that cannot save? yet thou, O JEHOVAH, art in the midst of us, and we are called by thy name; leave us not.

10 Thus saith JEHOVAH unto this people, Even so have they loved to wander; they have not refrained their feet: therefore JEHOVAH doth not accept them; now will he remember their iniquity, and visit their sins. 11 And JEHOVAH said unto me, Pray not for this people for their good. 12 When they fast, I will not hear their cry; and when they offer burnt-offering and meal-offering, I will not accept them; but I will consume them by the sword, and by the famine, and by the pestilence.

13 Then said I, Ah, Lord JEHOVAH! behold, the prophets say unto them, Ye shall not see the sword, neither shall ye have famine; but I will give you assured peace in this place. 14 Then JEHOVAH said unto me, The prophets prophesy lies in my name; I sent them not, neither have I commanded them, neither spake I unto them: they prophesy unto you a lying vision, and divination, and a thing of nought, and the deceit of their own heart. 15 Therefore thus saith JEHOVAH concerning the prophets that prophesy in my name, and I sent them not, yet they say, Sword and famine shall not be in this land: By sword and famine shall those prophets be consumed. 16 And the people to whom they prophesy shall be cast out in the streets of Jerusalem because of the famine and the sword; and they shall have none to bury them - them, their wives, nor their sons, nor their daughters: for I will pour their wickedness upon them.

17 And thou shalt say this word unto them, Let mine eyes run down with tears night and day, and let them not cease; for the virgin daughter of my people is broken with a great breach, with a very grievous wound.

18 If I go forth into the field, then, behold, the slain with the sword! and if I enter into the city, then, behold, they that are sick with famine! for both the prophet and the priest go about in the land, and have no knowledge.

19 Hast thou utterly rejected Judah? hath thy soul loathed Zion? why hast thou smitten us, and there is no healing for us? We looked for peace, but no good came; and for a time of healing, and, behold, dismay!

20 We acknowledge, O JEHOVAH, our wickedness, and the iniquity of our fathers; for we have sinned against thee.

21 Do not abhor us, for thy name's sake; do not disgrace the throne of thy glory: remember, break not thy covenant with us.

22 Are there any among the vanities of the nations that can cause rain? or can the heavens give showers? art not thou he, O JEHOVAH our God? therefore we will wait for thee; for thou hast made all these things.

Finally a theory of everything?

 

Consciousness does not compute?

 

The red queen?