
Sunday, 15 December 2024

Where science actually leads re:origins.

 

Why the multiverse only multiplies questions but provides no answers re: fine-tuning

 Why the Multiverse Theory Can’t Explain Away Cosmic Fine-Tuning


Some argue that our universe's being fine-tuned for life is merely an accident. After all, there might be millions of flopped universes out there. There is no evidence for them, but, they say, we can't rule them out either.

Durham University philosophy professor Philip Goff explains at IAI.TV why he doesn’t think that the idea of a multiverse can explain away the obvious fine-tuning of our universe.

An Example of Fine-Tuning

First, he offers an example of that fine-tuning:

The claim is just that, for life to be possible, certain numbers in physics had to fall in a very narrow range. For example, if the force that powers the accelerating expansion of the universe had been a little stronger, everything would have shot apart so quickly that no two particles would have ever met. There would have been no stars, planets, or any kind of structural complexity. Whereas if that force had been significantly weaker, it would not have counteracted gravity, and so the entire universe would have collapsed back on itself a split second after the Big Bang. For there to be structural complexity, and therefore life, the strength of this force had to be — a bit like Goldilocks porridge — not too strong, and not too weak: just right. There are many numbers like this, which is what it means to say our universe is fine-tuned for life.

“The mistake at the heart of the multiverse,” December 3, 2024

Goff argues that the multiverse response to fine-tuning commits what is known as the Inverse Gambler's Fallacy:

Suppose you and I walk into a casino and the first person we see is someone winning big. I say, ‘Wow, there must be tens of thousands of people playing in the casino tonight!’ You say, ‘What makes you think that?’ I reply, ‘Well, if there are tens of thousands of people playing, it’s not so surprising that at least one person would win big, and that’s what we’ve just observed.’

“The mistake”
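Goff's point can be checked numerically. Below is a minimal Monte Carlo sketch (my illustration, not from his article) of why the inference from one observed winner to many players fails: adding players makes it likely that someone wins, but it does nothing to the probability that the one player we happen to be watching wins, and the latter is what we actually observed.

```python
import random

# Monte Carlo sketch of the inverse gambler's fallacy (my illustration).
P_WIN = 0.01     # assumed chance that a given player wins big on a given night
NIGHTS = 2_000   # simulated nights per scenario

def simulate(n_players: int) -> tuple[float, float]:
    """Return (how often the one player we watch wins, how often anyone wins)."""
    watched = anyone = 0
    for _ in range(NIGHTS):
        outcomes = [random.random() < P_WIN for _ in range(n_players)]
        watched += outcomes[0]   # the first player we happen to see
        anyone += any(outcomes)
    return watched / NIGHTS, anyone / NIGHTS

for n in (1, 10_000):
    w, a = simulate(n)
    print(f"{n:>6} players: watched player wins {w:.2%}, someone wins {a:.2%}")
```

Adding players sends the "someone wins" frequency toward 100 percent while leaving the "watched player" frequency untouched; analogously, positing more universes makes it likelier that some universe is fine-tuned, not that this one is.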

A Lucky Night

Of course, we have no evidence of that. We just see one person having a lucky night. Goff comments,

If we simply apply our standard way of understanding how evidence works, given by Bayes’ theorem, the fine-tuning of physics for life presents us with evidence for some form of goal-directedness towards life in the early universe. I suspect that it is deep-seated bias, a sense that this kind of hypothesis is not ‘proper science’, that is stopping most in the scientific community from following the evidence where it leads. Future historians will find it bizarre that we ignored for so long what is staring us in the face.

“The mistake”
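Goff's appeal to Bayes' theorem can be made explicit. In a standard formulation (my notation, not his), let F be the observation that our universe is fine-tuned and D the hypothesis of goal-directedness:

```latex
P(D \mid F) = \frac{P(F \mid D)\, P(D)}
                   {P(F \mid D)\, P(D) + P(F \mid \neg D)\, P(\neg D)}
```

Since fine-tuning is expected on goal-directedness but wildly improbable without it, P(F | D) vastly exceeds P(F | ¬D), so the posterior P(D | F) rises sharply for any non-negligible prior P(D). That is the sense in which, on Goff's view, the evidence "leads" somewhere.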

But many thoughtful people in science today would see the goal of science as explaining away both goal-directedness and thought. In this universe, they have their work cut out for them.

Thursday, 5 December 2024

The conserved optimality of vertebrate limbs vs. Darwin.

 

Primordial WiFi?

 Biological Information in Static Electricity


Insects and spiders know how to read the air when static electricity is present. Electrical charge, to them, is a source of biological information, says Daniel Robert in his Current Biology Primer on “Aerial Electroreception.”

This newfound sensory modality reveals a previously unrecognised source of information, a new informational ecological niche integral to diverse life histories and navigational abilities, which remarkably involves animals, plants and atmospheric electricity.

Arthropods live in an “electric ecology” where “electrostatics is everywhere, always, and all at once.” They come equipped with antennae and tiny hairs that are sensitive to the electrical environment. Sensing a charge, however, is only a part of the story. How do these organisms utilize the information? What does it tell them? How does it trigger a response?

Coulomb’s Law

In a brief review of electrostatics (as opposed to electrodynamics, which deals with moving charges and time-varying fields), Dr. Robert discusses electric fields and Coulomb’s Law — the principle that like charges repel and opposite charges attract. He notes that “electric fields are ubiquitous in the presence of matter.” He then links electrosensitive abiotic materials with electrosensitive biological materials.

In essence, electrostatics provides a framework for understanding how a static imbalance in charge distribution can ensue between materials, and how forces arise from it. Biological materials are evidently not exempt from such processes, and it is proposed here that electrostatics plays a discrete yet pervasive and significant role in the informational ecology of terrestrial organisms.

Those terrestrial organisms include plants. Flowering plants can fashion the electric ecology for pollinators, as I discussed in an article about floral electric fields that attract bumblebees. See also my article, “Bioelectricity Gives Biologists a Jolt.” Professor Robert’s article extends the concept of the electric ecology to wider dimensions.

While humans are only weakly sensitive to electrostatic charges in air, those charges are large for small organisms like arthropods. “It has become increasingly clear that many organisms tend to be electrically charged,” he says. In fact, “it is actually very difficult to find objects, biological or else, that are not charged.” A flying insect, therefore, can “feel” the electrical field in its environment with its sensory equipment and react if it has the mechanical equipment and brain software to know how to use the information.

Crucially, these ubiquitous electrical fields generate the Coulomb forces between charged objects that are measurable and putatively useful to organisms. Thus, do electric fields have the potential to be a source of information for animals and plants to organise their lives in space and time?

The Answer Is Yes

The influence of static charge in pollination is one demonstrable case — not only for bees, but for moths and hummingbirds as well. A flower, electrically grounded (the earth being a net source of electrons), attracts a bee that accumulates a positive charge flying through the air. The apex of the plant will be the most negatively charged part because of the atmospheric potential gradient (APG), which increases the electric potential by about 100 V for every meter above the ground. While the Coulomb attraction would be too weak to move the entire insect, its sensory hairs and antennae feel a tractor-beam-like pull toward the flower.
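The scale of these forces is easy to check. The following back-of-envelope sketch uses the 100 V/m potential gradient mentioned above, plus an illustrative bee charge and mass that I am assuming for the calculation (tens of picocoulombs is the order reported in pollination studies):

```python
# Back-of-envelope check (my numbers, not Robert's) that the Coulomb force
# is far too weak to move a whole bee, even though a microscopic sensory
# hair can still respond to it.

E_APG = 100.0    # V/m, the atmospheric potential gradient cited above
q_bee = 30e-12   # C, illustrative bee charge (tens of picocoulombs, assumed)
m_bee = 0.2e-3   # kg, rough bumblebee mass (assumed)
g = 9.81         # m/s^2, gravitational acceleration

f_electric = q_bee * E_APG   # force on charge q in field E: F = qE
f_weight = m_bee * g

print(f"Electrostatic force on the bee: {f_electric:.1e} N")
print(f"Bee's weight:                   {f_weight:.1e} N")
print(f"Ratio:                          {f_electric / f_weight:.1e}")
```

The electrostatic force comes out around a millionth of the bee's weight, consistent with the point that it cannot move the insect as a whole, while a sensory hair, being vastly lighter and more compliant, can still respond to it.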

Charges without sensors cannot use information in the electric ecology. But with its tiny hairs, antennae, and wings which “can act as charged dielectric surfaces,” the insect might be able to carry and store electrical information for communicating with other bees in the hive. This fascinating and only recently investigated phenomenon is probably true of most arthropods since all are equipped with similar sensors.

In effect, these structures are present in nearly all species of terrestrial arthropods. The notion thus arises that such processes of aerial electroreception may be widespread, though other mechanisms of detection cannot be excluded. It can be highlighted that these sensory hairs can be sensitive to air currents and sounds, in addition to electric fields. Here, it is expected that hairs, as long, thin, sharp and protruding structures, will tend to accumulate charge and engage in electrostatic interactions. Hair canopies constitute a distributed array of sensors, sometimes covering the entire arthropod body. Theoretical work shows that dual acoustic and electric detection of hair arrays can extract rich information, sensing the position and distance of external charges.

Other Examples of Electroreception 

Professor Robert gives other examples of electroreception. A caterpillar might sense the approach of a wasp. “Reciprocally, the capacity of wasps to detect caterpillars electrostatically should be considered,” he writes, “which also raises the enticing possibilities of electric crypsis, masquerade, and/or aposematism.” Spiders that use ballooning with silk threads to travel long distances might be utilizing Coulomb forces to lift themselves up through the APG. And unfortunately for large mammals, their fur can accumulate thousands of volts, potentially helping parasites like ticks ride the electrostatic tractor beam to their skin.

These are just a few of the research possibilities in the new field of aerial electroreception. Professor Robert’s article is intriguing, but sadly, he attributes causal powers to evolution, committing the fallacy of treating evolution as if it searches for phenomena to exploit. The maxim “opportunity knocks” works for humans with foresight, but mutations and natural selection couldn’t care less whether static electricity is in the air or not. Without operational sensors and instincts built into an organism from the start, nothing would happen.


Tuesday, 26 November 2024

It looks like technology because it is technology?

 The Eukaryotic Cell Cycle: An Irreducibly Complex System


I have previously published several articles at Evolution News on the incredible design, and exquisite engineering, of the eukaryotic cell division cycle (see this recent article for links to previous essays on this subject). I also recently published a paper, in the journal BIO-Complexity, in which I documented significant obstacles to the origins of the eukaryotic cell cycle by evolutionary processes (available for free here).1 Here, I will describe several aspects of the cell cycle that render it irreducibly complex, which are also discussed in my paper.

Condensins

Condensins are protein complexes that play a crucial role in the organization and segregation of chromosomes during cell division. They are highly conserved across eukaryotes. Condensin I is active during late prophase and contributes to the structural integrity of chromosomes following the break-down of the nuclear envelope. Condensin II functions earlier in prophase and is involved in the initial stages of chromosome condensation in the nucleus.



Image source: Wikimedia Commons

Condensin molecules are composed of five subunits (as shown in the figure), including the SMC (Structural Maintenance of Chromosomes) proteins SMC2 and SMC4, which possess ATPase activity. SMC proteins possess coiled-coil domains (long, flexible arms that fold back on themselves, creating a V-shaped structure), a hinge domain that facilitates the dimerization of the two SMC proteins, and head domains containing ATP-binding and ATPase sites that energize the activities of condensins. In addition to the SMC subunits, there are three non-SMC subunits, which bind specific regions of DNA and assist in the regulation of condensin activity.

Condensin complexes load onto chromatin in a stepwise manner, directed by non-SMC subunits. The SMC subunits create loops in DNA, utilizing their ATPase activity. These loops are stabilized and condensed into mitotic chromosomes.

The condensin proteins are crucial for the process of cell division. In their absence, the consequence would be chromosomal disorganization, as well as great difficulty in achieving proper segregation during mitosis.



Kinetochores

Image credit: CNX OpenStax, CC BY 4.0 https://creativecommons.org/licenses/by/4.0, via Wikimedia Commons.

A complex of proteins, known as the kinetochore, assembles around the centromere of each chromosome (as shown in the figure), and is critical to the process of mitotic cell division. Each kinetochore serves as an attachment site for the spindle microtubules, which radiate from the centrosomes at the cell’s poles. Kinetochores assist with the alignment of chromosomes at the equatorial plane of the cell during metaphase, ensuring equal distribution of genetic material. Kinetochores also sense tension generated by microtubule pulling, thereby ensuring proper attachment. If improper attachments occur (e.g. if the kinetochores of both sister chromatids are attached to the same pole), these errors can be corrected by the kinetochore-associated machinery.

What would be the consequence if there were no kinetochores? This would result in the improper attachment of the chromosomes to the spindle apparatus, and the genetic material would be unequally distributed to the daughter cells. Indeed, so critical are the kinetochores to the process of cell division that they are found ubiquitously throughout all known eukaryotic organisms.

Separase and the Anaphase Promoting Complex


Image source: Wikimedia Commons.

Progression from metaphase to anaphase is mediated by the anaphase promoting complex or cyclosome (APC/C), an E3 ubiquitin ligase. When bound to its coactivator, Cdc20, the APC/C functions to ubiquitylate securin (a protein that prevents the cleavage of cohesin by the enzyme separase). Ubiquitylation of securin targets it for destruction by the cell’s molecular shredder, the proteasome. This liberates the enzyme separase to cleave the cohesin ring that tethers the sister chromatids together, thereby promoting sister chromatid separation.

In the absence of separase, the sister chromatids would fail to separate, and the cell would be rendered unable to segregate its chromosomes at anaphase. Indeed, experimental knockout studies have shown that deleting separase results in embryonic lethality.2,3 Cell cycle progression would also be halted in the absence of the APC/C, inhibiting the progression from metaphase to anaphase. Indeed, experimental studies knocking out APC2 (a core APC/C subunit) in mice, for example, resulted in lethal bone marrow failure within only seven days.4

Aurora Kinases

Aurora kinases are also crucial to proper spindle formation and chromosome segregation. Aurora kinase A phosphorylates proteins involved in microtubule organization and facilitates the accurate attachment of microtubules to kinetochores. Indeed, “Aurora A null mice die early during embryonic development during the 16-cell stage. These Aurora A null embryos have defects in mitosis, particularly in spindle assembly, supporting critical functions of Aurora A during mitotic transitions.”5 This indicates that Aurora kinase A is among the components that are essential for successful cell division.

Microtubules

I have previously described the critical role of microtubules in cell division. Microtubules radiate from centrosomes and anchor to the kinetochore complex, assembled around the centromere of each chromosome. During metaphase, the chromosomes are aligned along the equatorial plane of the cell, bound to microtubules at the kinetochore. In anaphase, the sister chromatids are pulled apart by the microtubules, driven by poleward spindle forces. The microtubules are, therefore, essential for segregating the sister chromatids into the two daughter cells.

In the absence of microtubules, mitotic spindle assembly would be severely impaired, inhibiting chromosome alignment and segregation. Indeed, in experimental studies, mouse embryos deficient in γ-tubulin exhibit a mitotic arrest that halts development at the morula/blastocyst stage.6

The Contractile Ring


Image credit: David O Morgan, via Wikimedia Commons.

The contractile ring is also critical to the process of cytokinesis, the final stage of mitosis where the cell physically divides into two daughter cells. It is principally composed of actin filaments and myosin II motor proteins, together with other regulatory proteins such as formins, RhoA, and septins. These components form a dynamic, belt-like structure beneath the membrane at the equator of the dividing cell. The contractile ring produces the force that is needed for the ingression of the cleavage furrow. Myosin II proteins interact with actin filaments in the ring to generate this contractile force. This process is energized by hydrolysis of ATP. As the ring tightens, the plasma membrane is pinched inward, ultimately dividing the cytoplasm. The absence of the contractile ring would result in a failure of the cell to divide, leading to binucleated cells as well as other abnormalities.

Motor Proteins


In a previous article at Evolution News, I described the role of motor proteins (kinesin and dynein) in the assembly and function of the mitotic spindle during eukaryotic cell division. I’d refer interested readers to that essay for a discussion of this astounding process. The absence of these motor proteins would severely compromise the transport and positioning of chromosomes, resulting in chromosomal misalignment during metaphase and difficulty in establishing a proper mitotic spindle. The consequence would be errors in chromosome segregation during anaphase.

Cdk and Cyclin Molecules

I have written previously about the role of cyclin-dependent kinases and cyclin molecules in cell cycle progression. I refer readers there for a review. The Cdk and cyclin molecules exhibit redundancy, meaning that they are not all individually necessary. For example, mouse knockouts of Cdk2, 3, 4, or 6 still retain viability.7,8,9,10,11,12,13 Furthermore, yeast cells possess only a single Cdk, specifically Cdk1.14 Interestingly, double knockouts involving combinations of Cdk2 and 4, or Cdk4 and 6, result in embryonic lethality, though a double knockout of Cdk2 and 6 does not.15,16 It appears, then, that the members of the pair Cdk2 and 4, and of the pair Cdk4 and 6, can substitute for one another.17 However, Cdk1 appears to be essential, and knocking it out arrests development at the blastocyst stage.18
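The knockout results just listed amount to a small set of rules, which can be captured compactly (this encoding is mine, purely as a summary of the citations above):

```python
# Compact restatement (my encoding) of the knockout results cited above:
# Cdk1 loss is lethal; so is the combined loss of Cdk2+Cdk4 or Cdk4+Cdk6;
# single knockouts of Cdk2, 3, 4, or 6, and the Cdk2+Cdk6 pair, are viable.

LETHAL_COMBOS = [{"Cdk1"}, {"Cdk2", "Cdk4"}, {"Cdk4", "Cdk6"}]

def viable(knocked_out: set[str]) -> bool:
    """A genotype is viable unless it contains a known lethal combination."""
    return not any(combo <= knocked_out for combo in LETHAL_COMBOS)

for ko in [{"Cdk2"}, {"Cdk1"}, {"Cdk2", "Cdk4"}, {"Cdk2", "Cdk6"}]:
    print(sorted(ko), "->", "viable" if viable(ko) else "lethal")
```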

Cdk molecules themselves are activated by the binding of cyclin molecules. Without those cyclins, the Cdks would be inactive, resulting in cell cycle arrest. Though there is redundancy here too (and thus not all cyclins are indispensable to successful division), the absence of cyclin B (which activates Cdk1 to drive progression into mitosis) would impair the transition from G2 to M phase. In other words, the cell could not enter mitosis. This is corroborated by experimental knockout studies of cyclin B in mouse embryos, leading to the arrest of the cell cycle in G2 after as few as two divisions.19,20

Checkpoints

I have written previously about the various cell cycle checkpoints — i.e., the G1 (restriction) checkpoint, G2 (DNA damage) checkpoint, and spindle assembly checkpoint (see my articles on these here, here and here). These are also essential for successful cell division. For instance, without the mitotic checkpoint complex, the cell’s ability to monitor spindle assembly would be abolished — drastically increasing the risk of cells proceeding through division with spindle defects, the result of which would be chromosome missegregation and aneuploidy. The absence of the G1 checkpoint would enable damaged DNA to enter S phase, which could lead to the propagation of mutations as well as genomic instability. The loss of the G2 checkpoint would allow cells with DNA damage to enter into mitosis, leading to the division of cells with unrepaired genetic material, as well as a greatly increased risk of chromosome aberrations. Without the DNA damage checkpoint in S phase, replication of damaged DNA would occur, resulting in the propagation of mutations and thus an elevated risk of genetic abnormalities in the daughter cells.
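Viewed as a control system, the checkpoints act as guards on phase transitions. Here is a deliberately simplified state-machine sketch of that logic (my abstraction of the description above, not a model from the literature):

```python
# Simplified state-machine sketch (my abstraction) of the checkpoint logic
# described above: each transition is gated by a checkpoint, and a failed
# check holds the cell in its current phase until the problem is resolved.

def advance(phase: str, dna_damaged: bool, spindle_ok: bool) -> str:
    """Return the next cell cycle phase, or the same phase if a checkpoint blocks."""
    if phase == "G1":
        return "S" if not dna_damaged else "G1"       # G1 (restriction) checkpoint
    if phase == "S":
        return "G2" if not dna_damaged else "S"       # S-phase DNA damage checkpoint
    if phase == "G2":
        return "M" if not dna_damaged else "G2"       # G2 (DNA damage) checkpoint
    if phase == "M":
        return "divided" if spindle_ok else "M"       # spindle assembly checkpoint
    return phase

phase = "G1"
for step in range(6):
    # damage present for the first two steps, then repaired
    phase = advance(phase, dna_damaged=(step < 2), spindle_ok=True)
    print(step, phase)
```

Deleting a checkpoint corresponds to making its guard always pass, which is precisely the failure mode described above: damaged or misattached chromosomes proceed through division anyway.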

Irreducibly Complex

As seen from the cursory discussion above, various components of the mitotic cell division apparatus are indispensable for the system to work. This makes the eukaryotic cell division cycle irreducibly complex, rendering it resistant to explanations in terms of blind evolutionary processes. Any system that achieves a complex higher-level objective by means of various well-matched interacting components requires foresight to come about. In a subsequent article, I will discuss how the challenge to evolutionary accounts of the origins of eukaryotic cell division extends much deeper than this.

Notes

McLatchie J (2024) Phylogenetic Challenges to the Evolutionary Origin of the Eukaryotic Cell Cycle. BIO-Complexity 2024 (4):1–19 doi:10.5048/BIO-C.2024.4.
Kumada K, Yao R, Kawaguchi T, Karasawa M, Hoshikawa Y, et al (2006) The selective continued linkage of centromeres from mitosis to interphase in the absence of mammalian separase. J Cell Biol. 172(6): 835-46. doi:10.1083/jcb.200511126
Wirth KG, Wutz G, Kudo NR, Desdouets C, Zetterberg A, et al (2006) Separase: a universal trigger for sister chromatid disjunction but not chromosome cycle progression. J Cell Biol. 172(6): 847-60. doi:10.1083/jcb.200506119
Wang J, Yin MZ, Zhao KW, Ke F, Jin WJ, et al (2017) APC/C is essential for hematopoiesis and impaired in aplastic anemia. Oncotarget. 8(38): 63360-63369. doi:10.18632/oncotarget.18808
Lu LY, Wood JL, Ye L, Minter-Dykhouse K, Saunders TL, Yu X, Chen J (2008) Aurora A is essential for early embryonic development and tumor suppression. J Biol Chem. 283(46): 31785-90. doi:10.1074/jbc.M805880200
Yuba-Kubo A, Kubo A, Hata M, Tsukita S (2005) Gene knockout analysis of two gamma-tubulin isoforms in mice. Dev Biol. 282(2): 361-73. doi:10.1016/j.ydbio.2005.03.031
Berthet C, Aleem E, Coppola V, Tessarollo L, Kaldis P (2003) Cdk2 knockout mice are viable. Curr Biol. 13: 1775–1785. doi:10.1016/j.cub.2003.09.024
Ortega S, et al. (2003) Cyclin-dependent kinase 2 is essential for meiosis but not for mitotic cell division in mice. Nat Genet.35: 25–31. doi:10.1038/ng1232
Ye X, Zhu C, Harper JW (2001) A premature-termination mutation in the Mus musculus cyclin-dependent kinase 3 gene. Proc Natl Acad Sci USA. 98: 1682–1686. doi:10.1073/pnas.98.4.1682
Rane SG, et al. (1999) Loss of Cdk4 expression causes insulin-deficient diabetes and Cdk4 activation results in β-islet cell hyperplasia. Nat Genet. 22: 44–52. doi:10.1038/8751
Tsutsui T, et al. (1999) Targeted disruption of CDK4 delays cell cycle entry with enhanced p27Kip1 activity. Mol Cell Biol. 19: 7011–7019. doi:10.1128/MCB.19.10.7011
Hu MG, et al. (2009) A requirement for cyclin-dependent kinase 6 in thymocyte development and tumorigenesis. Cancer Res. 69: 810–818. doi:10.1158/0008-5472.CAN-08-2473
Malumbres M, et al. (2004) Mammalian cells cycle without the D-type cyclin-dependent kinases Cdk4 and Cdk6. Cell. 118: 493–504. doi:10.1016/j.cell.2004.08.002
Enserink JM, Kolodner RD (2010) An overview of Cdk1-controlled targets and processes. Cell Div. 5: 11. doi:10.1186/1747-1028-5-11
Malumbres M, et al. (2004) Mammalian cells cycle without the D-type cyclin-dependent kinases Cdk4 and Cdk6. Cell. 118: 493–504. doi:10.1016/j.cell.2004.08.002
Berthet C, Kaldis P (2007) Cell-specific responses to loss of cyclin-dependent kinases. Oncogene 26: 4469–4477. doi:10.1038/sj.onc.1210243
Satyanarayana A, Kaldis P (2009) Mammalian cell-cycle regulation: Several Cdks, numerous cyclins and diverse compensatory mechanisms. Oncogene. 28: 2925–2939. doi:10.1038/onc.2009.170
Diril MK, Ratnacaram CK, Padmakumar VC, Du T, Wasser M, Coppola V, Tessarollo L, Kaldis P (2012) Cyclin-dependent kinase 1 (Cdk1) is essential for cell division and suppression of DNA re-replication but not for liver regeneration. Proc Natl Acad Sci U S A. 109(10): 3826-31. doi:10.1073/pnas.1115201109
Berthet C, et al. (2006) Combined loss of Cdk2 and Cdk4 results in embryonic lethality and Rb hypophosphorylation. Dev Cell. 10: 563–573. doi:10.1016/j.devcel.2006.03.004
Strauss B, Harrison A, Coelho PA, Yata K, Zernicka-Goetz M, Pines J (2018) Cyclin B1 is essential for mitosis in mouse embryos, and its nuclear export sets the time for mitosis. J Cell Biol. 217(1): 179-193. doi:10.1083/jcb.201612147

Sunday, 17 November 2024

On the designed intelligence of the fruit fly.

 Design, Engineering, Specified Complexity: Appreciating the Fruit Fly Brain


Groundbreaking new research has documented the complexity and design of the brains of fruit flies (Drosophila melanogaster). Many of the results were published in a series of papers in the journal Nature. The basis for the research is the completion of the entire wiring diagram (called a connectome) of the fruit fly brain, which consists of 140,000 neurons.1 In addition, it includes more than 50 million connections (chemical synapses).2 Keep in mind that, despite the number of neurons and connections, fruit fly brains are tiny, smaller than a poppy seed. Previously, researchers had mapped the brains of a few other organisms, including the roundworm C. elegans, whose brain consists of only 302 neurons.

Most of the work was conducted by a group of researchers called the FlyWire consortium. The completion of the project and ongoing research is expected to result in a revolution in neuroscience. Previously it was believed that brains with hundreds of thousands of neurons were too large to map and assess function in much detail. But the results are a first step toward being able to do so, and potentially toward mapping at least segments of larger brains (including the human brain, with more than 80 billion neurons and 100 trillion connections). The research has already revealed a number of important, and in some cases surprising, findings.

Neuron Types

The research has identified at least 8,453 neuronal cell types.3 A neuron cell type is a group that has similar morphology and connectivity. This compares with the worm C. elegans, which has 118 cell types.4 The research also identified different classes of neurons, depending upon their function. Examples include sensory neurons (labeled afferent) that send signals from sensory organs to the brain. Motor and endocrine neurons (labeled efferent) send signals from the brain to muscles and other organs.5

Previously, some theorized that brain neurons might be like “snowflakes,” that is, each one is unique. That would imply their development and connections are essentially a random process. However, the research confirms that is generally not the case. There is some evidence of randomness, as one analysis shows that, “Over 50% of the connectome graph is a snowflake. Of course, these non-reproducible edges [connections] are mostly weak.”6 The analysis does show that, “Neurons occasionally do something unexpected (take a different route or make an extra branch on one side of the brain). We hypothesize that such stochastic differences are unnoticed variability present in most brains…In conclusion, we have not collected a snowflake.”7 This means that the stronger connections are largely stereotyped and do not vary significantly at random. Conversely, the findings show convincingly that the brain’s structure is not a regular lattice either, as in a crystal.

Complexity

Fruit flies exhibit a number of complex behaviors, including flight control (hovering, rapid changes in direction), navigation, mating courtship using pheromones, and swarming. Therefore, it isn’t that surprising that their brains show complexity. The average fruit fly neuron connection consists of 12.6 synapses.8 Individual neurons typically have fewer than 10 connections, but some have more than 100, and a few even have 1,000.9 This means that there isn’t a uniform distribution of neurons or a uniform distribution of connections. The research has even been able to map the flow of information throughout the brain. The fruit fly brain consists of areas of specialized functions. These include visual processing, olfactory, auditory, mechanical sensors, and temperature sensors. A further indication of specialized functions is the report of one research project that analyzed 78 anatomically distinct “subnetworks” in the brain.10 This same analysis concluded, “The local structure of the brain displays a high degree of non-randomness, consistent with previous studies in C. elegans and in the mouse cortex.”11

The overall structure of the brain is consistent among fruit flies, based on the finding of “[a] high degree of stereotypy at every level; neuron counts are highly consistent between brains, as are connections above a certain weight.”12 This is consistent with previous research with different insect brains.13

Another finding from the research is that the fruit fly brain exhibits the characteristics of what is called a “small-world network,” where the “nodes are highly clustered and path lengths are short.”14 Other examples of small-world networks are power grids, train routes, and electronic circuits. The brain of C. elegans was the first small-world neural network to be identified. Characteristics of small-world networks include “enhanced signal-propagation, computational power, and synchronizability.”15 The key benefit for brain function is that it provides “highly effective global communication among neurons.”16
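The small-world signature is easy to demonstrate with standard graph tools. The sketch below (toy sizes, not the fly's 140,000 neurons) compares a Watts-Strogatz small-world graph with a random graph of the same size: clustering stays high while path lengths stay short.

```python
import networkx as nx

n, k, p = 1000, 10, 0.1                               # toy network, not the fly connectome
sw = nx.watts_strogatz_graph(n, k, p)                 # small-world: clustered + shortcuts
rnd = nx.gnm_random_graph(n, sw.number_of_edges())    # random graph, same nodes and edges

for name, g in [("small-world", sw), ("random", rnd)]:
    if not nx.is_connected(g):                        # guard: metrics need one component
        g = g.subgraph(max(nx.connected_components(g), key=len))
    print(f"{name:>12}: clustering = {nx.average_clustering(g):.3f}, "
          f"avg path length = {nx.average_shortest_path_length(g):.2f}")
```

The small-world graph keeps its clustering an order of magnitude above the random graph's while matching its short paths, the combination described above.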

Overall, the research shows that the fruit fly brain has a high degree of complexity, and more importantly, much of it is specified complexity. This includes the engineering design of the various specialized neural networks and subnetworks. Among the engineering design principles evident in aspects of the brain are optimization, efficiency, and coherence. And as complex as the research has already shown the brain to be, it is likely more complex still, since the electrical connections have yet to be mapped as fully as the chemical ones.

Saturday, 16 November 2024

Rallying to the logic of design.

 Postcard from Venice: First Pan-European Conference on Intelligent Design


Recently I had the great privilege and honor to attend a remarkable event in the beautiful and historic city of Venice, Italy. It was the first pan-European conference on intelligent design theory, organized by the Centro Italiano Intelligent Design (CIID), in collaboration with the foundation En Arche (Poland), BioCosmos (Norway), Centre for Intelligent Design (UK), Zentrum für BioKomplexität & NaturTeleologie (Austria), and Discovery Institute (USA). The conference was titled “Cosmos, Life, Intelligence, Information” and it was held at the prestigious and absolutely stunning venue of the Ateneo Veneto, which represents the oldest cultural institute still operative in Venice. The institute is dedicated to the spreading of science, education, and art and was officially founded in 1812, but originally dates back as far as 1458. It is situated in the historic center of Venice in a building from the early 1500s. The event was not advertised in advance and only included about 60 invited guests, to avoid any possible intervention by the Darwinist thought police, whose zealous activists had already prevented several such conferences at prestigious venues in the past.

The speakers came from all over Europe and America and addressed very different topics related to the question of intelligent design. After an introduction by the president of CIID, Carlo Alberto Cossano, the German physicist Professor Alfred Krabbe talked about “Fine-tuning in the universe,” which surprised me with some striking examples of fine-tuning in physics and astronomy that I had never heard of before. Professor Ferdinando Catalano elaborated on the strange relation between mathematics and physics in his talk “But does light ‘reflect’?”, and his Italian compatriot Professor Alessandro Giorgetti emphasized the extreme unlikelihood of the emergence of life from inanimate matter in his lecture about the “Origins of life and exobiology.”

Discontinuities in the Fossil Record

Next, I presented a talk about the “Scientific Challenges to Neo-Darwinism,” based on the discontinuities in the fossil record, the waiting time problem, the species pair challenge, and the incongruence of different lines of evidence in phylogenetics and molecular clock studies. Professor Steinar Thorvaldsen, an information scientist from Norway, talked about “Measuring the information in genes and DNA,” and Polish biologist Professor Stanisław Karpiński asked “Is the theory of evolution coherent or fragmentary?”, presenting fascinating new discoveries about communication and information processing in plants. British physician Dr. David Galloway introduced “The engineering of oxygen delivery in the newborn human” as another case of irreducibly complex systems. Last but not least, Dr. Casey Luskin from Discovery Institute gave an “Update on avenues of ID inspired research,” which showed the remarkable progress of intelligent design in the past years.

A Concluding Debate

The event concluded with a panel debate between theistic evolutionist Dr. Erkki Vesa Rope Kojonen (Finland) and ID proponent Casey Luskin about the compatibility of evolution and design. Both speakers are Christian theists, who agree that there is evidence for design in nature that cannot be sufficiently explained by blind forces of chance and necessity, but they differ in their views as to how and when the input of intelligent design happened. Rope Kojonen thinks that it was only at the very beginning of the universe, through a fine-tuning of the laws of nature and the initial conditions, while the development of life happened by mere Darwinian processes in this fine-tuned fitness landscape. On the other hand, Casey Luskin made a strong case for the necessity of ongoing activity of an intelligent designer during the history of life to explain complex adaptations and new proteins. While Rope Kojonen relied more on philosophical and theological arguments, Casey Luskin focused on the empirical scientific evidence and an inference to the best explanation, which in his and my humble opinion clearly favors intelligent design theory over theistic evolution. Nevertheless, it was very encouraging to see how such an exchange of different views can happen in a very respectful, charitable, and kind manner, very much unlike the aggressive attitude of many vocal ID critics on the Internet. After a discussion and Q&A session, the event ended with a wonderful dinner in an inspiring atmosphere of camaraderie and friendship.


All the talks were professionally recorded and will be made available on YouTube soon, and there are plans to publish English abstracts of the talks.

CIID should be congratulated for the excellent organization of this conference, which I hope will mark the beginning of more regular events like this in Europe to foster interdisciplinary exchange and advance the field of intelligent design research.

Thursday, 31 October 2024

On reverse engineering JEHOVAH'S tech

 Studying Biology with System Engineering Principles


In the IEEE Open Journal of Systems Engineering, I recently co-authored a paper with Dr. Gerald Fudge at Texas A&M on the intersection of biology and engineering. Our paper does two things: 1) It lays out a methodology based on systems engineering for biologists. 2) It illustrates the usefulness of the methodology with a case study of glycolysis. 

The project was inspired a couple of years back when I read Uri Alon’s An Introduction to Systems Biology, which made me realize that biologists could benefit from the same engineering approaches used to build iPhones. These approaches could lead to uncovering the intricate designs in life. 

As a biologist, I’ve often wondered what the best way is to integrate engineering ideas in biology research. While there are many methods, one way engineering can assist the everyday biologist is in providing a robust methodology for approaching the study of living systems. A great illustration is the paper, “Can a Biologist Fix a Radio?” The punchline is that a handyman can fix a radio, but a biologist probably can’t — and this has nothing to do with IQ but everything to do with methodology. (Lazebnik 2002)

Current practice in biology does not involve a formal methodology for reverse engineering a system. Instead, biologists are taught the scientific method, which is very useful for rigorously testing hypotheses, along with a reductionistic, bottom-up process of interrogation. Different from these is a methodology that helps one understand and interrogate a complex system. Having identified this problem, Dr. Fudge, a long-time engineer, and I teamed up to work on integrating the proven systems engineering methodology to enhance discovery in living organisms.

Proven in What Way?

I used the word “proven” because systems engineering has built amazing technology, from rockets to iPhones. It has a track record of being able to develop complex systems. The standard systems engineering process goes something like this. Engineers meet with stakeholders and are given a rough outline of requirements (verbal or written details about what the product should do). This is eventually formalized into a set of specific requirements and then often modeled using a systems engineering tool. More specific models are then developed, from which a variety of refinements result. Then construction begins. Construction of the smaller parts happens first, followed by the assembly of subsystems. Throughout this build phase, testing is ongoing, and all is compared with the list of requirements and the initial systems model. Eventually a completed product is produced, meeting the stakeholders’ expectations. Or that is the goal, anyway.

Dr. Fudge and I adapted this methodology for biology. We call it model-based reverse systems engineering (MBRSE). “Model-based,” because it utilizes a system model as a map to keep track of relationships between objects and processes. “Reverse,” because the goal of biology is to understand and predict how organisms function. “Systems,” because this approach utilizes requirements and modeling to tie components into a system-level design, illustrating how the whole is more than the sum of its parts.

To Start with Literature Mining

Our approach, as in biology, begins with observations via literature mining. However, these observations are guided by classic systems engineering questions. Those include: (1) What requirements is this system meeting? (2) What are its interfaces? (3) What are the associated derived requirements? (4) What predictions can we make, whether at the system, sub-system, or component level, based on these derived requirements? From observations, our methodology shifts quickly into a more traditional systems engineering approach, where we infer requirements from observations and build a system model (in our case we used OPCloud). Building a system model starts with qualitative conceptual modeling and can be followed by more specific computational modeling. Conceptual modeling, to my surprise, is highly accessible to biologists. It is more like creating a map than it is like quantitative modeling. Yet it serves as a critical foundation for quantitative modeling since it sets relationships between objects and processes through a formal language. This also allows for errors to be identified early. Once the system model and requirements are developed, which often identifies key knowledge gaps since it is a methodical process, one can make predictions, test, and then validate experimentally and update the model and requirements based on observed results. This is an iterative process where the goal is to develop a list of requirements and a systems model that accurately reflect the biological system or organism.
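For readers unfamiliar with conceptual system models: they record objects, processes, and requirements, and the typed links between them. Here is a toy flavor of such a model in plain Python rather than OPCloud's actual format (the entities and requirement wording are my placeholders):

```python
# Toy flavor of a conceptual system model (my minimal data structure, not
# OPCloud's actual format): objects, processes, and requirements are linked
# explicitly, so traceability can be answered by simple queries.

model = {
    "objects": ["glucose", "pyruvate", "ATP", "glycolytic enzymes"],
    "processes": {
        "glycolysis": {
            "consumes": ["glucose"],
            "yields": ["pyruvate", "ATP"],
            "agents": ["glycolytic enzymes"],
        },
    },
    "requirements": {
        "R1": {"text": "supply ATP for cellular work", "satisfied_by": ["glycolysis"]},
        "R2": {"text": "supply carbon intermediates for biomass", "satisfied_by": []},
    },
}

def unsatisfied(model: dict) -> list[str]:
    """Requirements with no linked process: the model's visible knowledge gaps."""
    return [rid for rid, r in model["requirements"].items() if not r["satisfied_by"]]

print("Knowledge gaps:", unsatisfied(model))   # -> ['R2']
```

Even at this cartoon level, an unsatisfied requirement surfaces as an explicit gap, which is how the methodical process identifies missing knowledge.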

A Case Study of Glycolysis

In our paper, to illustrate the utility of our approach, we use glycolysis as a case study. Glycolysis is reasonably well understood and is familiar to many non-biologists since most high school biology courses teach the basics of this crucial metabolic pathway.

Similarities and Differences in Glycolysis by Systems Engineering 

Before we talk about similarities and differences in glycolysis across different types of organisms, it’s important to define a term: topology. Topology refers to the overall metabolic sequence — i.e., the ordering of the pathway steps that lead from, say, glucose to ATP and the intermediates that are produced along this pathway. It has been noted for glycolysis that among different types of organisms there are both remarkable similarities (for example, most organisms use one of two topological patterns for catabolism of glucose, commonly the EMP or ED topology) and remarkable differences (while the topology is conserved, the DNA sequences of the enzymes used in the pathway are not). (Rivas, Becerra, and Lazcano 2018) The high degree of similarity for the topology of the pathway across different organisms led many to assume that the uniformity resulted from common ancestry, and also to expect a common ancestry pattern for the genetic sequences of the enzymes. But this hypothesis overlooked system requirement-based reasons for topological similarity. As we write in our paper:

Traditionally, uniformity has been attributed as an artifact of common descent, meaning uniformity resulted from a historical relationship between all living organisms and does not have functional importance. However, in systems engineering, uniformity at a low level in a system design is often an optimized solution to upper-level requirements. We therefore propose that the striking similarity in the topology and metabolites of glycolysis across organisms is driven by a requirement for compatibility between organism energy interfaces, aiming to maximize efficiency at the ecosystem level.

Fudge and Reeves 2024

Ecosystem requirements shape the design of organisms, which in turn influence the requirements of metabolic design, ultimately constraining the structure of lower subsystems like glycolysis. This is because higher-level system needs determine the architecture of the subsystems below them. For glycolysis, a need for ecosystem efficiency and optimization of energy catabolism is a hypothesis with increasing evidentiary support that best explains the uniformity of the glycolytic topology. First, ecosystem efficiency requires some level of biomass commonality to maximize thermodynamic efficiency in reusing complex molecules by minimizing the amount of required biomolecule break-down and rebuild. This also helps minimize waste buildup, as shared waste products simplify the maintenance of ecosystem homeostasis. Second, the glycolytic pathway is recognized as optimized for a number of key metabolic constraints, further supporting its uniformity across species.

Ebenhöh and Heinrich [40] showed that the glycolysis architecture with a preparatory phase followed by a payoff phase is highly efficient based on kinetic and thermodynamic analysis. Similarly, Court et al. [41] discovered that the payoff phase has a maximally efficient throughput rate. In 2010, Noor et al. [42] demonstrated that the 10-step glycolytic pathway is minimally complex, with all glycolytic intermediates essential either for building biomass or for ATP production. In fact, it turns out that glycolysis is Pareto-optimized to maximize efficiency while serving multiple, often competing, purposes. Ng et al. [43] published their analysis in 2019 by analyzing over 10,000 possible routes between glucose and pyruvate to show that the two primary glycolysis variant pathways are Pareto-optimized to balance ATP production against biomass production while simultaneously minimizing protein synthesis cost.

Fudge and Reeves 2024
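Pareto optimization, as invoked here, has a crisp meaning: a pathway variant is Pareto-optimal if no alternative is at least as good on every objective and strictly better on at least one. The toy filter below illustrates the idea (Ng et al. scored over 10,000 real candidate routes; these three rows and their numbers are invented for illustration):

```python
# Toy Pareto filter with invented numbers (Ng et al. scored real candidate
# routes; these three rows are mine, purely for illustration).
# Objectives, higher is better: ATP yield, biomass precursor yield,
# and negated protein synthesis cost.

variants = [
    ("EMP-like", 2.0, 0.8, -1.0),
    ("ED-like",  1.0, 1.0, -0.6),
    ("wasteful", 1.0, 0.5, -1.5),
]

def dominates(a, b) -> bool:
    """a dominates b if it is at least as good everywhere and better somewhere."""
    return (all(x >= y for x, y in zip(a[1:], b[1:]))
            and any(x > y for x, y in zip(a[1:], b[1:])))

pareto = [v for v in variants if not any(dominates(u, v) for u in variants)]
print([name for name, *_ in pareto])   # -> ['EMP-like', 'ED-like']
```

In this cartoon, the "wasteful" variant is dominated and drops out, while the two variants that trade ATP yield against biomass yield both survive, analogous to the coexistence of the EMP and ED topologies.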

In contrast, the differences in glycolytic enzyme or transporter sequences amongst organisms seem to be due to lower subsystem design requirements and constraints, which are expected to reflect more organism-specific differences. In our paper, we discuss the example of mammalian glucose transporters, which have 14 subtypes, only four of which are well characterized. (Thorens and Mueckler 2010) Of the four, each plays a unique role in system level glucose control within the mammalian system. Thus, differences in glucose transporters are explainable by their tissue-adapted roles. Similarly, differences between the glycolytic enzymes themselves are poorly correlated with ancestry and have led to complete dismissal of the previous assumption that the pathway had a single evolutionary origin. (Rivas, Becerra, and Lazcano 2018) Instead, evidence continues to accumulate that glycolytic enzyme differences between organisms play functional roles due to the unique subsystem environments in which they are placed.

The Warburg Effect and Cancer Research

Using our system engineering approach, we also generated a hypothesis for the Warburg effect, which is a well understood phenomenon in many cancer types. Briefly, the Warburg effect is the preferential use of glucose by cancer cells via upregulation (i.e., increased usage) of glycolysis even in the presence of oxygen. This is often thought to be a deleterious byproduct of cancer, but our paper proposes a new perspective. Our hypothesis is that the Warburg effect is a normal system response to local organism injury or other temporary situations that require rapid tissue growth, such as during certain early developmental stages. Cancer occurs when the signal to turn off rapid tissue growth fails. The downstream effect is the continued signal for upregulated glycolysis, hence the Warburg effect. From our paper:

Under certain (currently unknown) conditions, the feedback control loop for injury response can be broken, resulting in an under-controlled or completely uncontrolled response. In other words, we hypothesize a cellular level failure in the control system that upregulates cellular processes for division including glycolysis such that the rate of glycolysis is unconstrained at the cellular level. Note that all four proposed functions of the Warburg effect, plus its ability to support cellular metabolism if the oxygen supply is interrupted due to local loss of normal blood flow, are beneficial for tissue repair after an injury where 1) there might be reduced oxygen, 2) faster cell division and local ATP energy supply is needed, and 3) more biomass is required. A similar situation can occur during early organism development when tissue growth is more rapid than in the adult stage, and in which the blood supply is developing simultaneously.

Fudge and Reeves 2024

To our surprise, in our literature search we found little about the Warburg effect as a critical part of injury repair. An exception was Vander Heiden et al., who suggested that the increased cellular division rate associated with the Warburg effect can be beneficial in tissue repair as well as in immune responses. (Vander Heiden, Cantley, and Thompson 2009) We propose that this could be a very important area for investigation. Research that focuses on feedback mechanisms in the control system responsible for the rate of glycolysis upregulation should be able to verify or falsify our hypothesis.
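Since the hypothesis is a control-systems claim, it can be stated as a loop. The toy simulation below (entirely my own sketch, not a model from our paper) contrasts an intact feedback switch, where glycolysis returns to baseline as the injury heals, with a broken one, where the growth signal stays latched on:

```python
# Toy control-loop sketch of the hypothesis (my illustration, not a model
# from the paper): injury raises a growth signal that upregulates glycolysis;
# an intact feedback switch shuts the signal off as repair completes, while
# a broken switch leaves glycolysis upregulated indefinitely (Warburg-like).

def simulate(feedback_intact: bool, steps: int = 8) -> list[float]:
    damage, glycolysis = 1.0, []
    for _ in range(steps):
        signal = damage if feedback_intact else 1.0   # broken loop: signal latched on
        glycolysis.append(1.0 + 4.0 * signal)          # upregulation above baseline 1.0
        damage = max(0.0, damage - 0.25)               # tissue repair proceeds
    return glycolysis

print("intact:", simulate(True))    # returns to baseline as the injury heals
print("broken:", simulate(False))   # stays maximally upregulated: runaway glycolysis
```

The point of the sketch is only that "Warburg effect" and "broken off-switch" are the same curve once the repair signal should have ended.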

A Useful Design-Based Tool

Engineering is a design-driven field, born from the creativity of intelligent human agents. Many tools developed in the field have applications in biology. For example, the MBRSE approach overcomes a key challenge facing biology: many biological objects and processes are not linked to system-level requirements. Without these connections, a divide occurs between the structure of components and how they fit into the system’s function. On a personal note, one aspect of system modeling that I find particularly appealing is its use of formal relationships and structured language. Once you’re familiar with the tool, it becomes much easier to identify connections between subsystems or constraints, even when looking at a different system model. This offers a major advantage over the inconsistent, often free-form diagrams found in biology research papers, where each tends to differ from the next. Another benefit of systems modeling is that it organizes information from research papers in a structured, graphical manner. No matter how brilliant a researcher is, it’s impossible to keep track of information from thousands of papers. However, a systems model can do that. It’s remarkable that while these modeling tools are standard in engineering, they are largely absent from biological training, despite the clear benefit they offer in overcoming the inconsistencies of biological diagrams. 

Our reverse systems engineering approach is motivated by some key observations: 

Biological systems look as if they are designed; for example, Francis Crick cautioned biologists about using evolutionary ideas to guide research because biological systems look designed, even though he thought they evolved (Campana 2000). Even Richard Dawkins admitted in The God Delusion, “The illusion of design is so powerful that one has to keep reminding oneself that it is indeed an illusion.”
Biological systems have much in common with human engineered systems (Csete and Doyle 2002); and
Biological systems exhibit features such as modularity, robustness, and design re-use (Alon 2003) that are traditionally associated with good top-down engineering practices.
These observations suggest that, from a pragmatic perspective, the best approach to reverse engineering biological systems is to treat them as if they are the result of a top-down, requirements-driven systems engineering process.

It is good news, then, that design-based tools and hypotheses play an increasingly prominent role in biology, offering a clear, coherent path to understanding biological complexity. From this understanding, more than a few deeper philosophical questions arise.

References

Alon, U. 2003. “Biological Networks: The Tinkerer as an Engineer.” Science (New York, NY) 301 (5641): 1866-67.
Campana, Joey. 2000. “The Design Isomorph and Isomorphic Complexity.” Nature Reviews Molecular Cell Biology, 149-53.
Csete, Marie E., and John C. Doyle. 2002. “Reverse Engineering of Biological Complexity.” Science (New York, NY) 295 (5560): 1664-69.
Fudge, Gerald L., and Emily Brown Reeves. 2024. “A Model-Based Reverse System Engineering Methodology for Analyzing Complex Biological Systems with a Case Study in Glycolysis.” IEEE Open Journal of Systems Engineering 2:119–34.
Lazebnik, Yuri. 2002. “Can a Biologist Fix a Radio? — Or, What I Learned While Studying Apoptosis.” Cancer Cell 2 (3): 179–82.
Rivas, Mario, Arturo Becerra, and Antonio Lazcano. 2018. “On the Early Evolution of Catabolic Pathways: A Comparative Genomics Approach. I. The Cases of Glucose, Ribose, and the Nucleobases Catabolic Routes.” Journal of Molecular Evolution 86 (1): 27–46.
Thorens, Bernard, and Mike Mueckler. 2010. “Glucose Transporters in the 21st Century.” American Journal of Physiology. Endocrinology and Metabolism 298 (2): E141-45.
Vander Heiden, Matthew G., Lewis C. Cantley, and Craig B. Thompson. 2009. “Understanding the Warburg Effect: The Metabolic Requirements of Cell Proliferation.” Science 324 (5930): 1029-33.

On the nexus of art and information.

 

ID has always been mainstream

 Using AI to Discover Intelligent Design


Human senses are excellent design detectors, but sometimes they need a little help. In a recent case, AI tools were applied to aerial photographs of the Nazca plain in Peru. The algorithms, trained on known geoglyphs, were able to select hundreds of candidate sites with figures too faint for the human eye. Many of them, on closer inspection, turned out to indeed contain patterns on the ground indicative of purposeful manipulation by indigenous tribes that lived in the area long ago. 

Here is a case where humans used their intelligent design to create intelligently designed “machine intelligences” capable of detecting intelligent design. Even so, the scientists needed to use their innate design detection abilities to follow up on the AI results to validate the potential design detections. AI is a tool, not a thinker. As a tool, it offers new powers to archaeology: one of the examples of intelligent design in action in science.

The Nazca Pampa is designated a World Heritage Site by UNESCO because of its immense geoglyphs, averaging 90 m in length. The well-known ones, consisting of lines, geometric figures, and images of animals, were rediscovered in the early 20th century and have fascinated scientists and laypeople alike. UNESCO describes what makes them unique:

They are located in the desert plains of the basin river of Rio Grande de Nasca, the archaeological site covers an area of approximately 75,358.47 Ha where for nearly 2,000 uninterrupted years, the region’s ancient inhabitants drew on the arid ground a great variety of thousands of large scale zoomorphic and anthropomorphic figures and lines or sweeps with outstanding geometric precision, transforming the vast land into a highly symbolic, ritual and social cultural landscape that remains until today. They represent a remarkable manifestation of a common religion and social homogeneity that lasted a considerable period of time.

They are the most outstanding group of geoglyphs anywhere in the world and are unmatched in its extent, magnitude, quantity, size, diversity and ancient tradition to any similar work in the world. The concentration and juxtaposition of the lines, as well as their cultural continuity, demonstrate that this was an important and long-lasting activity, lasting approximately one thousand years.

Based on pottery fragments, the geoglyphs are dated to between at least 100 BC and possibly up to the 15th century. The spellings (Nasca vs. Nazca) appear to be interchangeable. Mysteries remain about the purpose of the geoglyphs, and various theories are debated. One thing is indisputable: they were designed by intelligent minds. The people made considerable effort to modify the landscape for whatever purposes drove them. But that’s OK; ID theory can detect design without knowing the identity of the designer(s) or why they did their work. ID’s job is done when the Design Filter has ruled out chance and natural law to conclude something is the product of a designing intelligence. Discerning the purposes of designs like these is left in the capable hands of anthropologists, historians, and archaeologists, who may find themselves puzzled by some of the discoveries, like the “knife-wielding killer whale” figure.
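For concreteness, the Design Filter's decision logic can be sketched as a short function (the criteria here are my stubs; Dembski's formal treatment defines "specified" and the probability bounds precisely):

```python
# Sketch of the Design Filter's decision logic (my stub criteria; Dembski's
# formal treatment defines "specified" and the probability bounds precisely).

def design_filter(lawlike: bool, probable: bool, specified: bool) -> str:
    """Attribute an event to law, then chance; infer design only as a last resort."""
    if lawlike:                      # accounted for by natural regularity
        return "law"
    if probable or not specified:    # plausibly chance, or no independent pattern
        return "chance"
    return "design"

# Faint ground markings that erosion cannot produce, that are wildly improbable
# by chance, and that match independent iconography (camelids, humanoids):
print(design_filter(lawlike=False, probable=False, specified=True))  # -> design
```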

The New AI-Directed Discoveries

New detections of Nazca geoglyphs have continued slowly through the years. A team of Japanese, European, and American researchers, Sakai et al., publishing in PNAS, boasts that AI has accelerated the pace of new discoveries:

The rate of discovery of new figurative Nazca geoglyphs has been historically on the order of 1.5/y (from 1940s to 2000s). It has accelerated due to the availability of remotely sensed high-resolution imagery to 18.7/y from 2004 to 2020. Our current work represents another 16-fold acceleration (303 new figurative geoglyphs during the 2022/23 season of field work) using big geospatial data technologies and data mining with the aid of AI. Thus, AI may be at the brink of ushering in a revolution in archaeological discoveries like the revolution aerial imaging has had on the field.

The Nazca geoglyphs can be classified as line-type (carved into the ground) or relief-type (made by aligning stones above ground). They can also be distinguished by subject matter and size. Sakai et al. surveyed the entire Nazca Pampa (629 km2), then subdivided aerial photographs with 10-cm resolution into grids. They trained their AI model on 406 relief-type glyphs and gave the AI some puzzles to solve:

To leverage the limited number of known relief-type geoglyphs, and to render the training robust, data augmentation is paramount. Hand-labeled outlines of known geoglyphs serve to pick 10 random crops from within each of the known geoglyphs. These are also randomly rotated, horizontally flipped, and color jittered. Similarly, 25 negative training images are randomly cropped from the area surrounding each known geoglyph. We set the ratio of positive to negative training images to 10:25 for a reasonable balance between precision and recall.

This method yielded 1,309 hotspots of likely geoglyphs, which the scientists classed as Rank I, II, or III from most to least likely. “Of the 303 newly discovered figurative geoglyphs,” the paper says, “178 were individually suggested by the AI and 125 were not individually AI-suggested.” It still required 2,640 labor hours of follow-up on foot and with drones to validate the AI selections. Nevertheless, this effort represented a quantum leap in design detection of glyphs with such low contrast they were barely visible to the unaided human eye.
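The augmentation recipe quoted above maps directly onto standard image-pipeline operations. Here is a sketch in torchvision (the paper names the operations but not its code; the crop size, rotation range, and jitter strengths below are my placeholder values):

```python
import torchvision.transforms as T

# Sketch of the augmentation recipe quoted above in torchvision. The paper
# names the operations but not its code; the crop size, rotation range, and
# jitter strengths below are my placeholder values.

augment = T.Compose([
    T.RandomCrop(224),                   # random crops within a labeled glyph outline
    T.RandomRotation(degrees=180),       # "randomly rotated"
    T.RandomHorizontalFlip(p=0.5),       # "horizontally flipped"
    T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),  # "color jittered"
    T.ToTensor(),
])

POS_PER_GLYPH, NEG_PER_GLYPH = 10, 25    # the paper's 10:25 positive:negative ratio
```

Each labeled glyph thus contributes 10 augmented positives and 25 surrounding negatives, the 10:25 ratio the authors chose to balance precision against recall.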

New Scientist included photos of some of the new geoglyphs outlined for clarity. The new ones tend to be smaller and located near trails rather than larger roads, leading the scientists to surmise that they were intended for viewing by local groups instead of for community-wide religious rituals. Reporter Jeremy Hsu wrote about the need for human intelligence to corroborate the selections made by AI:

The researchers followed up on the AI suggestions and discovered a total of 303 figurative geoglyphs during field surveys in 2022 and 2023. Of these figures, 178 geoglyphs were individually identified by the AI. Another 66 were not directly pinpointed, but the researchers found them within a group of geoglyphs the AI had highlighted.

“The AI-based analysis of remote sensing data is a major step forward, since a complete map of the geoglyphs of the Nazca region is still not available,” says Karsten Lambers at Leiden University in the Netherlands. But he also cautioned that “even this new, powerful technology is more likely to find the better visible geoglyphs — the low hanging fruits — than the more difficult ones that are likely still out there”.

The authors believe that many more geoglyphs remain to be discovered in the area. Now that design has been established, we may understandably wonder what the people had in mind when they made these figures:

Line-type geoglyphs predominantly depict wildlife-related motifs (e.g., wild animals and plants). Most relief-type geoglyphs (81.6%) depict human motifs or motifs of things modified by humans (33.8% humanoids, 32.9% decapitated heads, and 14.9% domesticated camelids). These do not appear in the line-type figurative geoglyphs at all. Decapitated heads are sometimes depicted alone, while humanoids are repeatedly depicted with decapitated heads and together with domesticated camelids. Examples of both are shown as Insets to Fig. 5. Wild animals, which dominate the line-type geoglyphs, represent only 6.9% (47 geoglyphs) of the relief-type geoglyphs. These include bird, cat, snake, monkey, fox, killer whale, and fish.

Again, though, figuring out the meaning of the designs is not ID’s job. ID is equally valid at detecting evil designs and good designs. Undoubtedly, future archaeologists might have trouble understanding 21st-century graffiti if they happened upon a destroyed U.S. city without written records or history. But thanks to the Design Filter, determining whether contemporary “art” was designed or not would be a straightforward project.