the bible,truth,God's kingdom,Jehovah God,New World,Jehovah's Witnesses,God's church,Christianity,apologetics,spirituality.
Friday, 8 April 2022
Addressing Darwinist just-so stories on eye evolution.
More Implausible Stories about Eye Evolution
Recently an email correspondent asked me about a clip from Neil deGrasse Tyson’s reboot of Cosmos where he claims that eyes could have evolved via unguided mutations. Even though the series is now eight years old, it’s still promoting implausible stories about eye evolution. Clearly, despite having been addressed by proponents of intelligent design many times over, this issue is not going away. Let’s revisit the question, as Tyson and others have handled it.
In the clip, Tyson claims that the eye is easily evolvable by natural selection and it all started when some “microscopic copying error” created a light-sensitive protein for a lucky bacterium. But there’s a problem: Creating a light-sensitive protein wouldn’t help the bacterium see anything. Why? Because seeing requires circuitry or some kind of a visual processing pathway to interpret the signal and trigger the appropriate response. That’s the problem with evolving vision — you can’t just have the photon collectors. You need the photon collectors, the visual processing system, and the response-triggering system. At the very least three systems are required for vision to give you a selective advantage. It would be prohibitively unlikely for such a set of complex coordinated systems to evolve by stepwise mutations and natural selection.
A “Masterpiece” of Complexity
Tyson calls the human eye a “masterpiece” of complexity, and claims it “poses no challenge to evolution by natural selection.” But do we really know this is true?
Darwinian evolution tends to work fine when one small change or mutation provides a selective advantage, or as Darwin put it, when an organ can evolve via “numerous, successive, slight modifications.” If a structure cannot evolve via “numerous, successive, slight modifications,” Darwin said, his theory “would absolutely break down.” Writing in The New Republic some years ago, evolutionist Jerry Coyne essentially concurred: “It is indeed true that natural selection cannot build any feature in which intermediate steps do not confer a net benefit on the organism.” So are there structures that would require multiple steps to provide an advantage, where intermediate steps might not confer a net benefit on the organism? If you listen to Tyson’s argument carefully, I think he let slip that there are.
Tyson says that “a microscopic copying error” gave a protein the ability to be sensitive to light. He doesn’t explain how that happened. Indeed, biologist Sean B. Carroll cautions us to “not be fooled” by the “simple construction and appearance” of supposedly simple light-sensitive eyes, since they “are built with and use many of the ingredients used in fancier eyes.” Tyson doesn’t worry about explaining how any of those complex ingredients arose at the biochemical level. What’s more interesting is what Tyson says next: “Another mutation caused it [a bacterium with the light-sensitive protein] to flee intense light.”
An Interesting Question
It’s nice to have a light-sensitive protein, but unless the sensitivity to light is linked to some behavioral response, then how would the sensitivity provide any advantage? Only once a behavioral response also evolved — say, to turn towards or away from the light — can the light-sensitive protein provide an advantage. So if a light-sensitive protein evolved, why did it persist until the behavioral response evolved as well? There’s no good answer to that question, because vision is fundamentally a multi-component, and thus a multi-mutation, feature. Multiple components — both visual apparatus and the encoded behavioral response — are necessary for vision to provide an advantage. It’s likely that these components would require many mutations. Thus, we have a trait where an intermediate stage — say, a light-sensitive protein all by itself — would not confer a net advantage on the organism. This is where Darwinian evolution tends to get stuck.
Tyson seemingly assumes those subsystems were in place, and claims that a multicell animal might then evolve a more complex eye in a stepwise fashion. He says the first step is that a “dimple” arises which provides a “tremendous advantage,” and that dimple then “deepens” to improve visual acuity. A pupil-type structure then evolves to sharpen the focus, but this results in less light being let in. Next, a lens evolves to provide “both brightness and sharp focus.” This is the standard account of eye evolution that I and others have critiqued before. Francis Collins and Karl Giberson, for example, have made a similar set of arguments.
Such accounts invoke the abrupt appearance of key features of advanced eyes including the lens, cornea, and iris. The presence of each of these features — fully formed and intact — would undoubtedly increase visual acuity. But where did the parts suddenly come from in the first place? As Scott Gilbert of Swarthmore College put it, such evolutionary accounts are “good at modelling the survival of the fittest, but not the arrival of the fittest.”
Hyper-Simplistic Accounts
As a further example of these hyper-simplistic accounts of eye evolution, Francisco Ayala in his book Darwin’s Gift to Science and Religion asserts, “Further steps — the deposition of pigment around the spot, configuration of cells into a cuplike shape, thickening of the epidermis leading to the development of a lens, development of muscles to move the eyes and nerves to transmit optical signals to the brain — gradually led to the highly developed eyes of vertebrates and cephalopods (octopuses and squids) and to the compound eyes of insects.” (p. 146)
Ayala’s explanation is vague and shows no appreciation for the biochemical complexity of these visual organs. Thus, regarding the configuration of cells into a cuplike shape, biologist Michael Behe asks (in responding to Richard Dawkins on the same point):
And where did the “little cup” come from? A ball of cells–from which the cup must be made–will tend to be rounded unless held in the correct shape by molecular supports. In fact, there are dozens of complex proteins involved in maintaining cell shape, and dozens more that control extracellular structure; in their absence, cells take on the shape of so many soap bubbles. Do these structures represent single-step mutations? Dawkins did not tell us how the apparently simple “cup” shape came to be.
Michael J. Behe, Darwin’s Black Box: The Biochemical Challenge to Evolution, p. 15 (Free Press, 1996)
An Integrated System
Likewise, mathematician and philosopher David Berlinski has assessed the alleged “intermediates” for the evolution of the eye. He observes that the transmission of data signals from the eye to a central nervous system for data processing, which can then output some behavioral response, comprises an integrated system that is not amenable to stepwise evolution:
Light strikes the eye in the form of photons, but the optic nerve conveys electrical impulses to the brain. Acting as a sophisticated transducer, the eye must mediate between two different physical signals. The retinal cells that figure in Dawkins’ account are connected to horizontal cells; these shuttle information laterally between photoreceptors in order to smooth the visual signal. Amacrine cells act to filter the signal. Bipolar cells convey visual information further to ganglion cells, which in turn conduct information to the optic nerve. The system gives every indication of being tightly integrated, its parts mutually dependent.
The very problem that Darwin’s theory was designed to evade now reappears. Like vibrations passing through a spider’s web, changes to any part of the eye, if they are to improve vision, must bring about changes throughout the optical system. Without a correlative increase in the size and complexity of the optic nerve, an increase in the number of photoreceptive membranes can have no effect. A change in the optic nerve must in turn induce corresponding neurological changes in the brain. If these changes come about simultaneously, it makes no sense to talk of a gradual ascent of Mount Improbable. If they do not come about simultaneously, it is not clear why they should come about at all.
The same problem reappears at the level of biochemistry. Dawkins has framed his discussion in terms of gross anatomy. Each anatomical change that he describes requires a number of coordinate biochemical steps. “[T]he anatomical steps and structures that Darwin thought were so simple,” the biochemist Mike Behe remarks in a provocative new book (Darwin’s Black Box), “actually involve staggeringly complicated biochemical processes.” A number of separate biochemical events are required simply to begin the process of curving a layer of proteins to form a lens. What initiates the sequence? How is it coordinated? And how controlled? On these absolutely fundamental matters, Dawkins has nothing whatsoever to say.
David Berlinski, “Keeping an Eye on Evolution: Richard Dawkins, a Relentless Darwinian Spear Carrier, Trips Over Mount Improbable,” Globe & Mail (November 2, 1996)
More or Less One Single Feature
In sum, standard accounts of eye evolution fail to explain the evolution of key eye features such as:
- The biochemical evolution of the fundamental ability to sense light
- The origin of the first “light-sensitive spot”
- The origin of neurological pathways to transmit the optical signal to a brain
- The origin of a behavioral response to allow the sensing of light to give some behavioral advantage to the organism
- The origin of the lens, cornea, and iris in vertebrates
- The origin of the compound eye in arthropods
At most, accounts of the evolution of the eye provide a stepwise explanation of “fine gradations” for the origin of more or less one single feature: the increased concavity of eye shape. That does not explain the origin of the eye. But from Neil Tyson and the others, you’d never know that.
Against the stigma of the skilled trades.
The Stigma of Choosing Trade School Over College
When college is held up as the one true path to success, parents—especially highly educated ones—might worry when their children opt for vocational school instead.
Toren Reesman knew from a young age that he and his brothers were expected to attend college and obtain a high-level degree. His father, a radiologist—a profession that requires 12 years of schooling—made clear what he wanted for his boys: “Keep your grades up, get into a good college, get a good degree,” as Reesman recalls it. Of the four Reesman children, one brother has followed this path so far, going to school for dentistry. Reesman attempted to meet this expectation, as well. He enrolled in college after graduating from high school. With his good grades, he got into West Virginia University—but he began his freshman year with dread. He had spent his summers in high school working for his pastor at a custom-cabinetry company. He looked forward each year to honing his woodworking skills, and took joy in creating beautiful things. School did not excite him in the same way. After his first year of college, he decided not to return.
He says pursuing custom woodworking as his lifelong trade was disappointing to his father, but Reesman stood firm in his decision, and became a cabinetmaker. He says his father is now proud and supportive, but breaking with family expectations in order to pursue his passion was a difficult choice for Reesman—one that many young people are facing in the changing job market.
Traditional-college enrollment in the United States has risen this century, from 13.2 million students in 2000 to 16.9 million students in 2016. This is an increase of 28 percent, according to the National Center for Education Statistics. Meanwhile, trade-school enrollment has also risen, from 9.6 million students in 1999 to 16 million in 2014. This resurgence came after a decline in vocational education in the 1980s and ’90s. That dip created a shortage of skilled workers and tradespeople.
Many jobs now require specialized training in technology that bachelor’s programs are usually too broad to address, leading to more “last mile”–type vocational-education programs after the completion of a degree. Programs such as Galvanize aim to teach specific software and coding skills; Always Hired offers a “tech-sales bootcamp” to graduates. The manufacturing, infrastructure, and transportation fields are all expected to grow in the coming years—and many of those jobs likely won’t require a four-year degree.
This shift in the job and education markets can leave parents feeling unsure about the career path their children choose to pursue. Lack of knowledge and misconceptions about the trades can lead parents to steer their kids away from these programs, when vocational training might be a surer path to a stable job.
Raised in a family of truck drivers, farmers, and office workers, Erin Funk was the first in her family to attend college, obtaining a master’s in education and going on to teach second grade for two decades. Her husband, Caleb, is a first-generation college graduate in his family, as well. He first went to trade school, graduating in 1997, and later decided to strengthen his résumé following the Great Recession. He began his bachelor’s degree in 2009, finishing in 2016. The Funks now live in Toledo, Ohio, and have a 16-year-old son, a senior in high school, who is already enrolled in vocational school for the 2019–20 school year. The idea that their son might not attend a traditional college worried Erin and Caleb at first. “Vocational schools where we grew up seemed to be reserved for people who weren’t making it in ‘real’ school, so we weren’t completely sure how we felt about our son attending one,” Erin says. Both Erin and Caleb worked hard to be the first in their families to obtain college degrees, and wanted the same opportunity for their three children. After touring the video-production-design program at Penta Career Center, though, they could see the draw for their son. Despite their initial misgivings, after learning more about the program and seeing how excited their son was about it, they’ve thrown their support behind his decision.
But not everyone in the Funks’ lives understands this decision. Erin says she ran into a friend recently, and “as we were catching up, I mentioned that my eldest had decided to go to the vocational-technical school in our city. Her first reaction was, ‘Oh, is he having problems at school?’ I am finding as I talk about this that there is an attitude out there that the only reason you would go to a vo-tech is if there’s some kind of problem at a traditional school.” The Funks’ son has a 3.95 GPA. He was simply more interested in the program at Penta Career Center. “He just doesn’t care what anyone thinks,” his mom says.
The Funks are not alone in their initial gut reaction to the idea of vocational and technical education. Negative attitudes and misconceptions persist even in the face of the positive statistical outlook for the job market for these middle-skill careers. “It is considered a second choice, second-class. We really need to change how people see vocational and technical education,” Patricia Hsieh, the president of a community college in the San Diego area, said in a speech at the 2017 conference for the American Association of Community Colleges. European nations prioritize vocational training for many students, with half of secondary students (the equivalent of U.S. high-school students) participating in vocational programs. In the United States, since the passage of the 1944 GI Bill, college has been pushed over vocational education. This college-for-all narrative has been emphasized for decades as the pathway to success and stability; parents might worry about the future of their children who choose a different path.
Read more: The world might be better off without college for everyone
Dennis Deslippe and Alison Kibler are both college professors at Franklin and Marshall College in Lancaster, Pennsylvania, so it was a mental shift for them when, after high school, their son John chose to attend the masonry program at Thaddeus Stevens College of Technology, a two-year accredited technical school. John was always interested in working with his hands, Deslippe and Kibler say—building, creating, and repairing, all things that his academic parents are not good at, by their own confession.
Deslippe explains, “One gap between us as professor parents and John’s experience is that we do not really understand how Thaddeus Stevens works in the same way that we understand a liberal-arts college or university. We don’t have much advice to give. Initially, we needed some clarity about what masonry exactly was. Does it include pouring concrete, for example?” (Since their son is studying brick masonry, his training will likely not include concrete work.) Deslippe’s grandfather was a painter, and Kibler’s grandfather was a woodworker, but three of their four parents were college grads. “It’s been a long-standing idea that the next generation goes to college and moves out of ‘working with your hands,’” Kibler muses. “Perhaps we are in an era where that formula of rising out of trades through education doesn’t make sense?”
College doesn’t make sense is the message that many trade schools and apprenticeship programs are using to entice new students. What specifically doesn’t make sense, they claim, is the amount of debt many young Americans take on to chase those coveted bachelor’s degrees. There is $1.5 trillion in student debt outstanding as of 2018, according to the Federal Reserve. Four in 10 adults under the age of 30 have student-loan debt, according to the Pew Research Center. Master’s and doctorate degrees often lead to even more debt. Earning potential does not always offset the cost of these loans, and only two-thirds of those with degrees think that the debt was worth it for the education they received. Vocational and technical education tends to cost significantly less than a traditional four-year degree.
This stability is appealing to Marsha Landis, who lives with her cabinetmaker husband and two children outside of Jackson Hole, Wyoming. Landis has a four-year degree from a liberal-arts college, and when she met her husband while living in Washington, D.C., she found his profession to be a refreshing change from the typical men she met in the Capitol Hill dating scene. “He could work with his hands, create,” she says. “He wasn’t pretentious and wrapped up in the idea of degrees. And he came to the marriage with no debt and a marketable skill, something that has benefited our family in huge ways.” She says that she has seen debt sink many of their friends, and that she would support their children if they wanted to pursue a trade like their father.
In the United States, college has been painted as the pathway to success for generations, and it can be, for many. Many people who graduate from college make more money than those who do not. But the rigidity of this narrative could lead parents and students alike to be shortsighted as they plan for their future careers. Yes, many college graduates make more money—but less than half of students finish the degrees they start. This number drops as low as 10 percent for students in poverty. The ever sought-after college-acceptance letter isn’t a guarantee of a stable future if students aren’t given the support they need to complete a degree. If students are exposed to the possibility of vocational training early on, that might help remove some of the stigma, and help students and parents alike see a variety of paths to a successful future.
On the right to conscientious objection to military service.
About conscientious objection to military service and human rights
The right to conscientious objection to military service is based on article 18 of the International Covenant on Civil and Political Rights, which guarantees the right to freedom of thought, conscience and religion or belief. While the Covenant does not explicitly refer to a right to conscientious objection, in its general comment No. 22 (1993) the Human Rights Committee stated that such a right could be derived from article 18, inasmuch as the obligation to use lethal force might seriously conflict with the freedom of conscience and the right to manifest one’s religion or belief.
The Human Rights Council, and previously the Commission on Human Rights, have also recognized the right of everyone to have conscientious objection to military service as a legitimate exercise of the right to freedom of thought, conscience and religion, as laid down in article 18 of the Universal Declaration of Human Rights and article 18 of the International Covenant on Civil and Political Rights (see their resolutions which were adopted without a vote in 1989, 1991, 1993, 1995, 1998, 2000, 2002, 2004, 2012, 2013 and 2017).
OHCHR’s work on conscientious objection to military service
OHCHR has a mandate to promote and protect the effective enjoyment by all of all civil, cultural, economic, political and social rights, as well as to make recommendations with a view to improving the promotion and protection of all human rights. The High Commissioner for Human Rights has submitted thematic reports on conscientious objection to military service both to the Commission on Human Rights (in 2004 and 2006) and to the Human Rights Council (in 2007, 2008, 2013, 2017 and 2019). The latest report (A/HRC/41/23, para. 60) stresses that application procedures for obtaining the status of conscientious objector to military service should comply, as a minimum, with the following criteria:
- Availability of information
- Cost-free access to application procedures
- Availability of the application procedure to all persons affected by military service
- Recognition of selective conscientious objection
- Non-discrimination on the basis of the grounds for conscientious objection and between groups
- No time limit on applications
- Independence and impartiality of the decision-making process
- Good faith determination process
- Timeliness of decision-making and status pending determination
- Right to appeal
- Compatibility of alternative service with the reasons for conscientious objection
- Non-punitive conditions and duration of alternative service
- Freedom of expression for conscientious objectors and those supporting them.
On language and the tyranny of authority.
Why Words Matter: Sense and Nonsense in Science
Editor’s note: We are delighted to present a new series by Neil Thomas, Reader Emeritus at the University of Durham, “Why Words Matter: Sense and Nonsense in Science.” This is the first article in the series. Professor Thomas’s recent book is Taking Leave of Darwin: A Longtime Agnostic Discovers the Case for Design (Discovery Institute Press).
My professional background in European languages and linguistics has given me some idea of how easy it is for people in all ages and cultures to create neologisms or ad hoc linguistic formulations for a whole variety of vague ideas and fancies. In fact, it seems all too easy to fashion words to cover any number of purely abstract, at times even chimerical notions, the more convincingly (for the uncritical) if one chooses to append the honorific title of “science” to one’s subjective thought experiments.
One can, for instance, if so inclined, muse with Epicurus, Lucretius, and David Hume that the world “evolved” by chance collocations of atoms and then proceed to dignify one’s notion by dubbing it “the theory of atomism.” Or one can with Stephen Hawking, Lawrence Krauss, and Peter Atkins1 conclude that the universe and all within it arose spontaneously from “natural law.” But in all these cases we have to be willing to ignore the fact that such theories involve what is known grammatically as the “suppression of the agent” — that is, the failure to specify who the agent/legislator might be, the sort of vagueness which we were taught to avoid in school English lessons. A mundane example of this suppression of the agent is the criminal’s perennial excuse, “The gun just went off in my hand, officer, honest.”
A Universe by an “Agentless Act”
As I have pointed out before,2 it is both grammatical solecism and logical impossibility to contend with Peter Atkins that the universe arose through an “agentless act” since this would imply some form of pure automatism or magical instrumentality quite outside common experience or observability. In a similar vein one might, with Charles Darwin, theorize that the development of the biosphere was simply down to that empirically unattested sub-variant of chance he chose to term natural selection.3 Since no empirical evidence exists for any of the above conjectures, they must inevitably remain terms without referents or, to use the mot juste from linguistics, empty signifiers.
Empty Signifiers in Science
Many terms we use in everyday life are, and are widely acknowledged to be, notional rather than factual. The man on the moon and the fabled treasure at the end of the rainbow are trivial examples of what are sometimes termed “airy nothings.” These are factually baseless terms existing “on paper” but without any proper referent in the real world because no such referent exists. Nobody of course is misled by light-hearted façons de parler widely understood to be only imaginary, but real dangers for intellectual clarity arise when a notional term is mistaken for reality.
One famous historical example of such a term was the substance dubbed phlogiston, postulated in the 1660s as a fire-like substance inhering in all combustible bodies. Just over a century later, the French scientist Antoine Lavoisier proved that no such substance exists, and phlogiston is now rightly termed pseudo-science. Or again, in more recent times, there is that entirely apocryphal entity dubbed “ectoplasm.” This was claimed by Victorian spiritualists to denote a substance supposedly exuded from a “medium” (see the photo above) which represented the materialization of a spiritual force once existing in a now deceased human body. Needless to say, the term “ectoplasm” is now treated with unqualified skepticism.
Next, “The Man on the Moon and Martian Canals.”
Notes
- Stephen Hawking and Leonard Mlodinov, The Grand Design: New Answers to the Ultimate Questions of Life (London: Bantam, 2011); P. W. Atkins, Creation Revisited (Oxford and New York: Freeman, 1992); Lawrence Krauss, A Universe from Nothing (London: Simon and Schuster, 2012).
- See Neil Thomas, Taking Leave of Darwin (Seattle: Discovery, 2021), p. 110, where I point out how that expression is a contradiction in terms.
- Darwin in later life, stung that many friends thought he was all but deifying natural selection, came to concede that natural preservation might have been the more accurate term to use — but of course that opens up the huge problem of how organic innovation (the microbes-to-man conjecture) can be defended in reference to a process which simply preserved and had no productive or creative input.
- PS. Works for theology too.
A civilisation beneath our feet and the design debate.
To Regulate Foraging, Harvester Ants Use a (Designed) Feedback Control Algorithm
A recent study in the Journal of the Royal Society Interface reports on “A feedback control principle common to several biological and engineered systems.” The researchers, Jonathan Y. Suen and Saket Navlakha, show how harvester ants (Pogonomyrmex barbatus) use a feedback control algorithm to regulate foraging behavior. As Science Daily notes, the study determined that “Ants and other natural systems use optimization algorithms similar to those used by engineered systems, including the Internet.”
The ants forage for seeds that are widely scattered and usually do not occur in concentrated patches. Foragers usually continue their search until they find a seed. The return rate of foragers corresponds to the availability of seeds: the more food is available, the less time foragers spend searching. When the ants successfully find food, they return to the nest in approximately one third of the search time compared to ants unable to find food. There are several aspects of this behavior that point to intelligent design.
Feedback Control
First, it is based on the general engineering concept of a feedback control system. Such systems use the output of a system to make adjustments to a control mechanism and maintain a desired setting. A common example is the thermostat that controls heating and air-conditioning systems. A biological analogy is homeostasis, which uses negative feedback to maintain constant internal conditions such as body temperature.
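The thermostat analogy can be sketched as a minimal negative-feedback loop. The sketch below is illustrative only; the setpoint, gain, and starting temperature are arbitrary assumptions, not values from the study:

```python
def thermostat_step(temperature, setpoint, gain=0.5):
    """One iteration of a proportional negative-feedback controller.

    The correction opposes the error (current output minus desired
    setting) -- the defining property of negative feedback: it drives
    the system back toward the setpoint.
    """
    error = temperature - setpoint
    return temperature - gain * error

# Start 10 degrees above the setpoint; repeated feedback
# corrections shrink the error toward zero.
temp = 30.0
for _ in range(20):
    temp = thermostat_step(temp, setpoint=20.0)
```

Because each step removes a fixed fraction of the error, the temperature settles on the setpoint instead of running away; that stability is the whole point of negative feedback.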
Mathematical Algorithm
A second aspect of design is the algorithm used to implement the specific control mechanism. Suen and Navlakha describe the system as “multiplicative-increase multiplicative-decrease” (MIMD). The MIMD closed-loop system is a hybrid combination of positive and negative feedback. Receiving positive feedback results in multiplying the response up, while negative feedback results in scaling the response down by a constant factor. The purpose relates to the challenge of optimizing ant foraging. As the paper explains:
If foraging rates exceed the rate at which food becomes available, then many ants would return “empty-handed,” resulting in little or no net gain in colony resources. If foraging rates are lower than the food availability rate, then seeds would be left in the environment uncollected, meaning the seeds would either be lost to other colonies or be removed by wind and rain.
The authors found that positive feedback systems are “used to achieve multiple goals, including efficient allocation of available resources, the fair or competitive splitting of those resources, minimization of response latency, and the ability to detect feedback failures.” However, positive control feedback systems are susceptible to instability (think of the annoying screech when there is feedback into microphones in a sound system). Therefore, a challenge for MIMD systems is to minimize instability.
In this application, when foraging times are short, the feedback is positive, resulting in a faster increase in the number of foragers. When foraging times are longer, the feedback is negative, resulting in a reduction in the number of foragers. A mathematical model of the behavior has confirmed that the control algorithm is largely optimized. (See Prabhakar et al., “The Regulation of Ant Colony Foraging Activity without Spatial Information,” PLOS Computational Biology, 2012.) As I describe in my recent book, Animal Algorithms, the harvester ant algorithm is just one example of behavior algorithms that ants and other social insects employ.
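A toy simulation of the MIMD rule described above might look like the following. The search-time threshold, multipliers, and cap are illustrative assumptions for the sketch, not parameters taken from Suen and Navlakha's paper:

```python
def regulate_foragers(foragers, search_time, threshold=60.0,
                      increase=1.2, decrease=0.8, cap=1000.0):
    """One MIMD update of the outgoing-forager count.

    Short search times signal plentiful food: positive feedback
    multiplies the forager count up. Long search times signal
    scarcity: negative feedback multiplies it down. The cap and
    floor keep the positive-feedback branch from running away --
    the instability problem noted above for such systems.
    """
    if search_time < threshold:
        return min(cap, foragers * increase)  # multiplicative increase
    return max(1.0, foragers * decrease)      # multiplicative decrease

# Plentiful food (short searches) ramps foraging up quickly...
n = 10.0
for _ in range(10):
    n = regulate_foragers(n, search_time=30.0)

# ...while scarce food (long searches) throttles it back down.
for _ in range(10):
    n = regulate_foragers(n, search_time=120.0)
```

Multiplying rather than adding lets the colony react quickly to changing seed availability, which is the same reason multiplicative rules appear in Internet congestion control.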
Suen and Navlakha point out that the mechanism is similar to that employed to regulate traffic on the Internet. In the latter context, there are billions of “agents” continuously transmitting data. Algorithms are employed to control and optimize traffic flow. The challenge for Internet operations is to maximize capacity and allow for relatively equal access for users. Obviously, Internet network control is designed by intelligent engineers. In contrast, the harvester ant behavior is carried out by individuals without any central control mechanism.
Physical Sensors
A third feature indicating design is the physical mechanism used by the ants to determine how long returning foragers have been out. When ants forage for food, molecules called cuticular hydrocarbons change based on the amount of time spent foraging. This is due to the difference in temperature and humidity outside of the nest. As the ants return to the entrance of the nest, there are interactions between the returning and the outgoing ants via their antennae. These interactions enable detection of the hydrocarbons, which provide a mechanism to enable outgoing ants to determine the amount of time that returning ants spent foraging.
These three elements of harvester ant behavior (feedback control, mathematical algorithm, and physical sensors) present a severe challenge for the evolutionary paradigm. From a Darwinian perspective, they must have arisen through a combination of random mutations and natural selection. A much more plausible explanation is that they are evidence of intelligent design.
Christendom's role in the war on logic and commonsense.
Hosea 11:9 KJV: "I will not execute the fierceness of mine anger, I will not return to destroy Ephraim: for I am God, and not man; the Holy One in the midst of thee: and I will not enter into the city."
Having rejected the immutability of the most fundamental binary of all (i.e., that between Creator and creature) with their nonsensical God-man hypothesis, why are so many of Christendom's clerics puzzled that many of their flock find no issue with rejecting the immutability of the far less fundamental gender binary?
You say that Darwinism invokes the free-lunch fallacy, defies mathematical falsification, and furthermore is a clear violation of Occam's razor? Tell us about it, Trinitarian.
If God can become man, why can't the same sovereign power make it possible for any chosen creature to become God?
I mean, if God can be three and yet one with no contradiction, he can be nine and yet three with no contradiction. Don't believe me? Consider:
Revelation 1:4, 5 NASB: "John to the seven churches that are in Asia: Grace and peace to you from Him who is and who was and who is to come, and from the seven spirits who are before His throne and from Jesus Christ..." That makes a total of nine members of the multipersonal Godhead revealed in scripture, but there is no principle in Christendom's philosophy that can be invoked to limit it to this figure. That's the thing with rejecting common sense as a principle: once you are off the reservation, all bets are off.