Definition of woman
<iframe width="853" height="480" src="https://www.youtube.com/embed/K-uBeGbvVrQ" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<iframe width="853" height="480" src="https://www.youtube.com/embed/WlQdLLOmW3o" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
Editor’s note: We have been delighted to present a series by Neil Thomas, Reader Emeritus at the University of Durham, “Why Words Matter: Sense and Nonsense in Science.” This is the seventh and final article in the series. Find the full series so far here. Professor Thomas’s recent book is Taking Leave of Darwin: A Longtime Agnostic Discovers the Case for Design (Discovery Institute Press).
As I have suggested already in this series, there undoubtedly was much at stake in the Miller-Urey experiment — considerably more than was realized at the time by those who listened uncritically to Carl Sagan and others with an interest in deceptively boosting the supposed importance of the experiment. Its implicit promise for many observers, as well as for eager readers of the American and world press, would have been that it would extend Darwin’s timeline back to the pre-organic formation of the first living cell, and so establish the fundamental point of departure for the mechanism of natural selection to go to work on. It would also, of course, have delivered a stunning victory for the materialist position. In the event, though, it succeeded only in dealing a disabling body-blow to materialist notions by giving game, set, and match to the theistic position. This point has not, to my knowledge, been publicly acknowledged.
Most devastatingly for Darwinists, the complete failure of this and more recent experiments to find the origins of primitive life forms in hot springs, hydrothermal vents in the ocean floor, and the like has removed the indispensable foundation for the operation of natural selection. By that I mean that any postulated selective mechanism must obviously have something to select. No raw material means no evolution, no nothing. Without an “abiogenetic moment” Darwin’s entire theory of evolution via natural selection falls flat.
As matters stand, the bare emergence of living cells remains an unsolved mystery, let alone the claimed corollary of that mysterious and unexplained cellular “complexification” (yet another word without any demonstrable referent, it may be noted) said to follow from it and to have occasioned the fabled development from microbes to (wo)man. The most significant finding of Miller and Urey appears to have been a categorical disproof of Darwinian ideas and a presumptive indication of a supra-natural etiology for the cellular system — an inference to theistic creation/evolution which was of course the very obverse of the result they were seeking.
To sum up: there is perhaps limited value in trying to rank in order of gravity the many objections to Darwinism which have been thrown up over the last 160 years. If pressed to do a sort of countdown to number one, however, I would have to say that this particular objection should rank very high up. This is because to attempt to discuss the subject of how the process of selection by nature began to operate whilst not even broaching the question of how nature itself arose in the first place must count as a major evasion.
It is in fact such a glaring logical elision that it can only be viewed (in plain English) as a cop-out. Nothing can come of nothing, goes the old tag, and without knowledge of or at the very least a credible theory concerning the provenance of organic material, the theory of “natural selection” lacks any coherent foundation even for the starting point of its putative operations. Evolutionary biology finds itself in the unenviably anomalous position of being based on an illusory premise without any discernible foundation. Yet the urge to find proof for “natural selection” endures. That, I guess, is a powerful reminder that words, even meaningless words, have the power to create their own virtual realities in our minds, with no relation to any definable referent in the world we inhabit.
Daniel 2:35 KJV: "Then was the iron, the clay, the brass, the silver, and the gold, broken to pieces together, and became like the chaff of the summer threshingfloors; and the wind carried them away, that no place was found for them: and the stone that smote the image became a great mountain, and filled the whole earth."
Daniel 2:44 KJV: "And in the days of these kings shall the God of heaven set up a kingdom, which shall never be destroyed: and the kingdom shall not be left to other people, but it shall break in pieces and consume all these kingdoms, and it shall stand for ever."
Revelation 20:11 KJV: "And I saw a great white throne, and him that sat on it, from whose face the earth and the heaven fled away; and there was found no place for them."
Please note that the rule of JEHOVAH'S kingdom over this earth takes place after the destruction of the present human kingdoms; there is to be no millennium of parallel rule between JEHOVAH'S kingdom and Satan's empire.
Revelation 20:6 KJV: "Blessed and holy is he that hath part in the first resurrection: on such the second death hath no power, but they shall be priests of God and of Christ, and shall reign with him a thousand years."
1 Corinthians 15:23 KJV: "But every man in his own order: Christ the firstfruits; afterward they that are Christ's at his coming (parousia)."
John 6:39 KJV: "And this is the Father's will which hath sent me, that of all which he hath given me I should lose nothing, but should raise it up again at the last day."
Note that the millennium follows the first resurrection, which follows the beginning of Christ's parousia. Paul makes it clear that the present age is not the time for Christians to seek any dominion over the earth or any part thereof: "Now ye are full, now ye are rich, ye have reigned as kings without us: and I would to God ye did reign, that we also might reign with you" (1 Corinthians 4:8 KJV).
Hebrews 2:8 KJV: "Thou hast put all things in subjection under his feet. For in that he put all in subjection under him, he left nothing that is not put under him. But now we see NOT YET all things put under him."
Evidently Paul did not think that Christ's millennial reign had begun.
Editor’s note: We are delighted to present a series by Neil Thomas, Reader Emeritus at the University of Durham, “Why Words Matter: Sense and Nonsense in Science.” This is the sixth article in the series. Find the full series so far here. Professor Thomas’s recent book is Taking Leave of Darwin: A Longtime Agnostic Discovers the Case for Design (Discovery Institute Press).
I have been writing about the neologism “abiogenesis” (see earlier posts here and here). Like “panspermia”1 (first mooted by Svante Arrhenius in 1903), it is but one example of an old concept which periodically undergoes a curious form of (intellectual) cryogenic freezing only to reappear after a decent lapse of time and memory to be presented afresh under a revamped name2 as an idea claimed to be worth a second look.
In essence it seems to draw its strength from pseudo-scientific folk-beliefs that life could somehow be made to emerge from non-life, a conception most notably exploited (and obliquely criticized) in Mary Shelley’s Frankenstein (1818). In another example of the literary/media intelligentsia being ahead of the curve, the refusal of the discredited spontaneous generation to give up the ghost gave that anarchic auteur Mel Brooks ample raw material to ridicule the atavistic misconception in his inspired 1974 comic movie, Young Frankenstein.
For those who did not catch this laugh-out-loud film: the engaging anti-hero, played by the inimitable Gene Wilder, scion of the notorious Baron Frankenstein, at first does everything possible to put distance between himself and his notorious ancestor, whom he memorably dismisses before a class of his students as a “kook,” and thereafter insists on his surname being pronounced Frankensteen. However, the temptation to attempt the impossible “one last time” proves too much either for “Dr. Frankensteen” (whom the film shows reverting to type when he latterly de-Americanizes his surname to Frankenstein) or, it appears, for two American scientists, Stanley Miller and Harold Urey, to resist.
Most of us probably remember Brooks’s oeuvre as being of a somewhat variable standard, but in amongst the pure goofery of Young Frankenstein, as Brooks himself put it in an interview, the film contained an unmistakably satirical thrust because “the doctor (Wilder) is undertaking the quest to defeat death — to challenge God.”3 That is a not inappropriate epitaph for the Miller-Urey experiment as well as its later avatars, it might be thought.
Next, the final post in this series, “Existential Implications of the Miller-Urey Experiment.”
Editor’s note: We are delighted to present a series by Neil Thomas, Reader Emeritus at the University of Durham, “Why Words Matter: Sense and Nonsense in Science.” This is the fifth article in the series. Find the full series so far here. Professor Thomas’s recent book is Taking Leave of Darwin: A Longtime Agnostic Discovers the Case for Design (Discovery Institute Press).
I wrote here yesterday about the Miller-Urey experiment at the University of Chicago in 1953 as an effort to investigate the possibility of spontaneous generation. To be fair to both distinguished collaborators, Stanley Miller and Harold Urey, this was no desperate shot in the dark to bolster materialist thinking. They had clearly done all the requisite preparation for their task. Miller and Urey (a later recipient of the Nobel Prize) theorized that if the conditions prevailing on primeval Earth were reproduced in laboratory conditions, such conditions might prove conducive to a chemical synthesis of living material.
To make a long, more complex story short, they caused an electric spark to pass through a mixture of methane, hydrogen, ammonia, and water to simulate the kind of energy which might have come from thunderstorms on the ancient Earth. The resulting liquid turned out to contain amino acids which, though not living molecules themselves, are the building blocks of proteins, essential to the construction of life.1 However, the complete chemical pathway hoped for by many was not to materialize. In fact, the unlikelihood of such a materialization was underscored in the very same year that the Miller-Urey experiment took place when Francis Crick, James Watson, and Rosalind Franklin succeeded in identifying the famous double helix of DNA. Their discovery revealed, amongst other things, that even if amino acids could somehow be induced to form proteins, this would still not be enough to produce life.
Despite over-optimistic press hype in the 1950s, which came to include inter alia fulsome eulogizing by Carl Sagan, it has in more recent decades been all but conceded that life is unlikely to form at random from the so-called “prebiotic” substrate on which scientists had previously pinned so much hope. To be sure, there are some biologists, such as Richard Dawkins, who still pin their faith in ideas which have resulted only in blankly negative experimental results.2 Some notions, it appears, will never completely die for some, despite having been put to the scientific sword on numerous occasions — as long of course as they hold out the promise of a strictly materialist explanation of reality.
Next, “Frankenstein and His Offspring.”
Editor’s note: We are delighted to present a series by Neil Thomas, Reader Emeritus at the University of Durham, “Why Words Matter: Sense and Nonsense in Science.” This is the fourth article in the series. Find the full series so far here. Professor Thomas’s recent book is Taking Leave of Darwin: A Longtime Agnostic Discovers the Case for Design (Discovery Institute Press).
Words are cheap and, in science as in other contexts, they can be used to cover up and camouflage a multitude of areas of ignorance. In this series so far, I have dealt summarily with several such terms, since I anticipated that they are already familiar to readers, and as I did not wish to belabor my fundamental point.
I would, however, like to discuss in somewhat more detail a term which is well enough known but whose manifold implications may not even now, it appears to me, have been appreciated to their full extent. This is the historically recent neologism “abiogenesis” — meaning spontaneous generation of life from a combination of unknown chemical substances held to provide a quasi-magical bridge from chemistry to biology. This term, when subjected to strict logical parsing, I will argue, undermines the very notion of what is commonly understood by Darwinian evolution since it represents a purely notional, imaginary term which might also (in my judgment) be usefully relegated to the category of “just words.”
The greatest problem for the acceptance of Darwinism as a self-standing and logically coherent theory is the unsolved mystery of the absolute origin of life on earth, a subject which Charles Darwin tried to bat away as, if not a total irrelevance, then as something beyond his competence to pronounce on. Even today Darwinian supporters will downplay the subject of the origins of life as a matter extraneous to the subject of natural selection. It is not. It is absolutely foundational to the integrity of natural selection as a conceptually satisfactory theory, and evolutionary science cannot logically even approach the starting blocks of its conjectures without cracking this unsolved problem, as the late 19th-century German scientist Ludwig Buechner pointed out.1
Darwin famously put forward in a letter the speculation of life having been spontaneously generated in a small warm pool, but he did not follow up on the hunch experimentally. This challenge was left to Stanley Miller and Harold Urey, two much later intellectual legatees in the middle of the 20th century who, in defiance of previous expert opinion, staged an unusual experiment. The remote hinterland of this experiment was as follows. In the 17th century, medical pioneer Sir William Harvey and Italian scientist Francesco Redi both proved the untenability of spontaneous generation: only life can produce life, a finding later to be upheld by French scientist Louis Pasteur in the latter half of the 19th century; but the two Americans proceeded on regardless.
There is no getting away from the fact that the three-fold confirmation of the impossibility of spontaneous generation by respected scientists working independently of each other in different centuries brought with it far-reaching theological implications. For if natural processes could not account for life’s origins, then the only alternative would be a superior force standing outside and above nature but with the power to initiate nature’s processes. The three distinguished scientists were in effect and by implication ruling out any theory for the origin of life bar that of supranatural creation. So it was hardly surprising that there emerged in later time a reaction against their “triple lock” on the issue.
In what was shaping up to become the largely post-Christian 20th century in Europe, the untenability of the abiogenesis postulate was resisted by many in the scientific world on purely ideological grounds. The accelerating secularizing trends of the early 20th century meant that the outdated and disproven notion of spontaneous generation was nevertheless kept alive on a form of intellectual life-support despite the abundant evidence pointing to its unviability.
For presently both the Russian biologist Alexander Oparin and the British scientist John Haldane stepped forward to revive the idea in the 1920s. The formal experiment to investigate the possibility of spontaneous generation had then to wait a few decades more before the bespoke procedure to test its viability in laboratory conditions was announced by the distinguished team of Miller and Urey of the University of Chicago in 1953. Clearly the unspoken hope behind this now (in)famous experiment was the possibility that Pasteur, Harvey, and Redi might have been wrong to impose their “triple lock” and that mid 20th-century advances might discover a solution where predecessors had failed. If ever there was an attempt to impose a social/ideological construction of reality on science in line with materialist thinking, this was it.
Next, “Imagining ‘Abiogenesis’: Crick, Watson, and Franklin.”
SETI is on a roll again. The Search for Extra-Terrestrial Intelligence oscillates in popularity although it has rumbled on since the 1970s like a carrier tone, waiting for a spike to stand out above the cosmic noise. Instrument searches are largely automated these days. Once in a while somebody raises the subject of SETI above the hum of scientific news. The principal organization behind SETI has been busily humming in the background but now has a message to broadcast.
The SETI Institute announced that the Very Large Array (VLA) in New Mexico has been outfitted to stream data for “technosignature research.” Technosignatures are the new buzzword in SETI. Unlike the old attempts to detect meaningful messages like How to Serve Man, the search for technosignatures involves looking for “signs of technology not caused by natural phenomena.” Hold that thought for later.
COSMIC SETI (the Commensal Open-Source Multimode Interferometer Cluster Search for Extraterrestrial Intelligence) took a big step towards using the National Science Foundation’s Karl G. Jansky Very Large Array (VLA) for 24/7 SETI observations. Fiber optic amplifiers and splitters are now installed for all 27 VLA antennas, giving COSMIC access to a complete and independent copy of the data streams from the entire VLA. In addition, the COSMIC system has used these links to successfully acquire VLA data, and the primary focus now is on developing the high-performance GPU (Graphics Processing Unit) code for analyzing data for the possible presence of technosignatures. [Emphasis added.]
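COSMIC's production pipeline is far more elaborate than anything shown here (it must, for instance, track Doppler drift across billions of channels), but the core idea behind narrowband technosignature searches is simple: a technological carrier concentrates its power in one spectral channel, while natural noise spreads across all of them. The following is a minimal sketch in Python/NumPy; the function name, data, and threshold are all illustrative, not COSMIC's actual code.

```python
import numpy as np

def narrowband_snr(voltages, n_channels=4096):
    """Average power spectra over time and report per-channel SNR.

    voltages: 1-D array of (simulated) receiver samples.
    Returns (snr, hits), where hits are channel indices exceeding 5 sigma.
    """
    # Split the stream into blocks and average their power spectra;
    # a narrowband carrier piles up in one channel while noise averages down.
    n_blocks = len(voltages) // n_channels
    blocks = voltages[: n_blocks * n_channels].reshape(n_blocks, n_channels)
    spectra = np.abs(np.fft.rfft(blocks, axis=1)) ** 2
    mean_spec = spectra.mean(axis=0)
    # Robust noise estimate so the signal itself doesn't bias the baseline.
    noise = np.median(mean_spec)
    scatter = np.median(np.abs(mean_spec - noise)) * 1.4826
    snr = (mean_spec - noise) / scatter
    return snr, np.flatnonzero(snr > 5.0)

# Toy demonstration: Gaussian noise plus a weak sinusoidal "carrier"
# at 0.1 cycles/sample, which lands near spectral channel 410.
rng = np.random.default_rng(0)
t = np.arange(1 << 18)
samples = rng.normal(size=t.size) + 0.2 * np.sin(2 * np.pi * 0.1 * t)
snr, hits = narrowband_snr(samples)
```

Real searches add drift-rate corrections, interference rejection across antennas, and far finer channelization, but the underlying logic of flagging what sticks out of the averaged noise floor is the same.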
Use of government funding for SETI has been frowned on ever since Senator William Proxmire gave it his infamous “Golden Fleece Award” in 1979, and got it cancelled altogether three years later. The SETI Institute learned from that shaming incident to conceal its aims in more recondite jargon, and “technosignatures” fills the bill nicely. So how did they succeed in getting help from the National Radio Astronomy Observatory (NRAO) to use a government facility? Basically, it’s just a data sharing arrangement. COSMIC will not interfere with the VLA’s ongoing work but will tap into the data stream. With access to 27 dishes, each 25 meters across, linked by interferometry, this constitutes a data bonanza for the SETI Institute — the next best thing to Project Cyclops that riled Proxmire with its proposed 1,000 dishes costing half a billion dollars back at a time when a billion dollars was real money.
Another method that the SETI Institute is employing is looking for laser pulses over wide patches of the night sky. Last year, the institute announced progress in installing a second LaserSETI site at the Haleakala Observatory in Hawaii with the cooperation of the University of Hawaii. The first one is at Robert Ferguson Observatory in Sonoma, California. No tax dollars are being spent on these initiatives.
Initial funding for LaserSETI was raised through a crowdfunding campaign in 2017, with additional financing provided through private donations. The plan calls for ten more instruments deployed in Puerto Rico, the Canary Islands, and Chile. When this phase is complete, the system will be able to monitor the nighttime sky in roughly half of the western hemisphere.
This brings up another reason for growing SETI news: technological advancements are making possible unprecedented searches. “Each LaserSETI device consists of two identical cameras rotated 90 degrees to one another along the viewing axis,” they say. “They work by using a transmission grating to split light sources up into spectra, then read the camera out more than a thousand times per second.” This optical form of search differs from the traditional radio wave searches of the past, and is once again a hunt for technosignatures.
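As a rough illustration of why that kilohertz readout rate matters: a laser pulse lasting a fraction of a millisecond will appear in a single frame and then vanish, while stars and sky glow persist across frames. The sketch below uses hypothetical data and thresholds, not LaserSETI's actual algorithm, to show how one-frame flashes could be flagged.

```python
import numpy as np

def flag_transients(frames, k_sigma=6.0):
    """Return indices of frames containing a pixel that spikes k_sigma
    above that pixel's typical level across all frames (toy model)."""
    baseline = np.median(frames, axis=0)       # per-pixel quiescent level
    scatter = frames.std(axis=0) + 1e-9        # per-pixel noise scale
    deviation = (frames - baseline) / scatter  # z-score per frame, per pixel
    return np.flatnonzero(deviation.max(axis=1) > k_sigma)

# Simulated detector: ~2 seconds of readouts at ~1 kHz, 512 pixels,
# steady sky background, plus one injected sub-millisecond flash.
rng = np.random.default_rng(1)
frames = rng.normal(100.0, 3.0, size=(2000, 512))
frames[777, 300] += 80.0
hits = flag_transients(frames)
```

The real instrument can presumably cross-check candidates between its two co-aligned cameras, and between sites, to reject cosmic-ray hits and local glints; this sketch leaves all of that out.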
Writing for Universe Today, Evan Gough connected the search for biosignatures, such as microbes being sought by Mars Rovers, with technosignatures being sought by the SETI Institute.
The search for biosignatures is gaining momentum. If we can find atmospheric indications of life at another planet or moon — things like methane and nitrous oxide and a host of other chemical compounds — then we can wonder if living things produced them. But the search for technosignatures raises the level of the game. Only a technological civilization can produce technosignatures.
NASA has long promoted the search for biosignatures. Its Astrobiology programs that began with the Mars meteorite in 1996 have continued despite later conclusions that the structures in the rock were abiotic. In the intervening years, astrobiology projects have been deemed taxpayer worthy, but SETI projects have not. That may be changing. Marina Koren wrote for The Atlantic in 2018 that the search for technosignatures has gained a little support in Congress, boosted by the discovery of thousands of exoplanets from the Kepler Mission. SETI Institute’s senior astronomer Seth Shostak has become friends with one congressman.
“Kepler showed us that planets are as common as cheap motels, so that was a step along the road to finding other life because at least there’s the real estate,” says Shostak. “That doesn’t mean there’s any life there, but at least there are planets.”
Gough mentions the Decadal Survey on Astronomy, named Astro2020, that was released in 2021 from the National Academies of Sciences (NAS). It contained initiatives that could overlap astrobiology with SETI by extending searches for biosignatures to searches for technosignatures. Worded that way, they don’t seem that far apart. One white paper specifically linked the two:
The Astro2020 report outlines numerous recommendations that could significantly advance technosignature science. Technosignatures refer to any observable manifestations of extraterrestrial technology, and the search for technosignatures is part of the continuum of the astrobiological search for biosignatures (National Academies of Sciences 2019a,b). The search for technosignatures is directly relevant to the “World and Suns in Context” theme and “Pathways to Habitable Worlds” program in the Astro2020 report. The relevance of technosignatures was explicitly mentioned in “E1 Report of the Panel on Exoplanets, Astrobiology, and the Solar System,” which stated that “life’s global impacts on a planet’s atmosphere, surface, and temporal behavior may therefore manifest as potentially detectable exoplanet biosignatures, or technosignatures” and that potential technosignatures, much like biosignatures, must be carefully analyzed to mitigate false positives. The connection of technosignatures to this high-level theme and program can be emphasized, as the report makes clear the purpose is to address the question “Are we alone?” This question is also presented in the Explore Science 2020-2024 plan1 as a driver of NASA’s mission.
The most likely technosignature that could be seen at stellar distances, unfortunately for the SETI enthusiasts, would have to be on the scale of a Dyson Sphere: a theoretical shell, imagined by Freeman Dyson, that a desperate civilization might build to collect all the energy of its dying star and so preserve itself from a heat death (see the graphic in Gough’s article). The point is that such a “massive engineering structure” would require the abilities of intelligent beings with foresight and planning much grander than ours.
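Some back-of-envelope arithmetic (using standard physical constants, not figures from Gough's article) shows why such a structure would be conspicuous: a complete shell at Earth's orbital distance would intercept its star's entire output, roughly 4 × 10^26 watts for a Sun-like star, about twenty trillion times humanity's present energy consumption.

```python
import math

# Scale of a hypothetical Dyson sphere at 1 AU (illustrative only).
L_STAR = 3.828e26   # luminosity of a Sun-like star, watts
AU = 1.496e11       # shell radius: Earth's orbital distance, meters

shell_area = 4 * math.pi * AU ** 2    # total collecting area, m^2
flux_at_shell = L_STAR / shell_area   # W/m^2 (recovers the ~1361 W/m^2 solar constant)
human_usage = 1.8e13                  # rough present-day consumption, watts
ratio = L_STAR / human_usage          # how far such a civilization outscales us
```

Waste heat from a shell that size would itself glow in the infrared, which is why Dyson-sphere searches look for stars with anomalous infrared excess rather than radio messages.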
Hunting for technosignatures is less satisfying than “Contact” — it lacks the relationship factor. It’s like eavesdropping instead of conversing. We can only wonder what kind of beings would make such things. Maybe the signatures are like elaborate bird nests, interesting but instinctive. Worse, maybe the signatures have a natural explanation we don’t yet understand.
A unique feature of intelligent life, SETI enthusiasts often assume, is the desire to communicate. We’ll explore that angle of SETI next time.
Recently an email correspondent asked me about a clip from Neil deGrasse Tyson’s reboot of Cosmos where he claims that eyes could have evolved via unguided mutations. Even though the series is now eight years old, it’s still promoting implausible stories about eye evolution. Clearly, despite having been addressed by proponents of intelligent design many times over, this issue is not going away. Let’s revisit the question, as Tyson and others have handled it.
In the clip, Tyson claims that the eye is easily evolvable by natural selection and it all started when some “microscopic copying error” created a light-sensitive protein for a lucky bacterium. But there’s a problem: Creating a light-sensitive protein wouldn’t help the bacterium see anything. Why? Because seeing requires circuitry or some kind of a visual processing pathway to interpret the signal and trigger the appropriate response. That’s the problem with evolving vision — you can’t just have the photon collectors. You need the photon collectors, the visual processing system, and the response-triggering system. At the very least three systems are required for vision to give you a selective advantage. It would be prohibitively unlikely for such a set of complex coordinated systems to evolve by stepwise mutations and natural selection.
Tyson calls the human eye a “masterpiece” of complexity, and claims it “poses no challenge to evolution by natural selection.” But do we really know this is true?
Darwinian evolution tends to work fine when one small change or mutation provides a selective advantage, or as Darwin put it, when an organ can evolve via “numerous, successive, slight modifications.” If a structure cannot evolve via “numerous, successive, slight modifications,” Darwin said, his theory “would absolutely break down.” Writing in The New Republic some years ago, evolutionist Jerry Coyne essentially concurred on that: “It is indeed true that natural selection cannot build any feature in which intermediate steps do not confer a net benefit on the organism.” So are there structures that would require multiple steps to provide an advantage, where intermediate steps might not confer a net benefit on the organism? If you listen to Tyson’s argument carefully, I think he let slip that there are.
Tyson says that “a microscopic copying error” gave a protein the ability to be sensitive to light. He doesn’t explain how that happened. Indeed, biologist Sean B. Carroll cautions us to “not be fooled” by the “simple construction and appearance” of supposedly simple light-sensitive eyes, since they “are built with and use many of the ingredients used in fancier eyes.” Tyson doesn’t worry about explaining how any of those complex ingredients arose at the biochemical level. What’s more interesting is what Tyson says next: “Another mutation caused it [a bacterium with the light-sensitive protein] to flee intense light.”
It’s nice to have a light-sensitive protein, but unless the sensitivity to light is linked to some behavioral response, then how would the sensitivity provide any advantage? Only once a behavioral response also evolved — say, to turn towards or away from the light — can the light-sensitive protein provide an advantage. So if a light-sensitive protein evolved, why did it persist until the behavioral response evolved as well? There’s no good answer to that question, because vision is fundamentally a multi-component, and thus a multi-mutation, feature. Multiple components — both visual apparatus and the encoded behavioral response — are necessary for vision to provide an advantage. It’s likely that these components would require many mutations. Thus, we have a trait where an intermediate stage — say, a light-sensitive protein all by itself — would not confer a net advantage on the organism. This is where Darwinian evolution tends to get stuck.
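The scaling behind this "multi-mutation" objection can be made concrete with a toy calculation. The numbers below are illustrative placeholders, not measured biological rates, and the independence assumption is itself contested; the point is only that, if no intermediate is selectable, the expected waiting time grows geometrically with the number of required mutations.

```python
def waiting_generations(p, k, population):
    """Toy expectation: generations until some individual carries all k
    required mutations, assuming each arises independently with
    per-individual probability p and partial combinations give no benefit."""
    per_individual = p ** k              # chance one individual has all k
    per_generation = per_individual * population
    return 1.0 / per_generation

# Illustrative placeholder numbers, not measured biological rates:
p, population = 1e-6, 1e9
one_step = waiting_generations(p, 1, population)    # a fraction of a generation
three_step = waiting_generations(p, 3, population)  # about a billion generations
```

On these toy numbers a one-mutation trait appears almost immediately, while a three-mutation trait with unselectable intermediates takes on the order of a billion generations. That is the shape of the argument, whatever one makes of its premises.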
Tyson seemingly assumes those subsystems were in place, and claims that a multicell animal might then evolve a more complex eye in a stepwise fashion. He says the first step is that a “dimple” arises which provides a “tremendous advantage,” and that dimple then “deepens” to improve visual acuity. A pupil-type structure then evolves to sharpen the focus, but this results in less light being let in. Next, a lens evolves to provide “both brightness and sharp focus.” This is the standard account of eye evolution that I and others have critiqued before. Francis Collins and Karl Giberson, for example, have made a similar set of arguments.
Such accounts invoke the abrupt appearance of key features of advanced eyes including the lens, cornea, and iris. The presence of each of these features — fully formed and intact — would undoubtedly increase visual acuity. But where did the parts suddenly come from in the first place? As Scott Gilbert of Swarthmore College put it, such evolutionary accounts are “good at modelling the survival of the fittest, but not the arrival of the fittest.”
As a further example of these hyper-simplistic accounts of eye evolution, Francisco Ayala in his book Darwin’s Gift to Science and Religion asserts, “Further steps — the deposition of pigment around the spot, configuration of cells into a cuplike shape, thickening of the epidermis leading to the development of a lens, development of muscles to move the eyes and nerves to transmit optical signals to the brain — gradually led to the highly developed eyes of vertebrates and cephalopods (octopuses and squids) and to the compound eyes of insects.” (p. 146)
Ayala’s explanation is vague and shows no appreciation for the biochemical complexity of these visual organs. Thus, regarding the configuration of cells into a cuplike shape, biologist Michael Behe asks (in responding to Richard Dawkins on the same point):
And where did the “little cup” come from? A ball of cells–from which the cup must be made–will tend to be rounded unless held in the correct shape by molecular supports. In fact, there are dozens of complex proteins involved in maintaining cell shape, and dozens more that control extracellular structure; in their absence, cells take on the shape of so many soap bubbles. Do these structures represent single-step mutations? Dawkins did not tell us how the apparently simple “cup” shape came to be.
Michael J. Behe, Darwin’s Black Box: The Biochemical Challenge to Evolution, p. 15 (Free Press, 1996)
Likewise, mathematician and philosopher David Berlinski has assessed the alleged “intermediates” for the evolution of the eye. He observes that the transmission of data signals from the eye to a central nervous system for data processing, which can then output some behavioral response, comprises an integrated system that is not amenable to stepwise evolution:
Light strikes the eye in the form of photons, but the optic nerve conveys electrical impulses to the brain. Acting as a sophisticated transducer, the eye must mediate between two different physical signals. The retinal cells that figure in Dawkins’ account are connected to horizontal cells; these shuttle information laterally between photoreceptors in order to smooth the visual signal. Amacrine cells act to filter the signal. Bipolar cells convey visual information further to ganglion cells, which in turn conduct information to the optic nerve. The system gives every indication of being tightly integrated, its parts mutually dependent.
The very problem that Darwin’s theory was designed to evade now reappears. Like vibrations passing through a spider’s web, changes to any part of the eye, if they are to improve vision, must bring about changes throughout the optical system. Without a correlative increase in the size and complexity of the optic nerve, an increase in the number of photoreceptive membranes can have no effect. A change in the optic nerve must in turn induce corresponding neurological changes in the brain. If these changes come about simultaneously, it makes no sense to talk of a gradual ascent of Mount Improbable. If they do not come about simultaneously, it is not clear why they should come about at all.
The same problem reappears at the level of biochemistry. Dawkins has framed his discussion in terms of gross anatomy. Each anatomical change that he describes requires a number of coordinate biochemical steps. “[T]he anatomical steps and structures that Darwin thought were so simple,” the biochemist Mike Behe remarks in a provocative new book (Darwin’s Black Box), “actually involve staggeringly complicated biochemical processes.” A number of separate biochemical events are required simply to begin the process of curving a layer of proteins to form a lens. What initiates the sequence? How is it coordinated? And how controlled? On these absolutely fundamental matters, Dawkins has nothing whatsoever to say.
David Berlinski, “Keeping an Eye on Evolution: Richard Dawkins, a Relentless Darwinian Spear Carrier, Trips Over Mount Improbable,” Globe & Mail (November 2, 1996)
In sum, standard accounts of eye evolution fail to explain the origin of key eye features: the molecular supports that hold cells in a cuplike shape, the transduction of photons into electrical impulses, the integrated retinal circuitry of horizontal, amacrine, bipolar, and ganglion cells, and the coordinated biochemical steps underlying each anatomical change.
At most, accounts of the evolution of the eye provide a stepwise explanation of "fine gradations" for the origin of a single feature: the increasing concavity of the eye's shape. That does not explain the origin of the eye. But from Neil deGrasse Tyson and the others, you'd never know it.
When college is held up as the one true path to success, parents—especially highly educated ones—might worry when their children opt for vocational school instead.
Toren Reesman knew from a young age that he and his brothers were expected to attend college and obtain a high-level degree. As a radiologist—a profession that requires 12 years of schooling—his father made clear what he wanted for his boys: “Keep your grades up, get into a good college, get a good degree,” as Reesman recalls it. Of the four Reesman children, one brother has followed this path so far, going to school for dentistry. Reesman attempted to meet this expectation, as well. He enrolled in college after graduating from high school. With his good grades, he got into West Virginia University—but he began his freshman year with dread. He had spent his summers in high school working for his pastor at a custom-cabinetry company. He looked forward each year to honing his woodworking skills, and took joy in creating beautiful things. School did not excite him in the same way. After his first year of college, he decided not to return.
He says pursuing custom woodworking as his lifelong trade was disappointing to his father, but Reesman stood firm in his decision, and became a cabinetmaker. He says his father is now proud and supportive, but breaking with family expectations in order to pursue his passion was a difficult choice for Reesman—one that many young people are facing in the changing job market.
Traditional-college enrollment in the United States has risen this century, from 13.2 million students in 2000 to 16.9 million in 2016, an increase of 28 percent, according to the National Center for Education Statistics. Meanwhile, trade-school enrollment has also risen, from 9.6 million students in 1999 to 16 million in 2014, a resurgence that followed a decline in vocational education in the 1980s and '90s, a dip that created a shortage of skilled workers and tradespeople.
Many jobs now require specialized training in technology that bachelor’s programs are usually too broad to address, leading to more “last mile”–type vocational-education programs after the completion of a degree. Programs such as Galvanize aim to teach specific software and coding skills; Always Hired offers a “tech-sales bootcamp” to graduates. The manufacturing, infrastructure, and transportation fields are all expected to grow in the coming years—and many of those jobs likely won’t require a four-year degree.
This shift in the job and education markets can leave parents feeling unsure about the career path their children choose to pursue. Lack of knowledge and misconceptions about the trades can lead parents to steer their kids away from these programs, when vocational training might be a surer path to a stable job.
Raised in a family of truck drivers, farmers, and office workers, Erin Funk was the first in her family to attend college, obtaining a master’s in education and going on to teach second grade for two decades. Her husband, Caleb, is a first-generation college graduate in his family, as well. He first went to trade school, graduating in 1997, and later decided to strengthen his résumé following the Great Recession. He began his bachelor’s degree in 2009, finishing in 2016. The Funks now live in Toledo, Ohio, and have a 16-year-old son, a senior in high school, who is already enrolled in vocational school for the 2019–20 school year. The idea that their son might not attend a traditional college worried Erin and Caleb at first. “Vocational schools where we grew up seemed to be reserved for people who weren’t making it in ‘real’ school, so we weren’t completely sure how we felt about our son attending one,” Erin says. Both Erin and Caleb worked hard to be the first in their families to obtain college degrees, and wanted the same opportunity for their three children. After touring the video-production-design program at Penta Career Center, though, they could see the draw for their son. Despite their initial misgivings, after learning more about the program and seeing how excited their son was about it, they’ve thrown their support behind his decision.
But not everyone in the Funks’ lives understands this decision. Erin says she ran into a friend recently, and “as we were catching up, I mentioned that my eldest had decided to go to the vocational-technical school in our city. Her first reaction was, ‘Oh, is he having problems at school?’ I am finding as I talk about this that there is an attitude out there that the only reason you would go to a vo-tech is if there’s some kind of problem at a traditional school.” The Funks’ son has a 3.95 GPA. He was simply more interested in the program at Penta Career Center. “He just doesn’t care what anyone thinks,” his mom says.
The Funks are not alone in their initial gut reaction to the idea of vocational and technical education. Negative attitudes and misconceptions persist even in the face of the positive statistical outlook for the job market for these middle-skill careers. “It is considered a second choice, second-class. We really need to change how people see vocational and technical education,” Patricia Hsieh, the president of a community college in the San Diego area, said in a speech at the 2017 conference for the American Association of Community Colleges. European nations prioritize vocational training for many students, with half of secondary students (the equivalent of U.S. high-school students) participating in vocational programs. In the United States, since the passage of the 1944 GI Bill, college has been pushed over vocational education. This college-for-all narrative has been emphasized for decades as the pathway to success and stability; parents might worry about the future of their children who choose a different path.
Read more: The world might be better off without college for everyone
Dennis Deslippe and Alison Kibler are both college professors at Franklin and Marshall College in Lancaster, Pennsylvania, so it was a mental shift for them when, after high school, their son John chose to attend the masonry program at Thaddeus Stevens College of Technology, a two-year accredited technical school. John was always interested in working with his hands, Deslippe and Kibler say—building, creating, and repairing, all things that his academic parents are not good at, by their own confession.
Deslippe explains, “One gap between us as professor parents and John’s experience is that we do not really understand how Thaddeus Stevens works in the same way that we understand a liberal-arts college or university. We don’t have much advice to give. Initially, we needed some clarity about what masonry exactly was. Does it include pouring concrete, for example?” (Since their son is studying brick masonry, his training will likely not include concrete work.) Deslippe’s grandfather was a painter, and Kibler’s grandfather was a woodworker, but three of their four parents were college grads. “It’s been a long-standing idea that the next generation goes to college and moves out of ‘working with your hands,’” Kibler muses. “Perhaps we are in an era where that formula of rising out of trades through education doesn’t make sense?”
College doesn’t make sense is the message that many trade schools and apprenticeship programs are using to entice new students. What specifically doesn’t make sense, they claim, is the amount of debt many young Americans take on to chase those coveted bachelor’s degrees. There is $1.5 trillion in student debt outstanding as of 2018, according to the Federal Reserve. Four in 10 adults under the age of 30 have student-loan debt, according to the Pew Research Center. Master’s and doctorate degrees often lead to even more debt. Earning potential does not always offset the cost of these loans, and only two-thirds of those with degrees think that the debt was worth it for the education they received. Vocational and technical education tends to cost significantly less than a traditional four-year degree.
This stability is appealing to Marsha Landis, who lives with her cabinetmaker husband and two children outside of Jackson Hole, Wyoming. Landis has a four-year degree from a liberal-arts college, and when she met her husband while living in Washington, D.C., she found his profession to be a refreshing change from the typical men she met in the Capitol Hill dating scene. “He could work with his hands, create,” she says. “He wasn’t pretentious and wrapped up in the idea of degrees. And he came to the marriage with no debt and a marketable skill, something that has benefited our family in huge ways.” She says that she has seen debt sink many of their friends, and that she would support their children if they wanted to pursue a trade like their father.
In the United States, college has been painted as the pathway to success for generations, and it can be, for many. Many people who graduate from college make more money than those who do not. But the rigidity of this narrative could lead parents and students alike to be shortsighted as they plan for their future careers. Yes, many college graduates make more money—but less than half of students finish the degrees they start. This number drops as low as 10 percent for students in poverty. The ever sought-after college-acceptance letter isn’t a guarantee of a stable future if students aren’t given the support they need to complete a degree. If students are exposed to the possibility of vocational training early on, that might help remove some of the stigma, and help students and parents alike see a variety of paths to a successful future.
The right to conscientious objection to military service is based on article 18 of the International Covenant on Civil and Political Rights, which guarantees the right to freedom of thought, conscience and religion or belief. While the Covenant does not explicitly refer to a right to conscientious objection, in its general comment No. 22 (1993) the Human Rights Committee stated that such a right could be derived from article 18, inasmuch as the obligation to use lethal force might seriously conflict with the freedom of conscience and the right to manifest one’s religion or belief.
The Human Rights Council, and previously the Commission on Human Rights, have also recognized the right of everyone to have conscientious objection to military service as a legitimate exercise of the right to freedom of thought, conscience and religion, as laid down in article 18 of the Universal Declaration of Human Rights and article 18 of the International Covenant on Civil and Political Rights (see their resolutions which were adopted without a vote in 1989, 1991, 1993, 1995, 1998, 2000, 2002, 2004, 2012, 2013 and 2017).
OHCHR has a mandate to promote and protect the effective enjoyment by all of all civil, cultural, economic, political and social rights, as well as to make recommendations with a view to improving the promotion and protection of all human rights. The High Commissioner for Human Rights has submitted thematic reports on conscientious objection to military service both to the Commission on Human Rights (in 2004 and 2006) and to the Human Rights Council (in 2007, 2008, 2013, 2017 and 2019). The latest report (A/HRC/41/23, para. 60) stresses that application procedures for obtaining the status of conscientious objector to military service should, at a minimum, comply with the criteria it enumerates.
Editor’s note: We are delighted to present a new series by Neil Thomas, Reader Emeritus at the University of Durham, “Why Words Matter: Sense and Nonsense in Science.” This is the first article in the series. Professor Thomas’s recent book is Taking Leave of Darwin: A Longtime Agnostic Discovers the Case for Design (Discovery Institute Press).
My professional background in European languages and linguistics has given me some idea of how easy it is for people in all ages and cultures to create neologisms or ad hoc linguistic formulations for a whole variety of vague ideas and fancies. In fact, it seems all too easy to fashion words to cover any number of purely abstract, at times even chimerical notions, the more convincingly (for the uncritical) if one chooses to append the honorific title of “science” to one’s subjective thought experiments.
One can for instance, if so inclined, muse with Epicurus, Lucretius, and David Hume that the world “evolved” by chance collocations of atoms and then proceed to dignify one’s notion by dubbing it “the theory of atomism.” Or one can with Stephen Hawking, Lawrence Krauss, and Peter Atkins1 conclude that the universe and all within it arose spontaneously from “natural law.” But in all these cases we have to be willing to ignore the fact that such theories involve what is known grammatically as the “suppression of the agent.” This means the failure to specify who the agent/legislator might be — this being the sort of vagueness which we were taught to avoid in school English lessons. A mundane example of this suppression of the agent is the criminal’s perennial excuse, “The gun just went off in my hand, officer, honest.”
As I have pointed out before,2 it is both grammatical solecism and logical impossibility to contend with Peter Atkins that the universe arose through an “agentless act” since this would imply some form of pure automatism or magical instrumentality quite outside common experience or observability. In a similar vein one might, with Charles Darwin, theorize that the development of the biosphere was simply down to that empirically unattested sub-variant of chance he chose to term natural selection.3 Since no empirical evidence exists for any of the above conjectures, they must inevitably remain terms without referents or, to use the mot juste from linguistics, empty signifiers.
Many terms we use in everyday life are, and are widely acknowledged to be, notional rather than factual. The man on the moon and the fabled treasure at the end of the rainbow are trivial examples of what are sometimes termed “airy nothings.” These are factually baseless terms existing “on paper” but without any proper referent in the real world because no such referent exists. Nobody of course is misled by light-hearted façons de parler widely understood to be only imaginary, but real dangers for intellectual clarity arise when a notional term is mistaken for reality.
One famous historical example of such a term was the substance dubbed phlogiston, postulated in the 1660s as a fire-like element inhering in all combustible bodies; just over a century later the French scientist Antoine Lavoisier proved that no such substance exists, relegating phlogiston to what we would now rightly term pseudo-science. Or again, in more recent times, there is that entirely apocryphal entity dubbed "ectoplasm," a substance claimed by Victorian spiritualists to be exuded by a "medium" (see the photo above), representing the materialization of a spiritual force that once existed in a now deceased human body. Needless to say, the term "ectoplasm" is now treated with unqualified skepticism.
Next, “The Man on the Moon and Martian Canals.”
A recent study in the Journal of the Royal Society Interface reports on “A feedback control principle common to several biological and engineered systems.” The researchers, Jonathan Y. Suen and Saket Navlakha, show how harvester ants (Pogonomyrmex barbatus) use a feedback control algorithm to regulate foraging behavior. As Science Daily notes, the study determined that, “Ants and other natural systems use optimization algorithms similar to those used by engineered systems, including the Internet.”
The ants forage for seeds that are widely scattered and usually do not occur in concentrated patches. Foragers usually continue their search until they find a seed. The return rate of foragers corresponds to the availability of seeds: the more food is available, the less time foragers spend searching. When the ants successfully find food, they return to the nest in approximately one third of the search time compared to ants unable to find food. There are several aspects of this behavior that point to intelligent design.
First, it is based on the general engineering concept of a feedback control system. Such systems use the output of a system to make adjustments to a control mechanism and maintain a desired setting. A common example is the temperature control of heating and air conditioning systems. An analogy in biology is homeostasis, which uses negative feedback, and is designed to maintain a constant body temperature.
A second aspect of design is the algorithm used to implement the specific control mechanism. Suen and Navlakha describe the system as "multiplicative-increase multiplicative-decrease" (MIMD). The MIMD closed-loop system is a hybrid combination of positive and negative feedback: positive feedback multiplies the response upward by a constant factor, while negative feedback scales it down by a constant factor. The purpose relates to the challenge of optimizing ant foraging. As the paper explains:
If foraging rates exceed the rate at which food becomes available, then many ants would return “empty-handed,” resulting in little or no net gain in colony resources. If foraging rates are lower than the food availability rate, then seeds would be left in the environment uncollected, meaning the seeds would either be lost to other colonies or be removed by wind and rain.
The authors found that positive feedback systems are "used to achieve multiple goals, including efficient allocation of available resources, the fair or competitive splitting of those resources, minimization of response latency, and the ability to detect feedback failures." However, positive-feedback control systems are susceptible to instability (think of the annoying screech when there is feedback into microphones in a sound system). Therefore, a challenge for MIMD systems is to minimize instability.
In this application, when foraging times are short, the feedback is positive, resulting in a faster increase in the number of foragers. When foraging times are longer, the feedback is negative, resulting in a reduction in the number of foragers. A mathematical model of the behavior has confirmed that the control algorithm is largely optimized. (See Prabhakar et al., “The Regulation of Ant Colony Foraging Activity without Spatial Information,” PLOS Computational Biology, 2012.) As I describe in my recent book, Animal Algorithms, the harvester ant algorithm is just one example of behavior algorithms that ants and other social insects employ.
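The control scheme described above can be sketched in Python. This is a toy MIMD controller with constants and a food-availability model chosen purely for illustration, not the parameters or model from Suen and Navlakha's paper:

```python
# Toy multiplicative-increase multiplicative-decrease (MIMD) controller
# for forager allocation. Short search times (plentiful food) act as
# positive feedback and multiply the number of outgoing foragers;
# long search times act as negative feedback and scale it down.
# All constants and the search-time model are illustrative assumptions.

INCREASE = 1.2     # multiplicative factor on positive feedback
DECREASE = 0.8     # multiplicative factor on negative feedback
THRESHOLD = 10.0   # search time separating "short" from "long"

def update_foragers(foragers, avg_search_time):
    """One MIMD control step based on the latest average search time."""
    if avg_search_time < THRESHOLD:
        return foragers * INCREASE    # positive feedback: send more out
    return foragers * DECREASE        # negative feedback: pull back

def search_time(food_rate, foragers):
    # Toy model: search takes longer as foragers compete for the
    # same supply of seeds.
    return 5.0 * foragers / food_rate

foragers = 20.0
food_rate = 40.0   # seeds becoming available per unit time
for _ in range(50):
    t = search_time(food_rate, foragers)
    foragers = update_foragers(foragers, t)

# The forager count settles into a band around the level the food
# supply can support, rather than growing or collapsing without bound.
```

The same multiplicative family of rules appears in Internet congestion control: TCP's classic scheme is the additive-increase/multiplicative-decrease (AIMD) variant, which trades some responsiveness for stability.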
Suen and Navlakha point out that the mechanism is similar to that employed to regulate traffic on the Internet. In the latter context, there are billions of “agents” continuously transmitting data. Algorithms are employed to control and optimize traffic flow. The challenge for Internet operations is to maximize capacity and allow for relatively equal access for users. Obviously, Internet network control is designed by intelligent engineers. In contrast, the harvester ant behavior is carried out by individuals without any central control mechanism.
A third feature indicating design is the physical mechanism used by the ants to determine how long returning foragers have been out. When ants forage for food, molecules called cuticular hydrocarbons change based on the amount of time spent foraging. This is due to the difference in temperature and humidity outside of the nest. As the ants return to the entrance of the nest, there are interactions between the returning and the outgoing ants via their antennae. These interactions enable detection of the hydrocarbons, which provide a mechanism to enable outgoing ants to determine the amount of time that returning ants spent foraging.
These three elements of harvester ant behavior (feedback control, mathematical algorithm, and physical sensors) present a severe challenge for the evolutionary paradigm. From a Darwinian perspective, they must have arisen through a combination of random mutations and natural selection. A much more plausible explanation is that they are evidence of intelligent design.
Hosea 11:9 KJV: "I will not execute the fierceness of mine anger, I will not return to destroy Ephraim: for I am God, and not man; the Holy One in the midst of thee: and I will not enter into the city."
Having rejected the immutability of the most fundamental binary of all (i.e., that between Creator and creature) with their nonsensical God-man hypothesis, why are so many of Christendom's clerics puzzled that many of their flock find no issue with rejecting the immutability of the far less fundamental gender binary?
You say that Darwinism invokes the free-lunch fallacy, defies mathematical falsification, and furthermore is a clear violation of Occam's razor? Tell us about it, Trinitarian.
If God can become man, why can't the same sovereign power make it possible for any chosen creature to become God?
I mean, if God can be three and yet one with no contradiction, he can be nine and yet three with no contradiction. Don't believe me? Consider:
Revelation 1:4, 5 NASB: "John to the seven churches that are in Asia: Grace and peace to you from Him who is and who was and who is to come, and from the seven spirits who are before His throne, and from Jesus Christ..." That makes a total of nine members of the multipersonal Godhead revealed in scripture, yet there is no principle in Christendom's philosophy that can be invoked to limit it to this figure. That's the thing about rejecting common sense as a principle: once you are off the reservation, all bets are off.
John 11:34, 35 KJV: "And said, Where have ye laid him? They said unto him, Lord, come and see. Jesus wept."
Why these tears for a saint who had finally received his reward? If Jesus and his followers honestly believed that Lazarus was in heaven, joyfully cavorting with the angels and saints in the presence of JEHOVAH God himself, would they not have responded quite differently to the news of his departure from this life?
John 11:24 KJV: "Martha saith unto him, I know that he shall rise again in the resurrection at the last day." Note Martha's actual hope for her brother, though.
Where would she have gotten such an idea? From her Lord, perhaps?
John 6:39 KJV: "And this is the Father's will which hath sent me, that of all which he hath given me I should lose nothing, but should raise it up again at the LAST DAY."
No one goes to heaven when they die, including Jesus himself. John 20:17 KJV: "Jesus saith unto her, Touch me not; for I am not yet ascended to my Father: but go to my brethren, and say unto them, I ascend unto my Father, and your Father; and to my God, and your God."
Acts 2:31 KJV: "He seeing this before spake of the resurrection of Christ, that his soul was not left in hell, neither his flesh did see corruption." Thus, like everyone else, Jesus went to hell (Sheol) when he died. His hope was his God and Father, just like the rest of us. Hebrews 5:7 KJV: "Who in the days of his flesh, when he had offered up prayers and supplications with strong crying and tears unto him that was able to save him from death, and was heard in that he feared;"
John 11:34 KJV: "And said, Where have ye laid him? ..." Note, please, that our Lord did not ask where have you laid his body, but where have you laid HIM: third person singular, referring to the person. Obviously Lazarus was not in heaven. How could it be regarded as a kindness to recall anyone from the joy of heaven to the trials of this present age? Reject the mental contortions necessary to believe Christendom's falsehoods.