
Friday, 31 August 2018

On the law of noncontradiction.

IV. The Law of Excluded Middle
       One logical law that is easy to accept is the law of non-contradiction. This law can be expressed by the propositional formula ¬(p ∧ ¬p). Breaking the sentence down a little makes it easier to understand. p ∧ ¬p means that p is both true and false, which is a contradiction. So, negating this statement means that there can be no contradictions (hence the name of the law). In other words, the law of non-contradiction tells us that a statement cannot be both true and false at the same time. This law is relatively uncontroversial, though there have been those who believe that it may fail in certain special cases. However, it does lead us to a logical principle that has historically been more controversial: the law of excluded middle.
       The law of excluded middle can be expressed by the propositional formula p ∨ ¬p. It means that a statement is either true or false. Think of it as claiming that there is no middle ground between being true and being false. Every statement has to be one or the other. That’s why it’s called the law of excluded middle: it excludes a middle ground between truth and falsity. So while the law of non-contradiction tells us that no statement can be both true and false, the law of excluded middle tells us that every statement must be one or the other. Now, we can get to this law by considering what it means for the law of non-contradiction to be true. For the law of non-contradiction to be true, ¬(p ∧ ¬p) must be true. This means p ∧ ¬p must be false. Now, we must refer back to the truth table definition for a conjunction. What does it take for p ∧ ¬p to be false? At least one of the conjuncts must be false. So, either p is false, or ¬p is false. Well, if p is false, then ¬p must be true. And if ¬p is false, then p must be true. So we are left with the disjunction p ∨ ¬p, which is exactly the formulation I gave of the law of excluded middle. So we have just derived the law of excluded middle from the law of non-contradiction.
       What we just did was convert the negation of a conjunction into a disjunction,
by considering what it means for the conjunction to fail. The rule that lets us do this is known as De Morgan’s rule, after Augustus De Morgan. Formally speaking, it tells us that statements of the following two forms are equivalent: ¬(p ∧ q) and ¬p ∨ ¬q. If you substitute ¬p for q in the first formula, you will have the law of non-contradiction, so you might want to try doing the derivation yourself. You will, however, need the rule that tells us that p is equivalent to ¬¬p. The point of this exercise was to show that it is possible to derive the law of excluded middle from the law of non-contradiction. However, it is also possible to convince yourself of the truth of the law of excluded middle without the law of non-contradiction.
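       For readers who want the exercise spelled out, the derivation runs as follows (in the essay’s own notation):

¬(p ∧ ¬p)
≡ ¬p ∨ ¬¬p    (De Morgan’s rule, substituting ¬p for q)
≡ ¬p ∨ p      (double negation: ¬¬p is equivalent to p)
≡ p ∨ ¬p      (reordering the disjuncts)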
       We can show using the method of truth tables that the disjunctive statement p ∨ ¬p is always true. As a point of terminology, a statement that is always true, irrespective of the truth values of its components, is called a tautology. p ∨ ¬p is a tautology, since no matter what the truth value of p is, p ∨ ¬p is always true. We can see this illustrated in the truth table below:

p     ¬p    p ∨ ¬p
T     F     T
F     T     T
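       The same check can be done mechanically. Here is a minimal Python sketch (mine, not the essay’s) that enumerates the rows of the table:

# Enumerate every truth value p can take and confirm the disjunction holds.
for p in (True, False):
    assert p or not p      # true in every row, so p ∨ ¬p is a tautology
print("p ∨ ¬p holds for every truth value of p")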
       You can try constructing a similar truth table to show that the law of non-contradiction is also a tautology. Its truth table is a bit more complicated, though. However, since the law of excluded middle is a tautology, it should hold no matter what the truth value of p is. In fact, it should be true no matter what statement we decide p should represent. So the law of excluded middle tells us that every statement whatsoever must be either true or false. At first, this might not seem like a very problematic claim. But before getting too comfortable with this idea, we might want to consider Bertrand Russell’s famous example: “The present King of France is bald.” Since the law of excluded middle tells us that every statement is either true or false, the sentence “The present King of France is bald” must be either true or false. Which is it?  
       Since there is no present King of France, it would seem quite unusual to claim that this sentence is true. But if we accept the law of excluded middle, this leaves us only one option - namely, to claim that it is false. Now, at this point, we might choose to reject the law of excluded middle altogether, or contend that it simply does not hold in some cases. This is an interesting option to consider, but then we might need to consider why the method of constructing truth tables tells us that the law of excluded middle holds, if it actually doesn’t. We would also have to consider why it is derivable from the principle of non-contradiction. After all, this sentence doesn’t pose a problem for the law of non-contradiction, since it’s not both true and false. So we’ll ignore this option for now.
       Returning to the problem at hand, we must now consider the question of what it means for the sentence “The present King of France is bald” to be false. Perhaps it means “The present King of France is not bald.” But that would imply that there is a present King of France, and he is not bald. This is not a conclusion we want to reach. Russell, in his 1905 paper “On Denoting,” presented his own solution to this problem, which comes in the form of a theory of definite descriptions. Under this theory, we can think of there being a hidden conjunctive structure in the sentence “The present King of France is bald.” What the sentence really says is that there is a present King of France, and he is bald. So the fact that there is no present King of France implies that this sentence is false, and we have the solution we need.
       Russell’s solution clearly suggests that we can’t just extract the logical structure of a sentence from its grammatical structure. We have to take other things into account. If you’re interested in issues about the relationship between logic and language, you might want to take a class in philosophy of language. The other essay in this section, entitled “Logic and Natural Language”, covers some other issues in this area.    
       One method of proof that comes naturally from the law of excluded middle is a proof by contradiction, or reductio ad absurdum. In a proof by contradiction, we assume the negation of a statement and proceed to prove that the assumption leads us to a contradiction. A reductio ad absurdum sometimes shows that the assumption leads to an absurd conclusion which is not necessarily a contradiction. In both cases, the unsatisfactory result of negating our statement leads us to conclude that our statement is, in fact, true. How does this follow from the law of excluded middle? The law of excluded middle tells us that there are only two possibilities with respect to a statement p. Either p is true, or ¬p is true. In showing that the assumption of ¬p leads us to a contradictory conclusion, we eliminate the possibility that ¬p is true. So we are then forced to conclude that p is true, since the law of excluded middle is supposed to hold for any statement whatsoever.    
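       To make the shape of such an argument concrete, here is a standard textbook example (my illustration, not the essay’s). Claim: there is no largest natural number. Assume the negation: some natural number N is the largest. But N + 1 is also a natural number, and N + 1 > N, which contradicts the assumption that N is largest. The assumption leads to a contradiction, so by the law of excluded middle we may conclude the original claim: there is no largest natural number.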
       Now, I’ve been a bit flippant in talking about statements. Statements can be about a lot of different things. The above discussion illustrated a problem with statements about things that don’t actually exist. I’m sure most people would agree that the designation “the present King of France” refers to something that doesn’t exist. But what about situations where it’s not so certain? One of the main metaphysical questions in the philosophy of mathematics is the question of whether or not mathematical objects actually exist. Think about the question of whether numbers exist. If they do exist, then what are they? After all, they’re not concrete things that we can reach out and touch. But if they don’t exist, then what’s going on in math?    
       Metaphysical worries have motivated certain people to argue that proofs by contradiction are not legitimate proofs in mathematics. Proponents of intuitionism and constructivism in mathematics place a significant emphasis on the construction of mathematical objects. One way to characterize this position is that in order to show that a mathematical object exists, it is necessary to construct it, or at the very least, provide a method for its construction. This is their answer to the metaphysical question. Suppose we had a mathematical proof in which we assumed an object did not exist, and proved that our assumption led us to a contradiction. For an intuitionist or a constructivist, this proof would not be a sufficient demonstration that the object does exist. A sufficient demonstration would have to involve the construction of the object.
       Even if questions about existence get too complicated, we can still ask the question “What mathematical objects can we legitimately talk about?” The intuitionist answer is that we can talk about those mathematical objects which we know can be constructed.    
       Simply speaking, intuitionistic logic is logic without the law of excluded middle. I have outlined some small part of the motivation behind developing such a system, but more details can be found in the work of L.E.J. Brouwer and Arend Heyting.

Sunday, 26 August 2018

Safe spaces: A danger to intellectual development? Pros and cons.

The amazing Randi on the academic mindset.

It's design all the way down/up?

From Micro to Macro Scales, Intelligent Design Is in the Details
Evolution News @DiscoveryCSC

From the molecular nanomachines within a tiny cell to the large-scale structure of the universe, design is everywhere to be found. Sometimes the best defense of intelligent design is just to ponder the details. Here are some new illustrations:

Fastest Creature Is a Cell

If you were asked what the fastest creature on earth is, would you guess a cheetah or a peregrine falcon? There’s an even faster critter you would probably never guess. It’s called Spirostomum ambiguum, and it’s just 4mm in size. This protozoan, Live Science says, can shorten its body by 60 percent in just milliseconds. How does it do it? Scientists “have no idea how the single-celled organism can move this fast without the muscle cells of larger creatures,” the article says. “And scientists have no clue how, regardless of how the contraction works, the little critter moves like this without wrecking all of its internal structures.” Saad Bhamla, a researcher at Georgia Tech, wants to find out. And in the process, he will gain design information that can be applied in human engineering:

“As engineers, we like to look at how nature has handled important challenges,” Bhamla said in the release. “We are always thinking about how to make these tiny things that we see zipping around in nature. If we can understand how they work, maybe the information can cross over to fill the gap for small robots that can move fast with little energy use.” 

Cells Do “The Wave”

Speaking of speed, most cells have another faster-than-physics trick. When a cell needs to commit hara-kiri, it performs an act something like “The Wave” in a baseball stadium. Researchers from Stanford Medicine, investigating programmed cell death or apoptosis, noticed wave-fronts of specialized destroying enzymes, called caspases, spreading throughout the cell faster than diffusion could explain.

Publishing in Science, they hypothesized that “trigger waves” accelerate the process of apoptosis, similar to how a “wave” of moving arms can travel rapidly around a stadium even though each person’s arms are not moving that fast. Another example is how one domino falling can trigger whole chains of dominoes across a gym floor. The mechanism presupposes that the elements, like the dominoes, are already set up in a finely-tuned way to respond appropriately. This may not be the only example of this new design principle; it may also explain how the immune system can respond so quickly.

“We have all this information on proteins and genes in all sorts of organisms, and we’re trying to understand what the recurring themes are,” Ferrell said. “We show that long-range communication can be accomplished by trigger waves, which depend on things like positive feedback loops, thresholds and spatial coupling mechanisms. These ingredients are present all over the place in biological regulation. Now we want to know where else trigger waves are found.”
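The quoted ingredients (positive feedback, a threshold, and spatial coupling) are enough to produce the effect, as a deliberately minimal Python caricature shows. This is my own illustration with made-up parameters, not the Stanford group’s model: a row of bistable switches, each latching permanently “on” once enough activity reaches it from its neighbors.

N = 60            # sites along the "cell"
THRESHOLD = 0.2   # activation threshold (arbitrary units)
COUPLING = 0.3    # fraction of each neighbor's activity felt per step

state = [0.0] * N
state[0] = 1.0    # trigger the wave at one end

for step in range(5):
    new = list(state)
    for i in range(N):
        drive = state[i]
        drive += COUPLING * (state[i - 1] if i > 0 else 0.0)
        drive += COUPLING * (state[i + 1] if i < N - 1 else 0.0)
        if drive > THRESHOLD:
            new[i] = 1.0          # positive feedback: latch fully "on"
    state = new
    print(step, sum(1 for s in state if s == 1.0))  # front size: 2, 3, 4, 5, 6

The activated front advances one site per step, a constant speed set by the reaction itself; a purely diffusive signal would spread only like the square root of time, which is the distinction the researchers invoke.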

The Complicated Ballet

Just organizing a chromosome is a mind-boggling wonder. But what do enzymes do when they need to find a spot on DNA that is constantly in motion? It’s enough to make your head spin. Scientists at the University of Texas at Austin describe it in familiar terms:

Thirumalai suggests thinking of DNA like a book with a recipe for a human, where the piece of information you need is on page 264. Reading the code is easy. But we now understand that the book is moving through time and space. That can make it harder to find page 264.

Yes, and the reader might be at a distant part of the nucleus, too. The challenge is not just academic. Things can go terribly wrong if the reader and book don’t meet up properly. They call what is going on a “complicated ballet.”

“Rather than the structure, we chose to look at the dynamics to figure out not only how this huge amount of genetic information is packaged, but also how the various loci move,” said Dave Thirumalai, chair of UT Austin’s chemistry department. “We learned it is not just the genetic code you have to worry about. If the timing of the movement is off, you could end up with functional aberrations.”

Strong Succulent Seeds

The seed coats of some plants, like succulents and grasses, have an odd architecture at the microscopic level. Researchers at the University of New Hampshire, “inspired by elements found in nature,” noticed the wavy-like zigzags in the seed coats and dreamed of applications that need lightweight materials that are strong but not brittle.

The results, published in the journal Advanced Materials, show that the waviness of the mosaic-like tiled structures of the seed coat, called sutural tessellations, plays a key role in determining the mechanical response. Generally, the wavier it is, the more an applied loads [sic] can effectively transit from the soft wavy interface to the hard phase, and therefore both overall strength and toughness can simultaneously be increased.

Researchers say that the design principles described show a promising approach for increasing the mechanical performance of tiled composites of man-made materials. Since the overall mechanical properties of the prototypes could be tuned over a very large range by simply varying the waviness of the mosaic-like structures, they believe it can provide a roadmap for the development of new functionally graded composites that could be used in protection, as well as energy absorption and dissipation.

Small High Flyers

You may remember the episode in Flight about Arctic terns, whose epic flights were tracked by loggers. Another study at Lund University found that even smaller birds fly up to 4,000 meters (over 13,000 feet) high on their migrations to Africa. Only two individuals from two species were tracked, but the researchers believe some of the birds fly even higher on the return flight to Sweden. It’s a mystery how they can adjust their metabolism to such extreme altitude, thin air, low pressure, and low temperature conditions.

Don’t Look for Habitable Planets Here

The centers of galaxies, we learned from The Privileged Planet, are not good places to look for life. Cross off another type of location now: the centers of globular clusters. An astronomer at the University of California, Riverside, studied the large Omega Centauri cluster hopefully, but concluded that “Close encounters between stars in the Milky Way’s largest globular cluster leave little room for habitable planetary systems.” The core of the cluster has mostly red dwarfs, which have their own habitability issues to begin with. Then, Stephen Kane calculated that interactions between the closely-associated stars in the cluster would occur too frequently for comfort. His colleague Sarah Deveny says, “The rate at which stars gravitationally interact with each other would be too high to harbor stable habitable planets.”

Solar Probe Launches

The only habitable planet we know about so far is the earth. Surprisingly, there is still a lot about our own star, the sun, that astronomers do not understand. A new mission is going to fly to the sun to solve some of its mysteries, Space.com reports, but like the old joke says, don’t worry: it’s going at night. Named the Parker Solar Probe after 91-year-old Eugene Parker, who discovered the solar wind in 1958, the spacecraft carries a specially designed heat shield to protect its instruments. The probe will taste some of the material in the solar corona to try to figure out why the corona is much hotter than the surface, the photosphere. See Phys.org to read about some of the mission’s goals.

Speaking of the solar wind, charged particles from the sun would fry any life on the earth were it not for our magnetic field that captures the charged particles and funnels them toward the poles. Word has it that Illustra Media is working on a beautiful new short film about this, explaining how the charged particles collide with the upper atmosphere, producing the beautiful northern and southern lights — giving us an aesthetic natural wonder as well as planetary protection.

Son of God or son of the apes?

No, We Are Not “Beasts”
Wesley J. Smith

The New York Times is fond of running “big idea” opinion essays claiming that humans are just another animal in the forest — and, sometimes, that plants are persons too.

This past Sunday’s example involved a writer, Maxim Loskutoff, recounting the time he and his girlfriend were threatened by a grizzly bear while hiking in Montana — a terrifying experience that taught him a lesson. From “The Beast in Me”:

It was a strange epiphany. To be human today is to deny our animal nature, though it’s always there, as the earth remains round beneath our feet even when it feels flat. I had always been an animal, and would always be one, but it wasn’t until I was prey, my own fur standing on end and certain base-level decisions being made in milliseconds (in a part of my mind that often takes 10 minutes to choose toothpaste in the grocery store), that the meat-and-bone reality settled over me. I was smaller and slower than the bear. My claws were no match for hers. And almost every part of me was edible.

Flies, Oysters, and Plankton

Of course we are animals biologically. But so what? Flies, oysters, and most plankton are too. That identifier — in the biological sense — does not have significant import outside of the biological sciences.

But the human/animal dichotomy has a much deeper meaning. Morally, we are a species apart. We are unique, exceptional.

Loskutoff’s life matters more than the threatening grizzly bear’s — which was why a ranger with a very big gun ran up to save the couple, willing to kill the bear if that proved necessary. If the writer was just “prey,” why not let natural selection take its course?

Watch Here for Deep Thinking

And here comes the deep thinking part:

Of course there are aspects of our communal society — caring for the old, the domestication of livestock, the cultivation of crops — that link us to only a few other species, and other aspects, such as the written word, that link us to none as yet discovered, but in no place but our own minds have we truly transcended our animal brethren….As the anthropologist Clifford Geertz famously said, “Man is an animal suspended in webs of significance he himself has spun.”

“Webs of Significance”

That is ridiculous on its face, as we are the only species in the known universe that spins “webs of significance.” Only we create meaning and purpose, from which springs moral agency — the knowledge that there is such a thing as right and wrong, and that we should do “right” — another aspect of human exceptionalism. No animal (in the moral sense of the term) does any of that.

Loskutoff’s conclusion is predictable for writing of this genre:

Yet there is something of the experience with the bear that remains inside me, a gift from my moment of pure terror. It’s the knowledge of my animal self. That instinctive, frightened, clear-eyed creature beneath my clothes. And it brought with it the reassuring sense of being part of the natural world, rather than separated from it, as we so often feel ourselves to be. My humanity, one cell in the great, breathing locomotion spreading from sunlight to leaves to root stems to bugs to birds to bears.

I reject that we are merely “one cell in the great, breathing locomotion” of the rest of life on this planet. We are both part of the natural world and self-separated and intentionally apart from it — to the point that we are able to substantially mold nature to meet our needs and desires.

And with that exceptional moral status comes not only our unique value, but responsibilities to (among others) care properly for the environment and to treat animals humanely — duties that arise simply and merely because we are human.

Animals, in contrast, are amoral. They owe us and each other nothing. After all, that bear would have done nothing “wrong” if she had torn Loskutoff apart for dinner, no matter the agony caused. She would have just been acting like a hungry bear.

That’s a distinction with a world of difference.

The gatekeepers are at it again.

So, Who Is Doing “Pseudoscience”?
Granville Sewell

A new book from MIT Press, Pseudoscience: The Conspiracy Against Science, includes a chapter by Adam Marcus and Ivan Oransky, founders of the website Retraction Watch. In “Pseudoscience, Coming to a Peer-Reviewed Journal Near You,” I found my own name mentioned. They write:

Although one might assume that journals would hold a strong hand when it comes to ridding themselves of bogus papers, that’s not always the case. In 2011, Elsevier’s Applied Mathematics Letters retracted a paper by Granville Sewell of the University of Texas, El Paso, that questioned the validity of the second law of thermodynamics — a curious position for an article in a mathematics journal, but not so curious for someone like Sewell, who apparently favors intelligent design theories over Darwinian natural selection.

Did I really “question the validity of the second law”?

Accepted and Withdrawn

Well, let’s look at the abstract of the accepted but withdrawn-at-the-last-minute Applied Mathematics Letters (AML) article, “A Second Look at the Second Law”:

It is commonly argued that the spectacular increase in order which has occurred on Earth does not violate the second law of thermodynamics because the Earth is an open system, and anything can happen in an open system as long as the entropy increases outside the system compensate the entropy decreases inside the system. However, if we define “X-entropy” to be the entropy associated with any diffusing component X (for example, X might be heat), and, since entropy measures disorder, “X-order” to be the negative of X-entropy, a closer look at the equations for entropy change shows that they not only say that the X-order cannot increase in a closed system, but that they also say that in an open system the X-order cannot increase faster than it is imported through the boundary. Thus the equations for entropy change do not support the illogical “compensation” idea; instead, they illustrate the tautology that “if an increase in order is extremely improbable when a system is closed, it is still extremely improbable when the system is open, unless something is entering which makes it not extremely improbable.” Thus, unless we are willing to argue that the influx of solar energy into the Earth makes the appearance of spaceships, computers and the Internet not extremely improbable, we have to conclude that the second law has in fact been violated here.
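The abstract’s claim can be restated compactly in the standard bookkeeping of entropy balances (my paraphrase for readers; the symbols S_X, O_X, and V are mine, not the paper’s). For a diffusing quantity X in a region V,

dS_X/dt = (production of X-entropy inside V, which is ≥ 0) − (flux of X-entropy outward through the boundary of V),

so defining X-order as O_X = −S_X gives

dO_X/dt ≤ (rate at which X-order is imported through the boundary).

In a closed system the boundary term vanishes and X-order cannot increase at all; in an open system it can increase, but no faster than it comes in through the boundary.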

In Section 3, I wrote:

The second law of thermodynamics is all about probability; it uses probability at the microscopic level to predict macroscopic change. Carbon distributes itself more and more uniformly in an isolated solid because that is what the laws of probability predict when diffusion alone is operative. Thus the second law predicts that natural (unintelligent) causes will not do macroscopically describable things which are extremely improbable from the microscopic point of view. The reason natural forces can turn a computer or a spaceship into rubble and not vice versa is probability: of all the possible arrangements atoms could take, only a very small percentage could add, subtract, multiply and divide real numbers, or fly astronauts to the moon and back safely…But it is not true that the laws of probability only apply to closed systems: if a system is open, you just have to take into account what is crossing the boundary when deciding what is extremely improbable and what is not.

Then, in my conclusion:

Of course, one can still argue that the spectacular increase in order seen on Earth does not violate the second law because what has happened here is not really extremely improbable… And perhaps it only seems extremely improbable, but really is not, that, under the right conditions, the influx of stellar energy into a planet could cause atoms to rearrange themselves into nuclear power plants and spaceships and digital computers. But one would think that at least this would be considered an open question, and those who argue that it really is extremely improbable, and thus contrary to the basic principle underlying the second law of thermodynamics, would be given a measure of respect, and taken seriously by their colleagues, but we are not.

Even if, as Marcus and Oransky believe, intelligent design were “pseudoscience,” my AML paper was still not pseudoscience. Why? Because it did not mention or promote intelligent design, and it did not question the second law, only the absurd compensation argument, which is always used to avoid the issue of probability when discussing the second law and evolution. But these authors apparently feel that it is pseudoscience to force Darwinists to address the issue of probability when defending their theory against the second law.

A Published Apology

Marcus and Oransky continue:

The article was retracted, according to the notice, “because the Editor-in-Chief subsequently concluded that the content was more philosophical than mathematical and, as such, not appropriate for a technical mathematics journal such as Applied Mathematics Letters.” Beyond the financial remuneration, the real value of the settlement for Sewell was the ability to say — with a straight face — that the paper was not retracted because it was wrong. Such stamps of approval are, in fact, why some of those who engage in pseudoscience want their work to appear in peer-reviewed journals.

Well, whether the article was appropriate for AML or not is debatable, but it was reviewed and accepted, then withdrawn at the last minute, as reported here. And since Elsevier’s guidelines state that a paper can only be withdrawn after acceptance because of major flaws or misconduct, yes, I wanted people to know that Elsevier did not follow its own guidelines, and that the paper was not retracted because major flaws were found, and that is exactly what the published apology acknowledged. Marcus and Oransky omit the first part of the sentence they quote from the apology, which states that the article was withdrawn “not because of any errors or technical problems found by the reviewers or editors.”


They conclude:

And it means that the gatekeepers of science — peer reviewers, journal editors, and publishers — need always be vigilant for the sort of “not even wrong” work that pseudoscience has to offer. Online availability of scholarly literature means that more such papers come to the attention of readers, and there’s no question there are more lurking. Be vigilant. Be very, very vigilant.

In a peer-reviewed 2017 Physics Essays paper, “On ‘Compensating’ Entropy Decreases,” I again criticized the widely used compensation argument, and again I did not question the validity of the second law or explicitly promote intelligent design. Here were my conclusions in that paper:

If Darwin was right, then evolution does not violate the second law because, thanks to natural selection of random mutations, and to the influx of stellar energy, it is not really impossibly improbable that advanced civilizations could spontaneously develop on barren, Earth-like planets. Getting rid of the compensation argument would not change that; what it might change is, maybe science journals and physics texts will no longer say, sure, evolution is astronomically improbable, but there is no conflict with the second law because the Earth is an open system, and things are happening elsewhere which, if reversed, would be even more improbable.

An Unfair Characterization?

And if you think this characterization of the compensation argument is unfair, read the second page (71) of the Physics Essays article and you will see that the American Journal of Physics articles cited there are very explicit in making exactly the argument that evolution is extremely improbable but there is no conflict with the second law because the Earth is an open system and things are happening outside which, if reversed, would be even more improbable. As I point out there, one can make exactly the same argument to say that a tornado running backward, turning rubble into houses and cars, would likewise not violate the (generalized) second law.


Please do read this page (it is not hard to read, but here is an even simpler analysis of the compensation argument), and you will be astonished by how corrupt science can become when reviewers are “very, very vigilant” to protect consensus science from any opposing views. And you can decide for yourself who is promoting pseudoscience.

Convergence v. Darwin

The Real Problem With Convergence

Biology is replete with instances of convergence — repeated designs in distant species. Marsupials and placentals, for instance, are mammals with different reproductive designs (placentals have significant growth in the embryonic stage attached to the nutrient-rich placenta whereas marsupials have no placenta and experience significant development after birth) but otherwise with many similar species.

The marsupial flying phalanger and placental flying squirrel, for example, have distinctive similarities, including their coats that extend from the wrist to the ankle giving them the ability to glide long distances. But evolutionists must believe that these distinctive similarities evolved separately and independently because one is a marsupial and the other is a placental, and those two groups must have diverged much earlier in evolutionary history. Simply put, evolution’s random mutations must have duplicated dozens of designs in these two groups.

It is kind of like lightning striking twice, but for evolutionists — who already have accepted the idea that squirrels, and all other species for that matter, arose by chance mutations — it’s not difficult to believe. It simply happened twice rather than once (or several times, in the cases of a great many convergences).

What is often not understood, however, by evolutionists or their critics, is that convergence poses a completely different theoretical problem. Simply put, a fundamental evidence and motivation for evolution is the pattern of similarities and differences between the different species. According to this theory, the species fall into an evolutionary pattern with great precision. Species on the same branch in the evolutionary tree of life share a close relationship via common descent. Therefore, they share similarities with each other much more consistently than with species on other branches.

This is a very specific pattern, and it can be used to predict differences and similarities between species given a knowledge of where they are in the evolutionary tree.

Convergence violates this pattern. Convergence reveals striking similarities across different branches. This leaves evolutionists struggling to figure out how the proverbial lightning could strike twice, as illustrated in a recent symposium:

Does convergence primarily indicate adaptation or constraint? How often should convergence be expected? Are there general principles that would allow us to predict where and when and by what mechanisms convergent evolution should occur? What role does natural history play in advancing our understanding of general evolutionary principles?

It is not a good sign that in the 21st century evolutionists are still befuddled by convergence, which is rampant in biology, and how it could occur. This certainly is a problem for the theory.

But a more fundamental problem, which evolutionists have not reckoned with, is that convergence violates the evolutionary pattern. Regardless of adaptation versus constraint explanations, and any other mechanisms evolutionists can or will imagine, the basic fact remains: a fundamental evidence and prediction of evolution is falsified.

The species do not fall into the expected evolutionary pattern.

The failure of fundamental predictions — and this is a hard failure — is fatal for scientific theories. It leaves evolution not as a scientific theory but as an ad hoc exercise in storytelling. The species reveal the expected evolutionary pattern — except when they don’t. In those cases, they reveal some other pattern.

So regardless of where you position yourself in this debate, please understand that attempts to explain convergence under evolutionary theory, while important in normal science, do nothing to remedy the underlying theoretical problem, which is devastating.

Occam's razor v. Darwin II

A Suspicious Pattern of Deletions
Andrew Jones

Winston Ewert recently published a paper in BIO-Complexity suggesting that life is better explained by a dependency graph than by a phylogenetic tree. The study examines the presence or absence of gene-families in different species, showing that the average gene family would need to have been lost many times if common ancestry were true. Moreover, the pattern of these repeated losses exhibits a suspiciously better fit to another pattern: a dependency graph. 
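To see concretely why common ancestry must invoke repeated losses, here is a toy Dollo-parsimony count in Python. It is a sketch of the general idea only: the tree, the species, and the presence/absence data are hypothetical, and this is not Ewert’s code or data. Dollo parsimony assumes a gene family arises once; every subtree from which it is then entirely missing costs one independent loss event.

# Hypothetical tree of six species, written as nested tuples,
# and hypothetical presence/absence of one gene family.
TREE = ((("human", "mouse"), "chicken"), (("fly", "bee"), "nematode"))
PRESENT = {"human", "mouse", "fly", "bee"}   # absent in chicken, nematode

def count_losses(node):
    """Return (family_found_below, independent_losses_below)."""
    if isinstance(node, str):                # a leaf species
        return node in PRESENT, 0
    results = [count_losses(child) for child in node]
    found = [f for f, _ in results]
    losses = sum(n for _, n in results)
    if any(found):
        # the ancestor at this node carried the family, so each child
        # lineage lacking it entirely is one more independent loss
        losses += found.count(False)
    return any(found), losses

print(count_losses(TREE))   # (True, 2): two independent losses already

Scale the same count up to thousands of gene families and dozens of genomes, and the average family must be lost many times over; that is the anomaly the dependency-graph model is meant to explain.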

It turns out this pattern of nonrandom “deletions” has been noticed before, although it appears that the researchers involved didn’t realize they might be looking at a dependency graph. Way back in 2004, Austin Hughes and Robert Friedman published a paper titled “Shedding Genomic Ballast: Extensive Parallel Loss of Ancestral Gene Families in Animals.” (For more about Hughes, who passed away in 2015, see here in the online journal Inference.) From the Abstract:

An examination of the pattern of gene family loss in completely sequenced animal genomes revealed that the same gene families have been lost independently in different lineages to a far greater extent than expected if gene loss occurred at random.

The Nematode Genome 

They noted that different aspects of the nematode genome suggest it belongs in different places on the tree of life. They argue that presence or absence of genes could be better for inferring phylogenetic relationships than sequence similarities, and they found that this method produced the standard phylogenetic tree with 100 percent bootstrap support. Yet they also found that many genes were “lost” in parallel, in multiple branches of the tree. One criticism of Ewert’s work is that this phenomenon might be due to missing data: not all genes in all species have been catalogued. But Hughes and Friedman used five whole genomes, and so they could argue that the pattern is real, not an artifact. 

Hughes and Friedman also argue that horizontal gene transfer is a much less likely explanation, since it is rarely seen in animals; and it becomes less likely still given that, whether some gene families are “lost” or some are “extra,” the deviations from the tree are not distributed randomly.

Moreover, many of the gene families non-randomly “lost” were elements of the core machinery of the cell, including proteins involved in amino acid synthesis, nucleotide synthesis, and RNA-to-protein translation. Despite this, the researchers argued:

The fact that numerous gene families have been lost in parallel in different animal lineages suggests that these genes encode proteins with functions that have been repeatedly expendable over the evolution of animals.

Core functions that are also expendable? The researchers are implying that all the gene families were present in the common ancestor of all animals, that it had a massively bloated and inefficient metabolism, with multiple different pathways to do any particular synthesis, and then lost them over time because animals need to have an efficient metabolism. Okay, but why did it have all these extra pathways? And when are all the gene families supposed to have evolved? This shunts all evolutionary creative (complexity-building) events back to the biological Big Bang of the Cambrian explosion. 

Ewert’s hypothesis explains the same data more simply: there never was a bloated ancestor, and those genes weren’t lost so many times. The pattern isn’t best explained by any kind of tree. It is best explained by a dependency graph.


And even still yet more primeval tech v. Darwin.

Omega-3 Nutrition Pioneer Tells How He Saw Irreducible Complexity in Cells 40 Years Ago
Evolution News @DiscoveryCSC

On a new episode of ID the Future, Jorn Dyerberg, the Danish biologist and co-discoverer of the role of omega-3 fatty acids in human health and nutrition, talks with Brian Miller about finding irreducible complexity in cells forty years ago.

Download the podcast or listen to it here.

It wasn’t until he encountered ID researchers like Michael Behe that he gave it that name — but he saw how many enzymes and co-enzymes it took working together to make metabolism work in every living cell. And if neo-Darwinism is true, and these enzymes showed up one at a time, “And over these eons, the other enzymes would just be sitting there waiting for the next one to come.”

Saturday, 25 August 2018

And still yet more primal tech v. Darwin.

Enzymes Are Essential for Life; Did They Evolve?
Olen R. Brown

Editor’s note: We are delighted to welcome a new contributor, the esteemed microbiologist Olen R. Brown. Among other distinctions, Dr. Brown is Professor Emeritus at the University of Missouri. 

Darwinian evolution, even in its 21st-century form, fails the formidable task of explaining how the first enzyme arose. Evolution also fails to explain how the first enzyme was changed into the approximately 75,000 different enzymes estimated to exist in the human body or the 10 million enzymes that are thought to exist in all of Earth’s biota. Join me in a legitimate process in science, one made popular by Albert Einstein. It’s called a Gedanken — a thought experiment. Let us see if evolution meets the challenge of logic required if it is to explain how enzymes came to be. 


Enzymes have what seem to be near-miraculous abilities. They are catalysts that greatly accelerate reactions by providing an alternate reaction pathway with a much lower energy barrier. Thus, although they do not create new reactions, they greatly enhance the rate at which a particular substrate is changed into a particular product. Every chemical reaction in the cell that is essential to life is made possible by an enzyme. Richard Wolfenden has concluded that a particular enzyme required to make RNA and DNA enormously speeds up the process.[1] Without the enzyme, this reaction is so slow that it would take 78 million years before it happened by chance. Another enzyme, essential for making hemoglobin found in blood and the chlorophyll of green leaves, enormously accelerates an essential step required for this biosynthesis. Wolfenden explains that enzyme catalysis allows this step in biosynthesis to require only milliseconds but 2.3 billion years would be required without the enzyme. These enormous differences in rates are like comparing the diameter of a bacterial cell to the distance from the Earth to the Sun.
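To put the second comparison in numbers (a back-of-the-envelope ratio of my own, using only the figures quoted above): 2.3 billion years is roughly 2.3 × 10^9 yr × 3.15 × 10^7 s/yr ≈ 7 × 10^16 seconds, so doing in about a millisecond what would otherwise take 2.3 billion years is a rate enhancement of roughly 7 × 10^16 s ÷ 10^-3 s ≈ 7 × 10^19, nearly twenty orders of magnitude.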

Regenerating ATP

Think of the enormous number of different chemical reactions required for life. Now, focus on only one such reaction, the need to regenerate ATP — the energy source for all life processes. As Lawrence Krauss has written, “The average male human uses almost 420 pounds of ATP each day … to power his activities… there is less than about 50 grams of ATP in our bodies at any one time; that involves a lot of recycling… each molecule of ATP must be regenerated at least 4,000 times each day.”[2] This means that 7 × 10^18 molecules of ATP are generated per second. By way of comparison, there are estimated to be only 100 billion stars (1 × 10^11) in our galaxy, the Milky Way. To efficiently regenerate a molecule of ATP (recharging the cell’s battery) requires a specific enzyme. To create a molecule of ATP is even more complex. The citric acid cycle is only one important part of this process and it has eight enzymes. As the name “cycle” implies, these enzymes must function in sequence. The absence of any one enzyme stops the process. How the interdependent steps of a cycle can originate is left without explanation. Life in Darwin’s hopeful, warm, little pond is dead in the water.

Obviously, in seeking to explain how enzymes arose and diversified, the evolutionist must use the processes of evolution. It is proposed, and I concede it as possible, even logical, that the gene coding for an essential enzyme could be duplicated and the duplicated gene could be expressed as a mutation. The protein coded by this duplicate gene might catalyze a slightly different reaction. There are known examples, but they produce only slight differences in products or reaction mechanisms. Consequently, a mutation can introduce only very limited changes in a specific protein. This limits the scope of change to triviality compared to the scope required by evolution. 

“There Was a Little Girl…”

I am reminded of a poem. “There was a little girl, and she had a little curl, right in the middle of her forehead. When she was good, she was very, very good, and when she was bad she was horrid.”[3] The power of genetic mutation as a source of change for evolution is like the little girl in this poem. It is good (even very good) at explaining what it can explain — trivial changes — but horrid at explaining any changes needed for the evolution of species. Likewise, natural selection is ineffectual as an editor for evolution. (See “Gratification Deferred,”[4] in my book The Art and Science of Poisons.)

Thus, scientific evidence is entirely lacking for the notion that enzymes arose by chance. The idea is ludicrous. This is true even if a primeval solution contained all the twenty amino acids of proteins but no genes and no protein-synthesizing machines. Ah, but, you may say, the Nobel laureate George Wald has written:[5] “Most modern biologists, having reviewed with satisfaction the downfall of the spontaneous generation hypothesis, yet unwilling to accept the alternative belief in special creation, are left with nothing.” He also wrote (on the same page): “One has only to contemplate the magnitude of this task to concede that the spontaneous generation of a living organism is impossible. Yet here we are as a result, I believe, of spontaneous generation.” In the same article he wrote: “Time is in fact the hero of the plot… the ‘impossible’ becomes possible, the possible probable, and the probable virtually certain. One has only to wait: time performs the miracles.” To be fair, Wald puts the word “impossible” in quotation marks. One may believe this, but surely it is beyond the logical meanings of words and concepts — and Wald appeals to “miracles,” does he not?

Time for a Gedanken

With this quandary, should not today’s scientist be allowed a Gedanken? Formerly, from at least the time of Aristotle, it was generally thought that organic substances could be made only by living things — one definition of vitalism. Friedrich Wöhler’s laboratory synthesis of urea in 1828 was startling evidence incompatible with this idea. Perhaps a new look at the essential differences between life and non-life could be instructive. Not to be misunderstood, I do not mean to advocate for vitalism in its old sense, but simply to recognize the vast differences between non-living matter and a living cell in light of today’s knowledge of molecular biology. That a void exists in understanding this difference cannot be denied.

Astronomy, physics, and chemistry have laws that are useful for calculating and making predictions and, to a degree, even as explanations. The high school student can state the law of gravity and make calculations. The professor can do little more to explain this, or any, law of nature. Are not all natural laws, considered as explanations, only tautologies in the end? How is information content that is designed in a law made manifest in nature? Do laws disclose innate properties of energy, matter, and space-time? 

Scientists are permitted to propose laws that mathematically describe fundamental particles and their behaviors without being stigmatized as appealing to the supernatural. Why should this not be acceptable in biology? Let us ponder the origin of enzymes and begin a conversation about the laws required to permit complexification in life. A starting place is the admission that perhaps things that appear to be designed are, in fact, designed. How this design occurs is surely a subject for scientific investigation, no more to be disallowed than is the use of laws in physics, astronomy, and chemistry to guide mathematical understanding of complex outcomes.

Notes:
  1. Richard Wolfenden. “Without enzymes, biological reaction essential to life takes 2.3 billion years.”
  2. Lawrence M. Krauss. Atom, an Odyssey from the Big Bang to Life on Earth… and Beyond. 1st ed., Little, Brown and Company 2001.
  3. Henry Wadsworth Longfellow. “There Was a Little Girl,” Vol. I: Of Home: Of Friendship, 1904.
  4. Olen R. Brown. The Art and Science of Poisons. Chapter 3, “The Dawnsinger.” Bentham Science Publishers.
  5. George Wald. “The Origin of Life,” Scientific American, August 1954, pp 46, 48.

Sunday, 12 August 2018

A clash of Titans LXXIV.

Globalism: The bane of the working man? Pros and cons.

Social media: Democracy's friend? Pros and cons.

Yet more on Darwinism's non-functional crystal ball.

Failed Predictions Weekend: Hunter on Darwinian Evolution, the DNA Code, and More
Evolution News @DiscoveryCSC

We trust you are enjoying your weekend! On a classic episode of ID the Future, our Evolution News contributor Dr. Cornelius Hunter has more to say about his website Darwin’s Predictions, which critically examines 22 basic predictions of evolutionary theory.


In this second podcast of the series, Dr. Hunter discusses the uniqueness of the DNA code and differences in fundamental molecules. Listen to the episode or download it here.

There is no such thing as engineerless engineering.

Modern Software and Biological Organisms: Object-Oriented Design
Walter Myers III

In my last post, I discussed the problem with “bad design” arguments. I also offered a defense of design theory by demonstrating the exceeding intricacy and detail of the vertebrate eye as compared to any visual device crafted by even the brightest of human minds. In fact, when we examine the various kinds of eyes in higher animals, we see the same modern object-oriented software design principles that computer programmers use to build the applications we use every day. That includes the executing code of the browser that displays this article. Formal object-oriented programming, as a method used by humans, has only been around since the 1950s. Yet it is clearly represented in the biological world going back to the appearance of the first single-celled organisms 3.5 billion years ago (in the form of organelles such as the nucleus, ribosome, mitochondria, Golgi apparatus, etc.).

The Eye in Higher Animals

Let’s consider the eye, which is but one of many subsystems (along with the brain, heart, liver, lungs, etc.) in higher animals that coordinate their tasks to keep an organism alive. I discussed as much in a previous article comparing the code of modern computer operating systems to the DNA code that is executed to build and maintain biological organisms. In computer programming, it is a very bad practice to write code willy-nilly and expect it to perform a useful function. Highly complex computer programs are always specified by a set of requirements that outline the functionality, presentation, and user interaction of the program. Each of these is generally called a functional specification. (Modern agile software development techniques many teams use now are more of an evolutionary process, but still require working from a backlog of requirements through progressive short iterations.)

With respect to the eye in organisms such as chordates, molluscs, and arthropods, the functional requirements (or backlog, in agile terms) might, at a high level, look like this:

Collect available light
Regulate light intensity through a diaphragm
Focus light through lenses to form an image
Convert images into electrical signals
Transmit signals to the visual processing center of the brain
Process visual detail, building a representation of the surrounding environment
This set of specifications should not be taken lightly. To presume that blind, undirected processes can generate novel functionality solving a highly complex engineering problem, as we see above, is highly fanciful. These specifications call for an appropriate interface of the eye (in this case, the optic nerve) that connects it with the visual cortex. They call for a visual cortex that will process the images, which must then interface with a brain that can subsequently direct the whole of the organism to respond to its environment. 

Human designers can only crudely approximate any of this. Yet, as complex as the eye might be, it is one small but critical component of the entire system that makes up a higher biological organism. What’s more, in the design of complex systems, the set of requirements must map to an architecture that defines the high-level structure of the system in terms of its form and function, promoting reuse of component parts across different software projects. We see such component reuse in biological organisms, with different eye types as a potent example.

A Range of Eye Layouts

Scientists believe there are basically ten different eye layouts (designs) that occur in nature. From a computational perspective, we can view this generally as a three-level, object-oriented hierarchy that can describe animal eyes, as seen in the class diagram below.

At the top level, we have the Eye class which, for the sake of simplicity, is composed minimally of a collection of one or more photoreceptors and an optic nerve that transmits images to the visual cortex. (Note the filled diamond shape that represents composition in relationships between whole and part.) The Eye class itself is “abstract” in the sense that it cannot be instantiated itself, but provides the properties needed by all its subclasses that at some level will be instantiated as an eye of some given type. On the second level, we have the Simple and Compound eye types which have an inheritance relationship with the Eye class. This means these two eye types inherit all the properties of the abstract Eye base class, along with its necessary components. These classes are abstract as well. On the third level, we have class objects that inherit all the properties of either the Simple or Compound classes. At this level we still have abstract classes which will have all the properties or components necessary to exist in any given higher animal but require further differentiation at the species level to be properly instantiated.
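As a rough sketch, the hierarchy just described can be rendered in Python. The class and attribute names below are my own, chosen to mirror the described diagram; they are illustrative, not taken from any real library.

from abc import ABC, abstractmethod

class Photoreceptor:
    """Converts incoming light into an electrical signal."""

class OpticNerve:
    """Carries those signals on toward the visual cortex."""

class Eye(ABC):
    """Top level: every eye is composed of photoreceptors plus an optic
    nerve (the filled-diamond composition relation in the diagram)."""
    def __init__(self):
        self.photoreceptors = [Photoreceptor()]
        self.optic_nerve = OpticNerve()

    @abstractmethod
    def form_image(self, light):
        """Only a concrete species-level class says how images form."""

class SimpleEye(Eye, ABC):
    """Second level: single-aperture designs. Still abstract."""

class CompoundEye(Eye, ABC):
    """Second level: many-unit (ommatidial) designs. Still abstract."""

class RefractiveCorneaEye(SimpleEye, ABC):
    """Third level: still abstract until a species supplies the details."""

# Calling Eye() or SimpleEye() raises TypeError: the abstract classes
# cannot be instantiated, exactly as the text describes.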


Now let’s take a closer look at the Refractive Cornea eye layout, which is present in most mammals, birds, and reptiles. The Refractive Cornea class diagram below shows the major components of which it is composed: a lens, sclera, iris, cornea, uvea, and retina.


Humans and Falcons

What is important is not just the particular parts, but structurally how these parts are uniquely instantiated in animal species with refinements based on the requirements specific to the habitat in which they live. For example, even though humans and falcons have the same basic components of the refractive cornea eye type, structurally they are quite different, as seen here.

The requirements of a falcon’s refractive cornea eye type include that it be able to find its prey over great distances. This means that, as compared to humans, it needs a larger lens, more aqueous humor to nourish the cornea, a more convex retina, and a higher concentration of light-gathering cones. To aid high-speed hunting dives, which can surpass 200 miles per hour, the falcon eye also sports a translucent nictitating membrane (third eyelid) to clear any debris on dives and also keep the eye moist. At the component level of the refractive cornea, though humans and falcons share similar components in the abstract, there are significantly different architectural properties and functions that must be implemented by direction of the DNA code possessed by each organism.

So, let’s now look at the simplified class hierarchy for human and falcon eyes. At the level of instantiation, where we find the Human Eye and Falcon Eye classes below, they must implement the component interfaces of the Eye and Refractive Cornea specific to their species. In the case of the Falcon Eye class, it also aggregates the Nictitating Membrane class that is associated with it as a species (and other species of birds as well, such as the bald eagle).
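In the same hypothetical vein, the species level might look like this (again, names and property values are illustrative only; the abstract base is restated so the sketch stands alone):

from abc import ABC, abstractmethod

class NictitatingMembrane:
    """Translucent third eyelid: clears debris on dives, keeps the eye moist."""

class RefractiveCorneaEye(ABC):     # stands in for the class sketched earlier
    def __init__(self):
        # the components named in the text
        self.lens, self.sclera, self.iris = "lens", "sclera", "iris"
        self.cornea, self.uvea, self.retina = "cornea", "uvea", "retina"

    @abstractmethod
    def form_image(self, light): ...

class HumanEye(RefractiveCorneaEye):
    cone_density = "moderate"                 # illustrative values only
    def form_image(self, light):
        return f"human-acuity image of {light}"

class FalconEye(RefractiveCorneaEye):
    cone_density = "very high"                # more light-gathering cones
    def __init__(self):
        super().__init__()
        # aggregation, not composition: the membrane is a separate part
        # associated with this species (and other birds, such as eagles)
        self.third_eyelid = NictitatingMembrane()
    def form_image(self, light):
        return f"long-range, high-acuity image of {light}"

print(FalconEye().form_image("prey"))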


Convergent Evolution?

I’m certain Darwinian evolutionists will argue that this class diagram construction is simply one that comes after the fact, and only supports what is described as convergent evolution across species, where unrelated organisms evolve similar traits based on having to adapt to the habitats in which they find themselves. Indeed, evolutionary biologists believe the eye, which they admit is precisely engineered, evolved independently more than fifty times. 

Yet, when we compare highly complex computer software programs to biological organisms, even the most complex software programs designed by literally the best PhD minds on the planet pale in comparison with the simplest of biological organisms. To say that evolution, which is blind, undirected, and counts fully on fortuitous random mutations, will solve the same engineering problem multiple times through what is essentially mutational luck is unrealistic given statistically vanishing probabilities over the short timelines of even hundreds of millions of years. However, when we look at the various eye designs as discussed here, it is clear they each meet a set of specific requirements, following principles of object-oriented design we would expect from a master programmer.