
Wednesday, 11 May 2016

Genetic evidence for common ancestry? Or give us a miracle and we'll explain the rest.

Stunning Evidence for Common Ancestry? S. Joshua Swamidass on the Chimp-Human Divergence


As 'Just so' as it gets II

Rafting Monkeys "Fill a Gap" in Evolutionary Theory?


Tuesday, 10 May 2016

Darwinism vs. the real world XXIX

Calcium's Role in the Body -- and a Note on the Origin of This Series



Monday, 9 May 2016

Twenty years later and proto-life is more irreducibly complex than ever

Galloping Flagella and Cilia Railroads -- Getting Ready to Celebrate Twenty Years of Darwin's Black Box


Sunday, 8 May 2016

Putting the Darwinian debate in its place.

David Berlinski: Does Darwin Matter?


Pakistan: a country with an army or an army with a country? Pros and Cons.

Leviathan vs. Darwin

Crocodile eyes are fine-tuned for lurking
By Jonathan Webb

A new study reveals how crocodiles' eyes are fine-tuned for lurking at the water surface to watch for prey.
The "fovea", a patch of tightly packed receptors that delivers sharp vision, forms a horizontal streak instead of the usual circular spot.
This allows the animal to scan the shoreline without moving its head, according to Australian researchers.
They also found differences in the cone cells, which sense colours, between saltwater and freshwater crocs.
Published in the Journal of Experimental Biology, the findings suggest that although the beasts have very blurry vision underwater, they do use their eyes beneath the surface.

That inference rests on light conditions, which differ between salt and freshwater habitats - but only underwater - and the crocodiles' eyes show corresponding tweaks.
"There's generally more blue light in saltwater environments, and more red light in freshwater environments. Animals tend to adapt to this," explained Nicolas Nagloo, a PhD student at the University of Western Australia.
He and his colleagues studied eyeballs from juvenile "salties" and "freshies", shipped to the university from a crocodile farm in Broome.
When they measured the light absorbed by single photoreceptors in the retina, they found that those of the freshwater crocs were shifted towards longer, redder wavelengths compared with their saltwater cousins.

Finding this skewed sensitivity in crocodiles was unexpected, Mr Nagloo said, because the famous predators were only semi-aquatic and did their hunting, feeding and mating on land. "It's surprising because these guys can't actually focus underwater. [But] light sensitivity seems to be important to them," he told BBC News.
"That tells us there's potentially some aspect of their behaviour underwater that we're not aware of yet."
The team also studied the density of receptors across the crocodile retina. In this regard, the two species were more similar.
Overall, crocodile vision appears to be less precise than ours, achieving a clarity some six or seven times lower than the human eye. But their "foveal streak" is a striking adaptation that suits their lifestyle perfectly.
The fovea is a dent in the retina, containing a huge concentration of receptor cells. The indentation arises because other cells, which transmit visual information to the brain, are shifted to the sides. "Typically, the fovea is circular and located in the centre of the retina. It provides animals with an area of very high visual clarity, in a small area of their visual environment," Mr Nagloo said.
It is this small patch of high-resolution information that allows us, for example, to read; but we humans have to move our eyes around to drink in details across a scene.
"In the case of crocodiles... it's spread across the middle of the retina, and it gives them maximum clarity all along the visual horizon."
This arrangement reflects the predator's iconic ability to lurk with just its eyes above the water, waiting motionless for prey to wander too close to the river's edge. Other animals, particularly mammals like deer and rabbits that live in open spaces and themselves face predation, are known to possess a similar "visual streak". But that is a more subtle feature than the furrow-shaped fovea of the crocodile, Mr Nagloo said.
"A visual streak is just an elevated cell density, in an elongated shape. A fovea takes that to the extreme - the number of photoreceptors is so high that they have to move the transmitters away, to make room for them. So the wiring is different.
"I haven't seen any other animals with this kind of specialisation."

Darwinism and S.D.L algorithm II

How Did Birds Get Their Wings? Bacteria May Provide a Clue to the Genomic Basis of Evolutionary Innovation, Say Evolutionists
That evolution occurred is known to be a fact, but how evolution occurred is not known. In particular, we are ignorant of how evolutionary innovations arose. Of course biological novelties and innovations arose from a series of random chance events, but it is less than reassuring that we cannot provide more detail. How exactly did the most complex designs spontaneously arise? What mechanisms overcame, over and over, the astronomical entropy barriers, by sheer luck of the draw? As the new PLOS Genetics paper by Craig MacLean, Andreas Wagner, and coworkers begins, "Novel traits play a key role in evolution, but their origins remain poorly understood." Could it be that evolution is not actually a fact? No, not according to evolutionists. And this new paper claims to provide the basis for how the seemingly impossible became the mundane.

The paper begins by summarizing the many proposed genetic mechanisms for the evolution of biological innovations:

An evolutionary innovation is a new trait that allows organisms to exploit new ecological opportunities. Some popular examples of innovations include flight, flowers or tetrapod limbs [1,2]. Innovation has been proposed to arise through a wide variety of genetic mechanisms, including: domain shuffling [3], changes in regulation of gene expression [4], gene duplication and subsequent neofunctionalization [5,6], horizontal gene transfer [7,8] or gene fusion [9]. Although innovation is usually phenotypically conspicuous, the underlying genetic basis of innovation is often difficult to discern, because the genetic signature of evolutionary innovation erodes as populations and species diverge through time.

1. Mayr E. Animal Species and Evolution. Cambridge: MA: Harvard University Press; 1963.

2. Pigliucci M. What, if anything, is an evolutionary novelty? Philos Sci. 2008;75: 887–898. Available: http://philpapers.org/rec/PIGWIA

3. Patthy L. Genome evolution and the evolution of exon-shuffling—a review. Gene. 1999;238: 103–14. Available: http://www.ncbi.nlm.nih.gov/pubmed/10570989 pmid:10570989

4. True JR, Carroll SB. Gene co-option in physiological and morphological evolution. Annu Rev Cell Dev Biol. 2002;18: 53–80. doi: 10.1146/annurev.cellbio.18.020402.140619. pmid:12142278

5. Zhang J. Evolution by gene duplication: An update. Trends Ecol Evol. 2003;18: 292–298. doi: 10.1016/S0169-5347(03)00033-8.

6. Bergthorsson U, Andersson DI, Roth JR. Ohno’s dilemma: evolution of new genes under continuous selection. Proc Natl Acad Sci U S A. 2007;104: 17004–9. doi: 10.1073/pnas.0707158104. pmid:17942681

7. Boucher Y, Douady CJ, Papke RT, Walsh DA, Boudreau MER, Nesbø CL, et al. Lateral gene transfer and the origins of prokaryotic groups. Annu Rev Genet. 2003;37: 283–328. doi: 10.1146/annurev.genet.37.050503.084247. pmid:14616063

8. Wiedenbeck J, Cohan FM. Origins of bacterial diversity through horizontal genetic transfer and adaptation to new ecological niches. FEMS Microbiol Rev. 2011;35: 957–976. doi: 10.1111/j.1574-6976.2011.00292.x. pmid:21711367

9. Thomson TM, Lozano JJ, Loukili N, Carrió R, Serras F, Cormand B, et al. Fusion of the human gene for the polyubiquitination coeffector UEV1 with Kua, a newly identified gene. Genome Res. 2000;10: 1743–56. pmid:11076860 doi: 10.1101/gr.gr-1405r

The unspoken problem here is, as usual, serendipity. The various proposed genetic mechanisms for the evolution of biological innovations all imply an amazing bit of fortuitous luck. For random chance events just happened to create these various complicated structures and mechanisms (such as horizontal gene transfer, or protein domains and their shuffling) which then produced new evolutionary breakthroughs.

Evolution didn’t know what was coming. Evolution did not plan this out, it did not realize that horizontal gene transfer would lead the way to new biological worlds. The evolution of horizontal gene transfer would require a long sequence of random mutations, many of which would not provide any fitness advantage. And when the construction project was completed, and the first horizontal gene transfer capability was possible, there would be no immediate advantage.

This is because there would have been no genes to transfer. The mechanism works only when it is present in more than one neighboring cell. One cell gives, and another cell receives. By definition the mechanism involves multiple cells.

But it doesn’t stop there. Even if the first horizontal gene transfer capability was able to spread across a population, and even if it did provide a fitness advantage to the fortunate citizens, there would not be even a hint of the enormous world of biological innovations that had just been opened.

In other words, what this evolutionary narrative entails is monumental serendipity. Biological structures and mechanisms (horizontal gene transfer in this case, but it is the same story with the other hypotheses listed above) are supposed to have evolved as a consequence of a local, proximate fitness advantage: a bacterium could now have a gene it didn't have before.

But it just so happened that the new structures and mechanisms would also, as a free bonus, be just what was needed to produce all manner of biological innovations, far beyond helping a lowly bacterium increase its fecundity.

This is monumental serendipity.

The science contradicts the theory

Undaunted, the new paper finds that one of the other mechanisms, gene duplication and subsequent neofunctionalization, is a key enabler and pathway to biological innovations.

That conclusion resulted from what otherwise was a fine piece of research. The experimenters exposed different populations of Pseudomonas aeruginosa, a dangerous infectious bacterium, to 95 new sources of its favorite food: carbon.

The bacteria had to adjust to the new flavors of carbon, and they did so with various genetic changes, including mutations. In the most challenging cases (where the new carbon sources were most difficult for the bacteria to adjust to), the bacteria often produced mutations in genes involved in transcription and metabolism. And these mutations often occurred in genes that had multiple copies, so the mutation occurred in one copy while the other copy could continue its normal duties.

The problem is, these genetic duplicates were preexisting in the P. aeruginosa genome. This is yet another instance of serendipity.

Why? Because preexisting duplicates are not common. Only about 10% of the genes have duplicates lying around, and fortunately, the genes needed for adaptation (involving transcription and metabolism) just happened to have such duplicates.

Now there were a few instances of de novo gene duplication. That is, once the experiment began, and after the P. aeruginosa populations were exposed to the challenging diets, a total of six genes underwent duplication events. But in each and every case, the duplication events occurred repeatedly and independently, in different populations (for each of the 95 different carbon sources, the experimenters ran four parallel trials with independent populations).

This result indicates directed gene duplication. This is because it is highly unlikely that random, chance gene duplication events just happened to hit on the same gene in different populations. Here is an example calculation.

Let’s assume that in the course of the experiment, which ran for 30 days and about 140 generations of P. aeruginosa, some genes may undergo duplication events by chance. Next assume there is a particular gene that needs to be duplicated and modified in order for P. aeruginosa to adapt to the new food source. (Note that there may be several such genes, but as we shall see that will not affect the conclusion.) Given that there are four separate, independent trials, what is the probability that the gene will be duplicated in two or more of those trials?

Let P_dup be the probability that any given gene is duplicated in the course of the experiment. Our gene of interest may be duplicated in 0, 1, 2, 3, or all 4 of the trials. The binomial distribution gives the probability, P(k), of each of these outcomes. To answer our question (i.e., What is the probability that the gene will be duplicated in two or more of those trials?) we sum the binomial probabilities for k = 2, 3 and 4 duplications out of the four trials. In other words, we calculate P(2) + P(3) + P(4).

This will give us the probability of observing what was observed in the experiment (i.e., the duplication events occurred repeatedly and independently, in different populations, in all 6 cases where duplication events were observed).

Well, for a reasonable value of P_dup, such as 0.0001, the probability of observing multiple duplication events for any given food source (i.e., P(2) + P(3) + P(4)) is about 60 in one billion, or 6 × 10^-8. Even worse, the probability of observing this in all 6 cases where duplication events were observed is about 5 × 10^-44.

It isn’t going to happen.

For comparison, exceptionally high rates of gene duplication, in particular genomic regions of Salmonella typhimurium grown in a high-growth-rate medium, have been observed at about 0.001, and in rare cases slightly above 0.01.

If we go all out and set P_dup to an unrealistically high 0.1, the results are still unlikely. P(2) + P(3) + P(4) is about 0.05, and the probability of observing this in all 6 cases where duplication events were observed is about 2 × 10^-8.

In order to raise these probabilities to reasonable levels, such that what was observed in the experiment is actually likely to have occurred, we need to raise P_dup to much higher values. For example, for a P_dup of 0.67 (two-thirds probability), P(2) + P(3) + P(4) is about 0.89, and the probability of observing this in all 6 cases where duplication events were observed is about 0.5.
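These figures are easy to check with a short script. A minimal sketch, using the illustrative P_dup values assumed above (0.0001, 0.1 and 0.67 are assumptions of this argument, not measured quantities):

```python
from math import comb

def p_two_or_more(p_dup, n_trials=4):
    """Binomial probability that a given gene is duplicated in at
    least two of n independent trials: P(2) + P(3) + P(4)."""
    return sum(comb(n_trials, k) * p_dup**k * (1 - p_dup)**(n_trials - k)
               for k in range(2, n_trials + 1))

for p_dup in (0.0001, 0.1, 0.67):
    p = p_two_or_more(p_dup)
    # p**6: the same repeated-duplication pattern in all six genes
    print(f"P_dup = {p_dup}: one food source {p:.2g}, all six genes {p**6:.2g}")
```

Running it reproduces the numbers quoted here: roughly 6 × 10^-8 and 5 × 10^-44 for P_dup = 0.0001, about 0.05 and 2 × 10^-8 for 0.1, and about 0.89 and 0.5 for 0.67.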

But even this doesn’t work. For if we were to imagine unrealistically high P_dup values of 0.1 or higher, then massive numbers of duplication events would have been observed in the experiments.

But they weren’t.

Once again, the science contradicts the theory. Our a priori assumption that evolution is a fact, and that the P. aeruginosa adaptations to the new food sources were driven by random mutations, did not work. The theory led to astronomically low probabilities of the observed results.

What the observed gene duplications are consistent with is directed gene duplications. Just as mutations have been found to be directed in cases of environmental challenges, it appears that gene duplications may also be directed.

The paper’s premise, that biological innovations such as flowers and wings are analogous to bacteria adapting to new nutrient sources, is fallacious. But setting that aside, the experimental results do not make sense on evolution’s mechanism of random mutations and natural selection. Instead, the results indicate directed adaptation.
Posted by Cornelius Hunter 

Saturday, 7 May 2016

File under "well said" XXIV

“Wise men speak because they have something to say; fools because they have to say something.”
 Plato.

Design by chance and necessity S.T.O.M.Ped

Atheist biologist makes an excellent case for Intelligent Design
November 19, 2015 Posted by vjtorley under Intelligent Design


Matthew Cobb is a professor of zoology at the University of Manchester and a regular contributor over at Why Evolution Is True. Recently, while critiquing a cartoon from xkcd, he argued that our DNA is the mindless product of a series of historical accidents. But then he let the cat out of the bag, at the end of his post:

On a final note, in some cases, within this amazing noise, there are also astonishing examples of complexity which do indeed appear to be the result of optimisation – and they would boggle the mind of anyone, not just a cocky computer scientist in a hat. In Drosophila there is a gene called Dscam, which is involved in neuronal development and has four clusters of exons (bits of the gene that are expressed – hence exon – in contrast to the apparently inert introns).

Each of these exons can be read by the cell in twelve, forty-eight, thirty-three or two alternative ways. As a result, the single stretch of DNA that we call Dscam can encode 38,016 different proteins. (For the moment, this is the record number of alternative proteins produced by a single gene. I suspect there are many even more extreme examples.)
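The 38,016 figure quoted above is simply the product of the alternative readings at each of the four exon clusters, as a quick check confirms:

```python
from math import prod

# Alternative readings per Dscam exon cluster, as quoted above
dscam_cluster_options = [12, 48, 33, 2]
print(prod(dscam_cluster_options))  # prints 38016
```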

Cobb triumphantly concluded:

In other words, DNA is even more complicated than [xkcd cartoonist] Randall [Munroe] imagines – it is historical, messy, undesigned. And when occasionally it is optimised, the degree of complexity is mind-boggling. Biology is not quite impossible, it is just incredibly difficult!

But the damage was done. Even as he chided cartoonist Randall Munroe for claiming that DNA is subject to “the most aggressive optimisation process in the universe” and insisted that our genes are “a horrible, historical mess” consisting mostly of junk DNA, and that they are really the product of mindless tinkering rather than design, Cobb was forced to concede that amidst all this chaos, there were indeed some “astonishing examples of complexity which do indeed appear to be the result of optimisation” which “would boggle the mind of anyone, not just a cocky computer scientist in a hat.”

Intelligent Design supporters are often accused of appealing to something called an API: an Argument from Personal Incredulity. The acronym comes from Professor Richard Dawkins. The reasoning is supposed to go like this: I cannot imagine how complex structure X could have come about as a result of blind natural processes; therefore an intelligent being must have created it. This, Dawkins rightly points out, is not a rational argument. Certainly it has no place in a science classroom.

But my own conversion to Intelligent Design was not based on an API, but on something which I have decided to call the STOMPS Principle. STOMPS is an acronym for: Smarter Than Our Most Promising Scientists. The reasoning goes like this: if I observe a complex system which is capable of performing a task in a manner which is more ingenious than anything our best and most promising scientists could have ever designed, then it would be rational for me to assume that the system in question was also designed. That is not to say that nothing will shake my conviction, but if you claim that an unguided natural process could have done the job, then I am going to demand that you explain how the process in question could have accomplished this stupendous feat. I shall demand a specification of a mechanism, and a demonstration that this mechanism is at least capable of generating the complex system we are talking about, within the time available, without appealing to mathematical miracles (like winning the Powerball Jackpot ten times in a row). To demand any less would be the height of irrationality.

Professor Matthew Cobb concedes that our junky DNA contains genes which encode for proteins. He concedes that within the “noise” of our junky DNA, there are also “astonishing examples of complexity which do indeed appear to be the result of optimisation,” and that the complexity of this DNA code would “boggle the mind” of even “a cocky computer scientist in a hat.” This sounds like a perfect example of a case where the STOMPS Principle could be legitimately invoked. If Nature contains systems which accomplish a feat in a manner which is far better than what our best scientists can do, then it’s reasonable to infer that these systems were intelligently designed.

At this point, some evolutionists may respond by invoking what philosopher Daniel Dennett has termed Leslie Orgel’s second law: “Evolution is cleverer than you are.” The relevant question here is: cleverer at what? We have seen that all living things employ a genetic code: a set of rules by which information encoded in genetic material (DNA or RNA sequences) is translated into proteins (amino acid sequences) by living cells. Despite diligent inquiry on our part, we have yet to uncover a single instance in Nature of unguided processes generating a code of any sort – let alone one which would “boggle the mind” of even “a cocky computer scientist in a hat.” Whatever else evolution might be clever at, code-making is hardly its forte.

But, we shall be told, evolution refines the code in our DNA all the time – through natural selection winnowing random mutations, as well as purely chance-driven processes such as genetic drift. Who are we to say that it could not have generated this code by an incremental series of refinements, over billions of years?

I used to be a computer programmer, for ten years. I think I know what it means to refine computer code. Evolution doesn’t do anything like that: what it does is corrupt the code in organisms’ cells, in ways that occasionally turn out to improve those organisms’ prospects for survival. That might be good for the organisms, but from a code-bound perspective, it isn’t “good” at all: it’s just the corruption of a code. And corruption is the opposite of generation.

So when I hear someone tell me that “nature, heartless, witless nature” could have not only generated a code, but generated one which even our brightest scientists are in awe of, my response is: “You’re pulling my leg.”

Finally, I’d like to address Professor Matthew Cobb’s argument that “[o]ur genes are not perfectly adapted and beautifully designed,” because our DNA is littered with junk: they are instead the product of “evolution and natural selection.” My response to that argument is: so what? Even if Professor Cobb is right about junk DNA – and I’m inclined to think he is (for reasons I’ll discuss in another post) – that’s beside the point. At most, it shows that DNA which doesn’t code for anything wasn’t designed. But my question is: what about the DNA which does code for proteins, and which does so in a manner that boggles the ingenuity of our brightest minds? Professor Cobb, it seems, is missing the wood for the trees here.

Junk DNA might be described as degenerate code – but there has to be a code in the first place, before it can degenerate. The existence of junk DNA cannot be used as an argument against design: all it establishes is that the designer of our DNA – whether out of benign neglect, laziness, illness, or ignorance that something has gone amiss – doesn’t always fix the code he created, when it becomes corrupted. Accordingly, junk DNA cannot be used as a legitimate argument against the proposition that the DNA in our cells which codes for genes was designed.

A personal story

A few years ago, I came across an article by an Australian botanist (who is also a creationist) named Alex Williams, entitled, “Astonishing Complexity of DNA Demolishes Neo-Darwinism” (Journal of Creation, 21(3), 2007). At the time I knew very little about specified complexity and other terms in the Intelligent Design lexicon. I heartily dislike jargon, and I was having difficulty deciding whether there was any real scientific merit to the Intelligent Design movement’s claim that certain biological systems must have been designed. But when I read Alex Williams’ article, the case for Intelligent Design finally made sense to me. What impressed me most, with my background in computer science, was that the coding in the cell was far, far more efficient than anything that our best scientists could have come up with. Here are some excerpts from the article:

The traditional understanding of DNA has recently been transformed beyond recognition. DNA does not, as we thought, carry a linear, one-dimensional, one-way, sequential code—like the lines of letters and words on this page… DNA information is overlapping-multi-layered and multi-dimensional; it reads both backwards and forwards… No human engineer has ever even imagined, let alone designed an information storage device anything like it…

There is no ‘beads on a string’ linear arrangement of genes, but rather an interleaved structure of overlapping segments, with typically five, seven or more transcripts coming from just one segment of code.
Not just one strand, but both strands (sense and antisense) of the DNA are fully transcribed.
Transcription proceeds not just one way but both backwards and forwards…
There is not just one transcription triggering (switching) system for each region, but many.
(Bold emphasis mine – VJT.)

I’d like to make it clear that as someone who believes in a 13.8 billion-year-old universe and in common descent, I do not share Williams’ creationist views. In particular, I think his argument for a young cosmos, based on Haldane’s dilemma, rests on faulty premises. But I do think that Williams is on solid scientific ground when he writes that no human engineer has ever even imagined, let alone designed an information storage device anything like DNA. Here we have an appeal to the STOMPS principle: DNA encodes information in a way which is far cleverer than anything that our most intelligent programmers could have designed, so it is reasonable to infer that DNA itself was designed by a superhuman intelligent agent.

I’d like to conclude this post with a quote from someone whose impartiality is not in doubt: Bill Gates, the founder of Microsoft Corporation, who is also an agnostic:

Biological information is the most important information we can discover, because over the next several decades it will revolutionize medicine. Human DNA is like a computer program but far, far more advanced than any software ever created. 
(The Road Ahead, Penguin: London, Revised Edition, 1996 p. 228.)

If an agnostic like Bill Gates, who is an acknowledged expert on computing, thinks that the complexity of human DNA surpasses that of any human software design, then it is surely reasonable to infer that human DNA – or at the very least its four-billion-year-old progenitor, the DNA in the first living cell – was originally designed by some superhuman Intelligence.

Professor Cobb is undercut by one of his own commenters

We have seen that Professor Matthew Cobb’s argument against DNA having been designed is a philosophically flawed one. But reading through the comments attached to his post, I came across two comments by a reader named Eric (see here and here) which blew Professor Cobb’s case right out of the water, from a computing perspective:

… Matthew’s comment “Our genes are not perfectly adapted and beautifully designed. They are a horrible, historical mess” makes the analogy to human programming better, not worse….

I would guess that the entire etymology of computer programming languages is a result of historical contingency (i.e. a horrible, historical mess) as much as it is a result of optimization or rational choice. The reason Java forms the basis of so many internet-based languages is because that’s what was included in the earliest version of Netscape Navigator, which captured the market at the time. And the reason there are so many Visual Basic type programming languages is because Basic is what ran on the first generation of IBM personal computers. Geez, I know labs that were programming their nuclear physics detector setups in Fortran in the 1990s, and that is a language invented for use with punch cards.

Now computer programming languages will probably always require a more formal and rigorous syntax than natural language, but IMO the specific formal syntaxes that we used today are more due to the vagaries of human history than they are any sort of rational choice of the best options.

For that matter, why the frak do we even bother with www? Http vs. Https? That’s four redundant and therefore worthless letters out of five, the equivalent of 80% “junk DNA” in one of the most common and most recent human-built computer syntaxes. What sense does it make? None. Why do we have it? History.