
Sunday, 9 November 2014

Counting on chance and necessity

BIO-Complexity Paper: Why Chaitin's Mathematical "Proof" of Darwinian Evolution Fails

There are many reasons why Chaitin's model does not accurately reflect how Darwinian evolution works, if it does work, in the real world of biology (some of which are summarized nicely over at Theory, Evolution, and Games Group). Not the least of such problems is the fact that the simulation basically grants itself infinite probabilistic resources (actually, computing resources, but that's analogous to probabilistic resources in biology).
As Ewert, Dembski, and Marks explain:
Because metabiology programs have unbounded length and can run for an unbounded amount of time, the unboundedness essentially undermines the creativity required to solve the large-number problem. With unbounded resources and unbounded time, one can do most anything.
In the real world, probabilistic resources are limited. Time is finite and populations have finite sizes, which imposes limits on what traits can evolve, especially (as Stephen Meyer shows in Darwin's Doubt) when traits require multiple mutations before providing any advantage. Metabiology uses a population of a single "digital organism," which then "evolves" over time, but it grants itself essentially infinite computing resources to do so. With such a generous endowment, sure, anything can eventually evolve. But we don't live in an unlimited world, which is precisely why Darwinian evolution faces major theoretical problems. They thus write:
Although elegant in conception, metabiology departs from reality because it pays no attention to resource limitations. Metabiology's math obscures the huge amounts of time required for the evolutionary process. The programs can run for any arbitrarily large number of steps. Additionally, programs can be of any length with no penalty imposed for longer programs.
Ewert, Dembski, and Marks explain that Chaitin's program uses a halting oracle as a source of "active information." A halting oracle is a hypothetical meta-program that can tell you if a given program will ever stop running. As they explain, such an oracle could be useful in disproving certain mathematical conjectures, such as Goldbach's conjecture, "which hypothesizes that all even numbers greater than two can be written as the sum of two primes." They discuss how a halting oracle could determine if the conjecture was false:
Suppose a program X can be written to test each even number sequentially to see if it were the sum of two primes. If a counterexample is found, the program stops and declares "I have a counterexample!" Otherwise, the next even number is tested. If Goldbach's conjecture is true, the program will run forever. If a halting oracle existed, we could feed it X. If the halting oracle says "this program halts," Goldbach's conjecture is disproved. A counterexample exists. If the halting oracle says, "This program never halts," then Goldbach's conjecture is proven! There exists no counterexample.
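To make the thought experiment concrete, here is a minimal Python sketch of such a program X (the function names are my own, and the halting oracle itself is, as the authors stress, purely hypothetical and cannot actually be built):

def is_prime(n):
    # Simple trial-division primality test; adequate for an illustration.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_sum_of_two_primes(even_n):
    # True if even_n can be written as p + q with both p and q prime.
    return any(is_prime(p) and is_prime(even_n - p) for p in range(2, even_n // 2 + 1))

def program_x():
    # The program X from the quotation: test even numbers 4, 6, 8, ...
    # and halt only if a counterexample to Goldbach's conjecture is found.
    # If the conjecture is true, this loop runs forever.
    n = 4
    while True:
        if not is_sum_of_two_primes(n):
            print("I have a counterexample!", n)
            return n
        n += 2

# A halting oracle, if one existed, could answer "does program_x() ever halt?"
# without running it: "halts" would disprove Goldbach's conjecture, while
# "never halts" would prove it.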
The difficulty for Chaitin is he admits that for metabiology, "[The halting oracle] is where all the creativity is really coming from in our program," but also admits that such an oracle is "mathematical fantasy." Ewert, Dembski, and Marks thus aptly observe, "A computer tool proven not to exist is admittedly at the outset an obvious major strike against a theory purporting to demonstrate reality."
Now at this point, one might reasonably ask, "If a halting oracle is a hypothetical fantasy, how can Chaitin claim to use one in metabiology?" The answer is that Chaitin isn't actually creating a computer program. He's seeking a mathematical proof of the Darwinian model, where he's allowed to indulge thought experiments that invoke hypothetical entities. All he needs to be able to do is to prove what might happen if he theoretically had a halting oracle. But it doesn't help that one could never exist.
That problem aside, they explain that the "halting oracle" used by Chaitin's program has three different options for finding a search target:
  • Exhaustive Search (poor)
  • Random Evolution (good)
  • Intelligent Design (better)
Metabiology uses "random evolution" but not in a manner that is biologically realistic. The program is capable of systematically simulating all possible programs, which in effect allows it to totally rewrite itself instantly -- something that simply doesn't happen in biology -- and thereby grants unrealistic access to the equivalent of unlimited resources in an organism's developmental process. This unrealistically guarantees the program could never get stuck on a local fitness peak because it can keep trying out entirely new "genetic codes" indefinitely until a better one is found.
Which, again, is nothing like the real world of biology, where an organism is forced to rely on the genome it receives, which at best will have just a few small mutational differences from its parents. Darwinian evolution in real biology requires a grueling ascent of Mount Improbable, but Chaitin's program can fly wherever it wants at any time. To put it another way, metabiology cheats by giving a digital organism unrealistic access, at any time, to any program that will lead to a higher fitness state. As Ewert, Dembski, and Marks explain:
As any computer programmer will tell you, landscapes of computer program fitness are the opposite of smooth. We would therefore not expect Darwinian evolution to fare well. Chaitin notes this when he writes [3], "The fitness landscape has to be very special for Darwinian evolution to work." The environment for evolution to occur, therefore, has to be carefully designed. Indeed, in the paradigm of conservation of information, smooth landscapes can be source of significant active information [14]. Metabiology's construction of smooth landscapes is accomplished by running all viable programs, a computationally expensive approach that is only possible because there are no resource limitations.
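To put a rough number on that computational expense, here is a back-of-the-envelope Python sketch (my own illustration of the resource point, not a calculation from the paper) counting how many candidate bit-string programs an exhaustive enumeration would have to consider:

# Caricature of "running all viable programs": the number of distinct
# bit-string programs explodes with the allowed program length, which is
# why such a search presupposes unlimited computational resources.
def candidates_up_to(length_bits):
    # Number of distinct bit strings of length 1 through length_bits.
    return 2 ** (length_bits + 1) - 2

for n in (10, 100, 500):
    print(f"programs up to {n} bits: {candidates_up_to(n):.2e} candidates")

# Already at 500 bits the count (about 6.5e150) exceeds the roughly 1e150
# that Dembski cites as the universe's total probabilistic resources.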
Chaitin also presents a different model that uses what he calls "intelligent design" to find a search target. This obviously doesn't help show what unguided evolutionary processes can accomplish. The author of the blog "Theory, Evolution and Games Group" critiques Chaitin's model, writing that metabiology uses "a teleological model -- a biologist's nightmare." Ewert, Dembski, and Marks explain:
Like AVIDA and ev, metabiology makes use of external information sources to assist in the search. Like the simple Hamming oracle, the halting oracle can be mined for information with various degrees of sophistication. Evolution thus requires external sources of knowledge to work. The degree to which this knowledge is used can be assessed using the idea of active information.
They conclude: "In order for evolution to occur in these models, external knowledge must be imposed on the process to guide it. Metabiology thus appears to be another example where its designer makes an evolutionary model work. ... Consistent with the laws of conservation of information, natural selection can only work using the guidance of active information, which can be provided only by a designer." Properly understood, in other words, these programs demonstrate that evolution requires intelligent design.

Thursday, 6 November 2014

To him was given a large sword.

Revelation 6:4 NIV: "Then another horse came out, a fiery red one. Its rider was given power to take peace from the earth and to make people kill each other. To him was given a large sword."



Saturday, 1 November 2014

Why are Darwinists hiding from their own ideas?



Busting another Darwinist Myth: Have ID Proponents Invented Terms like "Microevolution" and "Macroevolution"?



In 2005 I busted the Darwinist myth that ID proponents have invented terms like "Darwinist" or "Darwinism" by noting that, well, Darwinists themselves have long used such terms to describe themselves and their viewpoints. Jonathan Wells also recently busted this same myth, and Anika Smith recently busted the myth that evolution is not "random." In 2006, I also busted the myth that skeptics of neo-Darwinism don't exist outside the United States.
When engaging in debates, every once in a while I hear the claim that Darwin-critics also invented terms like "microevolution" or "macroevolution." For example, Jonathan Wells reports, "In 2005, Darwinist Gary Hurd claimed that the distinction between microevolution and macroevolution was just a creationist fabrication. ... Hurd wrote to the Kansas State Board of Education: "...'macro' and 'micro' evolution ... have no meaning outside of creationist polemics." (Jonathan Wells, The Politically Incorrect Guide to Darwinism and Intelligent Design, pgs. 55-56). This is also a Darwinian urban legend, for such terms have been used regularly in the scientific literature. Indeed, textbooks commonly teach this terminology, including two of the textbooks I used in college when learning about evolutionary biology.
The glossary of my college introductory biology text, Campbell's Biology (4th Ed.) states: "macroevolution: Evolutionary change on a grand scale, encompassing the origin of novel designs, evolutionary trends, adaptive radiation, and mass extinction." Futuyma's Evolutionary Biology, a text I used for an upper-division evolutionary biology course, states, "In Chapters 23 through 25, we will analyze the principles of MACROEVOLUTION, that is, the origin and diversification of higher taxa." (pg. 447, emphasis in original). Similarly, these textbooks respectively define "microevolution" as "a change in the gene pool of a population over a succession of generations" and "slight, short-term evolutionary changes within species." Clearly Darwin-skeptics did not invent these terms.
Other scientific texts use the terms. In his 1989 McGraw Hill textbook, Macroevolutionary Dynamics, Niles Eldredge admits that "[m]ost families, orders, classes, and phyla appear rather suddenly in the fossil record, often without anatomically intermediate forms smoothly interlinking evolutionarily derived descendant taxa with their presumed ancestors." (pg. 22) Similarly, Steven M. Stanley titles one of his books, Macroevolution: Pattern and Process (The Johns Hopkins University Press, 1998 version), where he notes that, "[t]he known fossil record fails to document a single example of phyletic evolution accomplishing a major morphological transition and hence offers no evidence that the gradualistic model can be valid." (pg. 39)
The scientific journal literature also uses the terms "macroevolution" or "microevolution." In 1980, Roger Lewin reported in Science on a major meeting at the University of Chicago that sought to reconcile biologists' understandings of evolution with the findings of paleontology. Lewin reported, "The central question of the Chicago conference was whether the mechanisms underlying microevolution can be extrapolated to explain the phenomena of macroevolution. At the risk of doing violence to the positions of some of the people at the meeting, the answer can be given as a clear, No." (Roger Lewin, "Evolutionary Theory Under Fire," Science, Vol. 210:883-887, Nov. 1980.)
Two years earlier, Robert E. Ricklefs had written an article in Science entitled "Paleontologists confronting macroevolution," contending:
The punctuated equilibrium model has been widely accepted, not because it has a compelling theoretical basis but because it appears to resolve a dilemma. ... apart from its intrinsic circularity (one could argue that speciation can occur only when phyletic change is rapid, not vice versa), the model is more ad hoc explanation than theory, and it rests on shaky ground.
(Science, Vol. 199:58-60, Jan. 6, 1978.)
Finally, in 2000 Douglas Erwin wrote a paper in the journal Evolution and Development entitled "Macroevolution is more than repeated rounds of microevolution," where he explained the historical controversy over whether microevolutionary processes can explain macroevolutionary change:
Arguments over macroevolution versus microevolution have waxed and waned through most of the twentieth century. Initially, paleontologists and other evolutionary biologists advanced a variety of non-Darwinian evolutionary processes as explanations for patterns found in the fossil record, emphasizing macroevolution as a source of morphologic novelty. Later, paleontologists, from Simpson to Gould, Stanley, and others, accepted the primacy of natural selection but argued that rapid speciation produced a discontinuity between micro- and macroevolution. This second phase emphasizes the sorting of innovations between species. Other discontinuities appear in the persistence of trends (differential success of species within clades), including species sorting, in the differential success between clades and in the origination and establishment of evolutionary novelties. These discontinuities impose a hierarchical structure to evolution and discredit any smooth extrapolation from allelic substitution to large-scale evolutionary patterns. Recent developments in comparative developmental biology suggest a need to reconsider the possibility that some macroevolutionary discontinuities may be associated with the origination of evolutionary innovation. The attractiveness of macroevolution reflects the exhaustive documentation of large-scale patterns which reveal a richness to evolution unexplained by microevolution. If the goal of evolutionary biology is to understand the history of life, rather than simply document experimental analysis of evolution, studies from paleontology, phylogenetics, developmental biology, and other fields demand the deeper view provided by macroevolution.
(Douglas Erwin, "Macroevolution is more than repeated rounds of microevolution," Evolution and Development, Vol. 2(2):78-84, 2000.)
So the next time a Darwinist tells you that scientists don't use terms like "microevolution" or "macroevolution," remind them why this claim is a long-debunked myth!

Sleight of hand.

Flying Fish in the Darwin Magic Show

Monday, 27 October 2014

Some more on the skills gap.



The skilled trades: A viable choice.



The divine law and blood VII: Right and smart.

Evidence in favor of bloodless surgery mounts



BY KEVIN JESS     OCT 28, 2009
Physicians around the world are now successfully treating patients with bloodless surgery. Evidence shows that many benefits are being realized by using alternatives to blood transfusions. A recent study conducted at the Maritime Heart Center in Halifax, Nova Scotia, showed that blood transfusions for stable cardiac-surgery patients increased their risk of death, renal failure, and sepsis or infection.

The results were released at the Canadian Cardiovascular Congress and were presented by Robert Riddell, a medical student at Dalhousie University in Halifax, Nova Scotia.
The study looked at 3842 consecutive patients, who were all undergoing different types of cardiac surgery.
According to theheart.org, the patients were sorted into four groups: the first received no blood product transfusions; the second received blood products during their surgery; the third group received blood products within the first 48 hours; and the fourth received blood products 48 hours or later after surgery.
After adjusting for age, sex, and other factors, the study concluded that blood transfusions dramatically increased morbidity and mortality compared with bloodless surgery.
The study also suggests that the later the blood transfusion, the worse off the patient is.
There are realistic alternatives to blood transfusions today.
According to AllSands, since the tragedy of AIDS, people have become all too aware that the blood supply can never be completely safe; in a recent poll, 89 per cent of Canadians said they would rather have an alternative to donated blood.
Most people associate the refusal of blood transfusions with the stand taken by Jehovah's Witnesses, who regard the procedure as contrary to Bible teachings.
However, according to AllSands, that stand has led to bloodless medicine and surgery reaching "an advanced level of development and is the preferred treatment of many informed people."
The avoidance of blood during surgery means that post-operative infections and complications are avoided, and blood types would not have to be matched, erasing any complications from matching errors.
Bloodless surgeries typically cost 25 per cent less than those that use donated blood, with additional savings from recovery times improved by as much as 50 per cent, translating into shorter hospital stays.
In the event of a large blood loss, in most cases the volume of blood can be maintained by alternative fluids such as Ringer's lactate solution, dextran, hydroxyethyl starch, and others that will prevent hypovolemic shock.
Drugs are now being used before surgery to stimulate the production of red blood cells, blood platelets and various white blood cells to increase the volume of blood as well as other medications to reduce blood loss.
Surgeons are now able to manage bleeding better by the use of biological hemostats. New glues and sealants can block puncture wounds or cover larger areas of exposed bleeding tissue.
Blood lost during surgery or trauma can now be salvaged by machines that cleanse it and return it to the patient without storing it.
According to the Encyclopedia of Surgery, new instruments and surgical techniques now allow surgeons to perform procedures with minimal blood loss.
All of the above procedures have been performed successfully on thousands of patients worldwide who seek safer medical care, whether for religious reasons or not.
By the end of 2002, 30 per cent of all requests for bloodless surgeries came from people who were not Jehovah's Witnesses.

Sunday, 26 October 2014

Objectivity? Don't be silly.

But Who Needs Reality-Based Thinking Anyway? Not the New Cosmologists

Saturday, 25 October 2014

The more things change...

Ecclesiastes 4:1 HCSB: "Again, I observed all the acts of oppression being done under the sun. Look at the tears of those who are oppressed; they have no one to comfort them. Power is with those who oppress them; they have no one to comfort them."


Round and round we go.



Tuesday, 21 October 2014

Psalm 14, American Standard Version

The fool hath said in his heart, There is no God. They are corrupt, they have done abominable works; There is none that doeth good.
2 Jehovah looked down from heaven upon the children of men, To see if there were any that did understand, That did seek after God.
3 They are all gone aside; they are together become filthy; There is none that doeth good, no, not one.
4 Have all the workers of iniquity no knowledge, Who eat up my people as they eat bread, And call not upon Jehovah?
5 There were they in great fear; For God is in the generation of the righteous.
6 Ye put to shame the counsel of the poor, Because Jehovah is his refuge.
7 Oh that the salvation of Israel were come out of Zion! When Jehovah bringeth back the captivity of his people, Then shall Jacob rejoice, and Israel shall be glad.

Monday, 20 October 2014

On those gaps.

I find it odd that Darwinists so often raise the 'God of the gaps' objection whenever Darwin sceptics point to the simple fact that their theory doesn't have the explanatory content its advocates claim. Surely, if competing explanations of any occurrence are being compared, the relative feasibility of the respective explanations must be up for consideration.
  Thus no objection should be raised when an advocate of one of those competing explanations attempts to demonstrate why its rivals are less feasible than the one he advocates. The issue is not merely one of gaps, but of which explanations are best able to bridge those gaps, and which are simply not up to the challenge, based on our collective experience and knowledge of the way things work.
   This principle is especially important when investigating events that occurred in the distant past. One can conjure all sorts of engaging and plausible-sounding narratives, but which of these best bridges the gaps?
   For instance, it's not inconceivable that a combination of chance and necessity could have produced some of the buildings, roads, furniture, boats, or apparent works of art that are unearthed by archaeologists from time to time. Likely, if someone put their mind to it, they could construct a narrative outlining a possible, perhaps even seemingly plausible, series of events that could, over the course of many centuries, produce, say, an apparently well-designed bridge.
  Would it be special pleading for advocates of the competing explanation, that the structure is far more likely to have been planned and built by an intelligent agent or agents, to point out those factors that make the rival hypothesis less feasible than their own? Would it be fair to caricature the design advocates' argument in the aforementioned example as "complexity, therefore human ingenuity," or would something like "apparent engineering sophistication, therefore mindless random processes unlikely" be a fairer summation?
  Likely there would be no objection to examining the capacity of both these ideas to bridge our information gap in such a situation.
  Is it consistent, then, to object when Darwin sceptics do essentially the same thing?
     

Saturday, 18 October 2014

Which came first...?

What Chaperone Proteins Know



Here's a riddle for you: Proteins are used to make proteins, so if we assume a purely naturalistic origin of life, where did the first proteins come from?
If a cell is a factory, proteins are the factory workers. Proteins conduct most of the necessary functions in a cell. Proteins are made up of amino acid building blocks. A chain of amino acids must fold into the appropriate three-dimensional structure so that the protein can function properly. Within cells are proteins known as chaperones that help fold the amino acid chain into its proper three-dimensional structure. If the amino acid chain folds improperly, then this could wreak havoc on the cell and potentially the entire organism. The chaperone works to prevent folding defects and is a key player in the final steps of protein synthesis.
However, as important as chaperones are, there are still many questions as to how exactly they work. For example, do the chaperones fold the amino acid chain while it is still being constructed (during translation), or is the amino acid chain first put together, and then the folding begins? Or is it some combination of both? Studies indicate that it is indeed a combination of both. There are two different kinds of chaperone proteins within the cell, one for translation and one for post-translation. With these two different kinds of chaperones, where and how does regulation happen to prevent misfolds?
Recent research on bacterial cells sheds light on the chaperones' important function. One chaperone in particular, Trigger Factor, plays a key role in correcting misfolds that may occur early on in the translational process. Trigger Factor can slow down improper amino acid folding, and it can even unfold amino acid chains that have already folded up incorrectly.
Here are some of the neat features of Trigger Factor:
  • Trigger Factor actually constrains protein folding more than the ribosome does. It doesn't just "get in the way" like the ribosome. It also regulates the folding.
  • Trigger Factor's function is specific to the particular region of the amino acid chain. It does not just perform one function no matter what the composition of the amino acid chain. It changes based on the region of the chain it is working with.
  • Trigger Factor also changes its activity based on where the protein is in the translation process.
  • Trigger Factor's process depends on how the amino acid chain is bound to the ribosome, and can even unfold parts of the chain that were misfolded in the translation process.
An additional factor that regulates when an amino acid chain folds into a protein is its distance from the ribosome (the place where the amino acid chain is made). The closer the chain is to the ribosome, the less room it has to fold into a three-dimensional protein. Trigger Factor works with this spatial hindrance, making an interesting and complex regulation system.
Trigger Factor is only called into the game once the amino acid chain is a certain length (around 100 amino acids long) and when the chain has certain features, such as hydrophobicity. As the authors state it, Trigger Factor keeps the protein from folding into its three-dimensional structure until the amino acid chain has all of the information it needs to fold properly:
In summary, we show that the ribosome and TF each uniquely affect the folding landscape of nascent polypeptides to prevent or reverse early misfolds as long as important folding information is still missing and the nascent chain is not released from the ribosome.
So we have a protein that is able to perform various functions that inhibit or slow protein folding until the amino acid chain has the right chemical information for folding to occur.
This does not solve the riddle about proteins being made from proteins (otherwise known as the chicken-and-egg problem). It actually adds another twist to the riddle: How does one protein know how much information a completely different protein needs to fold into a three-dimensional structure? How does a protein evolve the ability to "know" how to respond to specific translational circumstances as Trigger Factor does?



Why the search for a simple lifeform is a fool's errand.

Ciliate Organism Undergoes "Scrambled Genome" and "Massive...Rearrangement"




The pond-dwelling, single-celled organism Oxytricha trifallax has the remarkable ability to break its own DNA into nearly a quarter-million pieces and rapidly reassemble those pieces when it's time to mate... The organism internally stores its genome as thousands of scrambled, encrypted gene pieces. Upon mating with another of its kind, the organism rummages through these jumbled genes and DNA segments to piece together more than 225,000 tiny strands of DNA. This all happens in about 60 hours.
One of the paper's lead researchers points out something that would occur to most any reader: "People might think that pond-dwelling organisms would be simple, but this shows how complex life can be, that it can reassemble all the building blocks of chromosomes." That kind of changes the meaning of the insult "Pond-scum"!

This ciliate organism is strange in other ways, as its cell contains two nuclei. One, called the somatic macronucleus (MAC), is used like a typical eukaryotic cell's functioning nucleus -- to generate proteins and function kind of like a CPU. But the second nucleus, called the germline micronucleus (MIC), is used to store genetic material that will be passed on to offspring during reproduction. And it's in the second nucleus that all the rearrangement and scrambling of the genome takes place.

The reproductive process of these organisms is also very strange. They don't use sex to reproduce; reproduction occurs by binary fission rather than by creating a "new" organism through mating. Rather, when two members of this species have "sex," they only exchange DNA for the purpose of replacing old, broken-down genes. This allows them to "replace aging genes with new genes and DNA parts from its partner." Though the genome of the organism is reborn with each new generation, the organism itself is essentially immortal. The process goes approximately like this:

First the information in the second nucleus (the germline micronucleus) is broken into about 225,000 small fragments. Next, the organism swaps about half of that DNA with its mate. Then, the organism reassembles its thousands of chromosomes in the germline micronucleus. And this reassembly process shows the important functionality of non-coding DNA: "millions of noncoding RNA molecules from the previous generation direct this undertaking by marking and sorting the DNA pieces in the correct order." After sex, the old somatic macronucleus disintegrates, and a new somatic macronucleus is created from a copy of the newly assembled germline micronucleus. The paper describes the process:
In the micronucleus (MIC), macronuclear destined sequences (MDSs) are interrupted by internal eliminated sequences (IESs); MDSs may be disordered (e.g., MDS 3, 4, and 5) or inverted (e.g., MDS 4). During development after conjugation [sex], IESs, as well as other MIC-limited DNA, are removed. MDSs are stitched together, some requiring inversion and/or unscrambling. Pointers are short identical sequences at consecutive MDS-IES junctions. One copy of the pointer is retained in the new macronucleus (MAC). The old macronuclear genome degrades. Micronuclear chromosome fragmentation produces gene-sized nanochromosomes (capped by telomeres) in the new macronuclear genome. DNA amplification brings nanochromosomes to a high copy number.
Obviously this is an incredibly complex process, which requires numerous carefully orchestrated cellular subroutines. In fact, don't miss the paper's reference to the term "pointer." That's a term from computer science, where a pointer is a programming element that tells a computer where to find a piece of information. In a similar way, these ciliate organisms use pointers to tell the organism where to put each piece of DNA information when it reassembles the genome.
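To make the analogy concrete, here is a toy Python sketch (the fragment sequences and function name are invented for illustration; the real cellular machinery is vastly more involved) showing how matching pointer sequences at the junctions could dictate the order in which scrambled fragments are stitched back together, with one copy of each shared pointer retained in the product:

def assemble(fragments, start_pointer, end_pointer):
    # fragments maps the pointer a fragment begins with to a pair:
    # (the fragment's body, the pointer it ends with).
    # Follow the pointer chain, keeping one copy of each shared pointer.
    assembled = start_pointer
    pointer = start_pointer
    while pointer != end_pointer:
        body, next_pointer = fragments[pointer]
        assembled += body + next_pointer
        pointer = next_pointer
    return assembled

# Hypothetical scrambled fragments, stored out of order as in the micronucleus:
fragments = {
    "AAT": ("GGGCC", "TTG"),   # segment 1
    "CCA": ("TTTAA", "GAT"),   # segment 3
    "TTG": ("ACACA", "CCA"),   # segment 2
}
print(assemble(fragments, "AAT", "GAT"))
# -> AATGGGCCTTGACACACCATTTAAGAT (segments 1, 2, 3 correctly reassembled)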

"Radical Genome Architecture"

If that sounds complicated, consider some of the details reported in the paper about the germline micronucleus (MIC). In fact, the big story here is that this research represents the first attempt to decipher what's going on in the germline micronucleus. According to the paper, the MIC contains "over 225,000 [DNA] segments, tens of thousands of which are complexly scrambled and interwoven," where, "Gene segments from neighboring loci are located in extreme proximity to each other, often overlapping." The paper puts it this way:
The germline genome is fragmented into over 225,000 precursor DNA segments (MDSs) that massively rearrange during development to produce nanochromosomes containing approximately one gene each.
These nanochromosomes come in two types: scrambled and unscrambled. They thus further find:
In addition to the intense dispersal of all somatic coding information into >225,000 DNA fragments in the germline, a second unprecedented feature of the Oxytricha MIC genome is its remarkable level of scrambling (disordered or inverted MDSs). The germline maps of at least 3,593 genes, encoded on 2,818 nanochromosomes, are scrambled. No other sequenced genome bears this level of structural complexity.
They describe a striking example of scrambling: "The most scrambled gene is a 22 kb MIC locus fragmented into 245 precursor segments that assemble to produce a 13 kb nanochromosome encoding a dynein heavy chain family protein." But the complexity of the germline micronucleus goes even deeper, as some of the scrambled genes entail genes encoded within genes:
A third exceptional feature we noted is 1,537 cases (1,043 of which are scrambled) of nested genes, with the precursor MDS segments for multiple different MAC chromosomes interwoven on the same germline locus, such that IESs for one gene contain MDSs for another.
Additionally, these precursor DNA sequences may encode multiple chromosomes in the somatic macronucleus, which are shuffled and spliced back together during the reassembly process:
A fourth notable feature arising from this radical genome architecture is that a single MDS in the MIC may contribute to multiple, distinct MAC chromosomes. Like alternative splicing, this modular mechanism of "MDS shuffling" ... can be a source of genetic variation, producing different nanochromosomes and even new genes and scrambled patterns. At least 1,267 MDSs from 105 MIC loci are reused, contributing to 240 distinct MAC chromosomes. A single MDS can contribute to the assembly of as many as five different nanochromosomes.
There's a lot more in this paper discussing the complexity of these processes that deconstruct and reassemble the genome of Oxytricha trifallax. The natural question that arises is "How did this evolve?" The paper doesn't even attempt to offer an answer -- it's simply descriptive.

In a way, the degradation and reassembly of the genome brings to mind the liquidation and rebuilding of an insect's body during holometabolism. For more on that, see the Illustra film Metamorphosis. Holometabolism has also baffled evolutionary biologists since the programming for the entire process must be fully in place before it occurs, or you end up with a dead organism. Given the importance of a genome to an organism's survival, one would expect the same to be true of the processes involved in the degradation and reassembly of the Oxytricha trifallax genome.
From the perspective of intelligent design, these complex processes are more readily accounted for. They require a cause capable of thinking ahead, with planning and foresight. Intelligent agency is capable of doing that. An intelligent agent could produce the information to program the process of deconstructing and reassembling the Oxytricha trifallax genome from the beginning.

A goal-directed creative process like ID can shed light on the mystery of the Oxytricha trifallax genome. Obviously this paper in no way suggests that ID is the answer. But something tells me that unguided evolutionary explanations of the genomic complexity reported by these researchers won't be forthcoming.