
Friday, 22 January 2016

In search of earth 2.0

Common design vs. common descent

Modular Paint-Box Explains Butterfly Color Patterns:
Evolution News & Views January 21, 2016 12:32 AM

Butterflies of the Amazon exhibit astonishing beauty and diversity. The Heliconius genus, in particular, is striking for its examples of mimicry, where members of different species have converged on the same color patterns on their wings. Collections of these butterflies can be arranged into series where yellow and red spots grow and shrink, grading into one another. How did this diversity come about?

The usual answer is Darwinian evolution -- a mutation of a gene leading to a novelty, then common ancestry and natural selection. A new model, published in PLOS Biology, does not dispute that kind of evolution, but offers different mechanisms that fit with intelligent design. As the authors discuss in news from the University of Cambridge, it's a story of shared technology.

Research finds independent genetic switches control different splotches of colour and pattern on Heliconius butterfly wings, and that these switches have been shared between species over millions of years, becoming "jumbled up" to create new and diverse wing displays. [Emphasis added.]

Everything else about the butterflies appears normal: the body anatomy, the antennae, even the basic wing shape. It's primarily the color splotches that vary. Each one has a so-called dennis patch at the base of the wing, and red "ray" streaks that fan out across the hindwing. Within any individual, though, the patterns remain symmetrical: the left wing is a mirror image of the right wing.

The Cambridge researchers looked at the genomes of dozens of specimens.

New research on butterfly genomes has revealed that the genetic components that produce different splotches of colour on wings can be mixed up between species by interbreeding to create new patterns, like a "genetic paint box."

This is not the usual mutation-selection mechanism of Darwinian evolution, in other words. The researchers believe that existing genetic switches can be "shared" between species by interbreeding, hybridization and introgression, a process of gene flow from one species into another population that results from repeated backcrossing of an interspecific hybrid with one of its parent species.

Novelty does not originate by mutation, in short, but by new combinations of genetic switches that were already present. Interbreeding and hybridization shuffle these genetic switches, which turn particular colors on and off in different parts of the wings. Is this type of sharing common in the world?

It has been known for some time that exchange of genes between species can be important for evolution: humans have exchanged genes with our now extinct relatives which may help survival at high altitudes, and Darwin's Finches have exchanged a gene that influences beak shape. In butterflies, the swapping of wing pattern elements allows different species to share common warning signs that ward off predators -- a phenomenon known as mimicry.

However, the new study, published today in the journal PLOS Biology, is the first to show such mixing of genetic material can produce entirely new wing patterns, by generating new combinations of genes.

One wonders if these researchers realize they have undermined Neo-Darwinism in these famous examples. Swapping of genetic elements does not require mutation and selection. Moreover, early humans, finches and butterflies had to be able to interbreed or at least hybridize to benefit from the shared information.

The authors state that these switches evolved just once and are then used in different parts of the wings. In different individuals, they can see little regions of color "jumping about all over the place." And because of pleiotropy -- the linkage of the color function to other functions on the gene -- the gene itself cannot evolve:

The key to this evolutionary butterfly painting is the independence of each genetic switch. "The gene that these switches are controlling is identical in all these butterflies, it is coding for the same protein each time. That can't change as the gene is doing other important things," said lead author Dr Richard Wallbank, also from Cambridge's Department of Zoology.

"It is the switches that are independent, which is much more subtle and powerful, allowing evolutionary tinkering with the wing pattern without affecting parts of the genetic software that control the brain or eyes.

"This modularity means switching on a tiny piece of the gene's DNA produces one piece of pattern or another on the wings -- like a genetic paint box," Wallbank said.

We must not get confused about the meaning of "evolution" in these papers. The wing patterns may vary, but all the species are members of one genus. We're looking at a mechanism for robustness and flexibility that is consistent with intelligent design. In the tropics where these butterflies live, the ability to try out new color patterns can be advantageous for attracting mates and avoiding predators. These are tiny changes to combinations of existing switches, not great transformations such as gaining a new organ. It's mere shuffling of what already exists.

The paper is even more emphatic about nature preventing novelty by mutation:

One of the major impediments to evolutionary innovation is the constraint on genetic change imposed by existing function. Mutations that confer advantageous phenotypic effects in a novel trait will often result in negative pleiotropic effects in other traits influenced by the same gene. Several mechanisms have been proposed by which evolution can circumvent such constraints, resulting in phenotypic diversification. In particular, the modularity of cis-regulatory elements means that novel modules can encode new expression domains and functions without disrupting existing expression patterns. This modularity underlying gene regulation has led to the assertion that much of morphological diversity has arisen through regulatory evolution.

What does this do to old-fashioned Darwinian gradualism?

This might seem to imply that the evolution of novel regulatory alleles is relatively gradual, requiring the evolution of many small effect substitutions, but recent adaptive radiations can show extremely rapid rates of morphological change. The role of regulatory modularity therefore remains to be tested in adaptive radiations in which morphological variation evolves very rapidly.

Although this paper is full of the word "evolution," it's really a different concept they are proposing.

We show that two patterning switches -- one that produces red rays on the hindwing and the other a red patch on the base of the forewing -- are located adjacent to one another in the genome. These switches have each evolved just once among a group of 16 species but have then been repeatedly shared between species by hybridisation and introgression. Despite the fact that they are now part of a common pattern in the Amazon basin, these two pattern components actually arose in completely different species before being brought together through hybridisation. In addition, recombination among these switches has produced new combinations of patterns within species. Such sharing of genetic variation is one way in which mimicry can evolve, whereby patterns are shared between species to send a common signal to predators. Our work suggests a new mechanism for generating evolutionary novelty, by shuffling these genetic switches among lineages and within species.
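The mechanism described here, in which pattern switches arise in different lineages and are later combined by hybridization and recombination, can be pictured with a small sketch. This is my own illustration, not code or data from the paper; the switch names are simply taken from the two pattern elements it discusses.

```python
import itertools

# Toy model (illustrative only): treat each lineage's wing pattern as a set
# of independent regulatory switches. Hybridization pools the parents'
# switches; recombination can then produce combinations neither parent had,
# with no new mutation required.
parent_a = {"dennis_patch"}   # red patch at the forewing base
parent_b = {"red_rays"}       # red rays on the hindwing

def recombinant_patterns(a, b):
    """All switch combinations reachable by shuffling the pooled switches."""
    pool = sorted(a | b)
    combos = []
    for r in range(len(pool) + 1):
        combos.extend(set(c) for c in itertools.combinations(pool, r))
    return combos

patterns = recombinant_patterns(parent_a, parent_b)
# Includes the novel combined pattern {"dennis_patch", "red_rays"},
# which neither parent displayed on its own.
print({"dennis_patch", "red_rays"} in patterns)  # True
```

With just two pooled switches, shuffling yields four possible patterns, including the combined dennis-plus-rays pattern, without any new variant arising by mutation.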

In addition, they appeal to "convergent evolution" to explain some spectacular cases of mimicry. But if this were all due to chance, why are the patterns symmetrical? What about Darwinian mechanisms would require that the left and right wings match?

None of what they describe requires neo-Darwinism or supports macroevolution. The switches were already there. They only control how much color goes into each spot. They can be shared within and across species by shuffling mechanisms, allowing rapid changes in wing patterns. It sounds like a great way for butterflies to quickly send different signals in different situations, so that they remain viable. Think of a flagman, using the same flags but in different combinations to send different messages. The flags were already there, and the man (or a programmable robot) knows how to use them.

Evolutionists have focused long and hard on the patterns on butterfly wings, which indeed are interesting, but they need to explain the weightier matters: how an egg becomes a crawling caterpillar, enters a chrysalis, and emerges a flying butterfly. As illustrated in Metamorphosis: The Beauty and Design of Butterflies, that's like turning a Model T into a helicopter.


The very traits they have focused on (wing patterns) turn out to be non-Darwinian and restrained by pleiotropy, reliant on "switches" that look designed. How much more must the critical organs for digestion, reproduction, and flight require specific information targeted for those functions? Only intelligence creates that kind of information.

Tuesday, 19 January 2016

A clash of titans V

Civil War IV

PZ Myers on Royal Society “rethink evolution” meet:
January 19, 2016 Posted by News under Evolution, Darwinism

“But that’s not how science works.”

From his blog Pharyngula,

Larry Moran is attending — not as a representative of the crackpot contingent, but, I suspect, to cast a cynical eye on the shenanigans. The Third Way of Evolution gang seems to be excited about the meeting, which is not a good sign — these are people who have taken some useful ideas in evolutionary theory, like epigenetics and niche construction, and turned the dial up to 11 to argue that these concepts are so revolutionary that they demand a complete upheaval of neo-Darwinian thinking.

Many evidence-based concepts do demand it, actually.

What’s changed is this: Darwinism (natural selection acting on random mutation) was once a default explanation of change in life forms over time. Now it competes with a variety of evidence-based explanations. The question is, which one or combination best explains the pattern we see in a given case?

From a lay perspective, it is somewhat like this: We grew up with Big Phone, Inc.’s monopoly on communications (cross them, and your best bet is smoke signals on a windless hill).

Then we moved, and found ourselves in a region where five different phone companies’ signals are carried through the wires and towers. So which one is best for a given situation? We must think about that.

Meanwhile, Big Phone is shrieking from the sidelines that all this chaos is a disaster. Mainly, it appears, for Big Phone. Most communicators have never been more connected than now, for better or worse. But back to Myers:

… And, unfortunately, I’ve just learned that the Queen of Hyperbolic Revolutionary Evolution, Susan Mazur, Journalistic Flibbertigibbent, is all wound up about it, which is also not a good sign. She’s raving about Paradigm-Shifters who will come up with a replacement for the modern synthesis.


The correct term and spelling is “flibbertigibbet,” Dr. Myers. Flibbertigibbets like Mazur are what happens to people who are tone deaf.

As we mentioned in connection with University of Toronto’s Larry Moran a couple months back,

Moran also misses the point about interviewer Suzan Mazur, of whom he says dismissive things. When journalists who publish in key venues become interested in an otherwise obscure train wreck, we can reasonably suspect that a shift is taking place. That’s why we call it “news” and not “olds.”

Myers also shares this insight:

Mazur clearly has no idea at all how science works. Twenty people attending a meeting don’t get to suddenly declare that a theory is replaced, and I don’t care who they are. More.

Yes, maybe twenty is too many. Even one person can replace a big theory, even some guy standing at an obscure lectern in a patent office in Switzerland in the early 1900s…

But we will assume that the organizers are correct in thinking that twenty is a reasonable number in this case.

See also: So who’s in and who’s out at Royal Society 2016 “rethink evolution” meet?

and

Progressive Review hopes for post-Darwinian science: “But Darwin clearly didn’t have all the answers, and science has moved many miles since his time.”

On formalising design detection III

Back to Basics: Understanding the Design Inference:
January 18, 2016 Posted by Eric Anderson under Intelligent Design, Design inference, Darwinist rhetorical tactics, Darwinian Debating Devices, Back to Basics of ID

This is prompted primarily by a recent post and by the unfortunate realization that some people still do not understand the design inference, despite years of involvement in the debate. Specifically, there was discussion at Barry’s prior post about whether Elizabeth Liddle admits that “biological design inferences” may be valid in principle. Over 200 comments appeared on the prior thread, including a fair amount of back and forth between Barry, Elizabeth and me, all of which may be worth reviewing for those who are interested.

However, the primary takeaway from that thread is that we need another back-to-basics primer on intelligent design – specifically, what the design inference is and how it works. Yes, I know regular readers have a great deal of exposure to this topic. And I know that many of you have an excellent grasp of the design inference. But please stay with me to the end and I trust this post will shed some additional light and provide perhaps some additional nuances on the issues. Perhaps less in terms of providing you with additional insight and more in terms of understanding the rhetorical tools and mindset of intelligent design opponents (or proponents who may have misunderstood some basic issues).

With that need clearly established from the prior thread and a more recent thread, I have finally taken a deep breath, gathered my courage, and set aside many hours to put this together. (Yes, that is the amount of time it takes to carefully analyze, lay out, and properly articulate an issue like this. Unfortunately, the quick one-liner complaints and dismissals are much easier to post.) I apologize in advance for the length, as I dislike lengthy head posts in general, but there are some key issues here that need to be fleshed out in detail.

Background

We hear from time to time, as in the two threads I cite above, claims essentially equivalent to the following:

“The design inference is a valid mode of inquiry in principle, but it cannot be applied to biological systems.”

or

“The design inference only works with human artifacts and cannot be applied to life because we have never seen a designer of life.”

or

“It is possible to detect design in theory, but it cannot be applied without knowing characteristics of the designer.”

or, this zinger:

“Design detection works with some phenomena, but life is ‘too complex to have been designed.’”

It is claims like these I wish to address. For present purposes and to keep this to some merciful length, I am setting aside issues about whether ID is science, whether ID is falsifiable, whether ID has a positive case, whether ID is directly testable, and so on. Our present purpose is to examine specifically the situation in which an individual asserts that (a) the design inference has some application, but (b) it cannot be applied to biological systems or to non-human designers.

How Can We Determine Design?

There are a couple of approaches to determining whether something is designed.* These can be broken into two broad categories: namely, actual knowledge and inference.

In the first category, we have actual knowledge of the artifact being designed. In this situation we see or experience as directly as possible a creative event – something being designed. All of us have experience with this: we have created or built something with our own hands, or have worked on a schematic, or have written lines of code. In such cases we have direct, actual knowledge that the artifact in question was designed. We can also include in this category creative events that we witness directly.** No-one disputes these examples and there is no need to infer anything when we have actual knowledge. We need not discuss this category further.

In the second category, we don’t have actual knowledge of the artifact being designed. Rather, we infer it was designed by examining indirect evidence after the fact. These pieces of indirect evidence can be legion. Indeed, in some cases they can seem so many and so obvious that we are tempted to think we have “actual knowledge” of the design. However, on closer inspection we realize we are really drawing an inference.

Was the iPhone designed? Of course, you say, only a fool would think otherwise. Fair enough, but why are you so sure it was designed? Did you personally design it? Did you actually witness someone else design it? Even if you personally worked on a couple of the parts, how do you know the rest of the iPhone was designed?

Make no mistake, the fact that Apple claims it was designed, or that Apple sells it, or that no-one questions it was designed, or that patents were filed, or that you have actual personal knowledge of some similar system being designed – none of these things shifts the iPhone into the category of “Actual Knowledge.” No, when we say the iPhone was designed we are still drawing an inference. A correct one, undoubtedly, but still an inference.

And what is it that gives us such confidence the iPhone was designed?

There are many possible pieces of indirect evidence that could bolster our claim that the iPhone was designed. A key list might include: (a) perhaps we know an Apple engineer who claims actual knowledge of its design; (b) there were lots of engineers around at the time who could have been available to work on it; (c) perhaps we know of some similar system that was designed; (d) the iPhone contains multiple components that have been brought together into a greater functional whole; and (e) to our knowledge there is no natural event or other non-design process that can produce something like an iPhone.

Excellent. So we are pretty confident the iPhone was designed. Confident enough that we would stake a great deal on it. And we can draw the same conclusion for millions upon millions of other human inventions. But it is still an inference.

Let’s up the ante a bit. What about something like Stonehenge? Let’s go through the same list:

(a) We know an engineer who actually worked on Stonehenge? No. But how cool would that be!

(b) Lots of engineers around at the time? This one is more tricky and deserves some discussion.

We don’t know if there were lots of engineers, but, we might argue, perhaps a small handful, or even one, would suffice. On the other hand, do we have actual knowledge that there was even one engineer capable of designing Stonehenge around at the time? No. We do not.

We have corroborating evidence (bone fragments, burial mounds, and the like) that there were humans around at the time. So we can infer that there was a designer around at the time. But we have no evidence – none at all – outside of Stonehenge itself, that there was a designer around at the time capable of producing Stonehenge. Indeed, it is the very existence of Stonehenge itself, coupled with our conclusion that Stonehenge was designed, which gives us the following piece of information and allows us to draw the following conclusion: there was a designer around at the time capable of designing Stonehenge. This is the way it always works, whether we are talking about some newly-discovered structure, a previously-unknown manuscript, or otherwise.

It is absolutely critical to understand this point: In archaeological and other investigations of the past, we do not infer that an artifact was designed because we have knowledge of a designer capable of producing the artifact. Rather, we infer there was a designer capable of producing the artifact, because we have found the artifact and concluded that it was designed.

Do not get this backwards. Doing so is a failure of logic at the most basic level and a failure to understand the process of investigation.

Let me briefly add one last related point here. Stonehenge is a somewhat simple case (though perhaps a more confusing case to those who aren’t thinking through the chain of logic clearly), because there is much corroborating evidence in the Stonehenge area for the existence of humans. However, there are many other examples in archaeology where independent evidence of whether designers were around is absent. In those cases, the chain of logic works precisely the same way it always does: we eventually infer that there were capable designers around, because we have found an artifact and concluded that it was designed.

(c) Actual knowledge of a similar system being designed? Perhaps.

This turns on how “similar” something needs to be. Stonehenge, by many accounts, is a fantastic, singular monument. And, no, we cannot rely on the fact that lots of other henges have been found in the surrounding area – we don’t have any actual knowledge about their design either and must draw an inference just the same. So we might have actual knowledge of some stones being cut and placed generally, but we really have no actual knowledge of something highly similar to Stonehenge being produced.

Nevertheless, under this particular criterion we might reasonably conclude that we know of designed systems which at least contain analogous characteristics or which have a number of similarities to aspects of Stonehenge.

(Incidentally, there are some interesting documentaries on recent work done at Stonehenge with cutting-edge mapping technologies. The area turns out to contain an entire massive complex of henges and other structures, not just a single henge. Well worth checking out.)

(d) Stonehenge contains multiple components that have been brought together into a greater functional whole? Definitely.

(e) To our knowledge there is no natural process or event that could produce Stonehenge. Correct.

So, to review, why are we so utterly, completely, unabashedly confident that Stonehenge was designed? Is it because we have actual knowledge and were there at the time to witness it? Is it because we know someone who claims to have worked on it? No and no.

Is it because we know there were engineers around at the time who were capable of producing Stonehenge? No. Quite the opposite: it is our conclusion of design that allows us to infer there were engineers at the time capable of producing Stonehenge.

Is it because we have actual knowledge of similar systems being designed? Perhaps, turning a bit on how we define “similar.” Is it because it contains multiple components that have been brought together into a greater functional whole? Yes. Is it, in part, because to our knowledge there is no natural process that could reasonably produce Stonehenge? Yes.

How the Inference Runs

Take another close look at the above. There is a fundamental point here that seems lost on some critics of the design inference and which is absolutely critical: We infer that Stonehenge was designed not, as sometimes claimed by design critics, because we know there were designers around at the time capable of producing the artifact in question, but because of the characteristics of the artifact itself.

Then, having concluded the artifact was designed, we can infer that there were designers around at the time capable of producing the artifact. This is the directional arrow of the logic. This is the way the design inference works. This is the way it always works.

Furthermore, let us note for completeness that the arrow of reasoning can never run in the opposite direction. Even if we know for certain that a designer exists in the right time and place, and even if we know for certain that the designer has the capability of designing, we still cannot conclude, based simply on those facts, that an artifact was designed. After all, there is no requirement that a designer actually produce anything. The designer may have existed; the designer may have been capable. But the only way we know that the designer actually designed the artifact in question is by examining the artifact itself.

The inference simply cannot operate any other way.

What is so often happening in the design critic’s mind, the essentially philosophical stance being taken, is that design can only be considered in certain circumstances. Not that the artifact doesn’t need to be examined on its own merits. It does. Not that we need to have independent evidence of the designers and their characteristics beforehand. We don’t. The design inference works across the spectrum. It can be applied just fine. It is just that the critic is unwilling to apply the design inference in certain unpalatable circumstances.

Is Life Too Complex to Be Designed?

Let us now turn to a particularly remarkable stance taken by Elizabeth Liddle on the prior thread. I refer to Elizabeth only because it is a convenient example and because it also highlights some of the mistakes made by critics of intelligent design generally. Specifically, although she claimed willingness to consider design in the case of human ingenuity, when it comes to biology she says, “I think [life] is too complex to have been designed.”

Let that sink in for a moment.

How could someone say that and what could it possibly mean? In the context of the design inference, we are looking for indicia of design as we examine an artifact: complex specified information, irreducibly complex functional structures, and so on. What does it mean to claim that we can infer some simpler things are designed, but that life is “too complex” to have been designed? (Please don’t get hung up on the fact that Elizabeth misuses the term “complex.” She is essentially talking about indicia of design, otherwise her whole statement is meaningless.)

Essentially, she is arguing that there are certain indicia of design we can look to, but when there are too many, then we cannot infer design because it is too much. Here is how it looks graphically:
[Figure: Indicia of Design]
The first column is easy and makes sense; the second is fine as well.

It scarcely bears mentioning that the conclusion in the last column is nonsensical. Blatantly inconsistent. Utterly irrational and illogical. Indeed, it is difficult to understand how anyone could be so completely off track as to even countenance such a viewpoint. We might be tempted to conclude that such an individual can only be either (1) incapable of rational thought, or (2) purposely deceptive.

At least in the context of the design inference.

A Third Way

But is there another way to think about a conclusion of design? Is there a third possibility that might explain how someone could take the view that some indicia of design = design while even more indicia of design = non-design? There is indeed another way of looking at it, and this is precisely what is happening with Elizabeth.

Elizabeth, as she has made clear, does not believe it is possible to infer design without knowing about the designer’s existence and something about the designer’s capabilities. For those intrepid souls who have stayed with me to this point, you know this is wrong, and demonstrably so, as has already been shown above. Nevertheless, let’s run with it for a moment to see if we can at least identify the chain of thought that leads to this remarkable “too complex to be designed” idea.

If one mistakenly thinks that we have to know that (a) a designer exists and (b) the designer is capable of producing the artifact in question, then it follows that we cannot infer life was designed. After all, the argument goes, we do not have direct independent evidence that a designer existed at the time nor that the designer was capable of producing the artifact in question. Thus, even though life is teeming with indicia of design, even though life is “too complex” compared with other things Elizabeth believes are designed, she is unwilling to infer design because she doesn’t have independent evidence about the designer and the designer’s capabilities.

She unfortunately fails, as do so many others, to realize that the exact same situation applies to almost every other inference of design – certainly to artifacts from the remote past, whether Stonehenge or otherwise. So her attempt to identify design is, at best, poorly thought through and inconsistently applied.

The Demand for Evidence Beyond the Artifact

Elizabeth says, regarding her conclusion that life was not designed:

But were evidence to arise, say, of optimised solutions from one lineage being transferred to another (as we install cameras in phones), or of evidence of artefactual fabrication, or of the presence of designers on early earth, or possibly something I haven’t thought of that indicated that living things were designed, I might have to think again.

Note the part I have highlighted in bold. This is the crux of the matter. She is looking for actual evidence of design – or at least a version of actual evidence she is willing to accept.  In other words, if we find independent evidence to our liking that something was designed then we will acknowledge it was designed.

This is not to say that Elizabeth never infers design. She clearly does. But she reserves her willingness to infer design to those cases in which there is some other reason, something other than the characteristics of the artifact in question, to conclude the artifact may have been designed. Let’s say she concludes artifact x, such as Stonehenge, is designed. We might do well to ask how she could come to that conclusion in the case of artifact x, but not in the case of living systems.

The answer is rather simple.  Perhaps she feels artifact x is closer in time to other things that are known to be designed; perhaps she feels that artifact x is more “similar” to things known to be designed; perhaps she just has a difficult time accepting the idea that a designer could be capable of designing living systems (remember, she thinks they are “too complex”); perhaps it is the pedestrian (but rapidly weakening) observation that no designer she knows of has yet designed a complex living system. Fine. As with the iPhone, we can infer design based on a number of criteria, apart from the characteristics of the artifact itself.

But that is not what the design inference is about. The design inference is precisely about identifying design from the artifact itself in the absence of additional evidence, in particular evidence about the existence or capabilities of the designer. Furthermore, as pointed out above, absent actual knowledge of design we must always fall back to an analysis of the artifact in question. Thus, any inference that relies only on other evidence will never be as solid or as robust as an inference based on the characteristics of the artifact itself.

Whatever the case, Elizabeth is clearly not willing to accept the design inference,*** despite any statements that could be understood as such. The most that can be said for Elizabeth’s approach is that in certain cases she is willing to make a design inference. Just not the design inference relevant for purposes of intelligent design. Her approach is not based on an examination of the artifact to find complex specified information, for example. Rather, it is a vague, ill-defined, “I need to see some additional corroborating evidence beyond the artifact that looks good to me” type of approach.

To be sure, one might indeed infer design with reference to other pieces of evidence beyond the artifact itself, or with reference to some poorly-defined “similar” system or some independent corroboration that designers were around at the time or some other piece of evidence that is palatable or seems subjectively good enough. But such an approach (i) does not constitute the design inference for purposes of intelligent design, and (ii) is embarrassingly poorly-defined and terribly inconsistent in application and practice.

Here we come face to face with the irony of the situation. The oft-repeated allegations that the design inference is poorly defined or difficult to apply are multiplied by orders of magnitude when the magnifying lens of critical thinking is turned on the inconsistent, poorly-defined approach of those individuals wont to claim that design can only be discerned in this field of study but not that, or in this time and place but not that, or with this type of designer but not that.

Conclusion

Despite continued criticisms of the design inference, upon closer examination the arguments of critics fail to meet the tests of objectivity, logic, and practical application. The design inference, properly understood, continues as an extremely reliable and robust, and in some cases our only, avenue to determine design.

Absent actual knowledge, which rarely, and in the case of historical events and artifacts almost never, exists, an inference to design always turns on an examination of the artifact in question. This is true with Stonehenge and other ancient artifacts just as much as with living systems. Assertions that we must have independent knowledge about the existence or characteristics of a designer before design can be inferred are misguided and demonstrate a basic lack of understanding of both how the design inference works and the direction of reasoning in investigation.

In addition, willingness to apply the design inference to only a particular field, or only a particular timeframe, or only a particular kind of designer bespeaks a fundamental confusion of the issues and, too often, an unspoken philosophical bias.

—–



* To nip a common red herring in the bud, we are talking about actual design, as commonly understood, by an intelligent being. Please, for the sake of rational discourse and for clarity of discussion in this thread, do not twist words and make claims about “nature designing” or something being “designed by a natural process” or similar nonsense. That is not what design means in the context of intelligent design and it is not what this thread is about.

** To nip the next red herring in the bud, we do not need to get into an angels-on-the-head-of-a-pin discussion about what reality is, whether we can trust our senses, whether what we see is actually real. Such philosophical musings, interesting as they may be, are completely irrelevant to the present discussion and are not what this thread is about.

*** There is another important reason for Elizabeth’s unwillingness to accept the design inference, but that will have to wait for a subsequent post.




Monday, 18 January 2016

Yet more on reality's antidarwinian bias III

An "Exquisitely Designed" Enzyme that Maintains DNA Building Blocks
Evolution News & Views January 16, 2016 4:49 AM

It's molecular machine time, and today we'll be looking at a particularly amazing one. It's essential, it's "evolutionarily ancient," and it's unique. This machine, named ribonucleotide reductase, or RNR for short, is a beauty. News from MIT explains why your life depends on this machine:

Cell survival depends on having a plentiful and balanced pool of the four chemical building blocks that make up DNA -- the deoxyribonucleosides deoxyadenosine, deoxyguanosine, deoxycytidine, and thymidine, often abbreviated as A, G, C, and T. However, if too many of these components pile up, or if their usual ratio is disrupted, that can be deadly for the cell.

A new study from MIT chemists sheds light on a longstanding puzzle: how a single enzyme known as ribonucleotide reductase (RNR) generates all four of these building blocks and maintains the correct balance among them. [Emphasis added.]

The image shows the complex machine with active sites that precisely fit the four different building blocks. Its job is to take the ribonucleotides that make up RNA and turn them into the deoxyribonucleotides that make up DNA.

"There's no other enzyme that really can do that chemistry," she says. "It's the only one, and it's very different than most enzymes and has a lot of really unusual features."

In order for the machine's moving parts to work, one of several "effector molecules" has to fit into its special spot, like a key that opens a latch. This causes the enzyme to open up a "distant" active site and let the appropriate RNA building block in. Then, the enzyme latches it into place for its operation.

Depending on which of these effectors is bound to the distant regulatory site, the active site can accommodate one of the four ribonucleotide substrates. Effector binding promotes closing of part of the protein over the active site like a latch to lock in the substrate. If the wrong base is in the active site, the latch can't close and the substrate will diffuse out.

"It's exquisitely designed so that if you have the wrong substrate in there, you can't close up the active site," Drennan says. "It's a really elegant set of movements that allows for this kind of molecular screening process."

This four-in-one machine takes on four different shapes depending on whether the RNA nucleotide is A, G, C, or U. It sends out DNA's four building blocks, A, G, C, and T, each with the sugar deoxyribose instead of ribose (replacing an OH group with a hydrogen atom, a "reduction" reaction). But that's not all this amazing machine does:

The effectors can also shut off production completely, by binding to a completely different site on the enzyme, if the pool of building blocks is getting too big.

RNR is a multi-tool if there ever was one. The effectors, the substrates and the active sites are all closely matched to the operation at hand, whether generating more DNA building blocks or regulating their supply in the cell. A paper from the Annual Review of Biochemistry (2007) puts it this way:

An intricate interplay between gene activation, enzyme inhibition, and protein degradation regulates, together with the allosteric effects, enzyme activity and provides the appropriate amount of deoxynucleotides for DNA replication and repair.

A lot of new building blocks are needed for repair and replication. You can see why this enzyme is essential for a cell when it divides or suffers stress.

The news item mentioned a latch, but another paper from 2015 in the Journal of Biological Chemistry speaks of a switch mechanism:

Ribonucleotide reductase (RNR) catalyzes the reduction of ribonucleotides to the corresponding deoxyribonucleotides, which are used as building blocks for DNA replication and repair. This process is tightly regulated via two allosteric sites, the specificity site (s-site) and the overall activity site (a-site). The a-site resides in an N-terminal ATP cone domain that binds dATP or ATP and functions as an on/off switch, whereas the composite s-site binds ATP, dATP, dTTP, or dGTP and determines which substrate to reduce.

It's like a surgical robot that has a clamp with an on-off switch. The switch (the effector) turns the machine on, opening up the distant active site and letting the appropriate substrate in. The enzyme then clamps down on the substrate and "reduces" it by replacing the hydroxyl group with a hydrogen atom. When released, the DNA building block is ready for use, the effector switches the machine off, and the enzyme is ready for the next operation.
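The latch-and-switch logic described above can be sketched as a toy model: the effector bound at the regulatory site determines which substrate the active site will lock onto, a mismatched substrate cannot be latched and diffuses out, and dATP at the activity site shuts the machine off entirely. The effector-to-substrate pairings and names below are illustrative only, not a biochemical reference:

```python
# Toy sketch of the allosteric selection described above. The mapping from
# effector to permitted substrate is hypothetical and for illustration only.
ALLOWED_SUBSTRATE = {
    "ATP":  "CDP",
    "dTTP": "GDP",
    "dGTP": "ADP",
}

def try_reduce(effector, substrate):
    """Return the deoxy product if the latch can close, else None."""
    if effector == "dATP_at_a_site":
        return None  # dATP at the overall activity site acts as an off switch
    if ALLOWED_SUBSTRATE.get(effector) != substrate:
        return None  # wrong base: latch cannot close, substrate diffuses out
    return "d" + substrate  # reduction: ribonucleotide -> deoxyribonucleotide

print(try_reduce("dTTP", "GDP"))  # matching substrate is reduced: dGDP
print(try_reduce("dTTP", "CDP"))  # mismatch is rejected: None
```

The point of the sketch is only the screening behavior: the same active site yields different products depending on which effector is bound, and rejects anything that doesn't match.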

Somehow, when there are too many building blocks floating around in the cell, an effector binds to a different site (the overall activity site), disabling the machine. It's uncanny how each part seems to know what's needed and how to provide it. This involves feedback from the nucleus, where genes respond to the supply by either locking the RNR enzymes or making more of them.

Catherine Drennan of the MIT team calls this enzyme "evolutionarily ancient" and speculates about its origin.

Deoxyribonucleotides are generated from ribonucleotides, which are the building blocks for RNAs -- molecules that perform many important roles in gene expression. RNR, which catalyzes the conversion of ribonucleotides to deoxyribonucleotides, is an evolutionarily ancient enzyme that may have been responsible for the conversion of the earliest life forms, which were based on RNA, into DNA-based organisms, Drennan says.

She buys into the "RNA World" scenario for the origin of life, a view that is loaded with problems. RNR is an enzyme made of protein. Did a world of floating RNA fragments somehow build this complex, multi-part protein machine before DNA-based organisms existed? That makes no sense. A ribozyme with enough "code" for RNR would itself be impossibly complex to imagine forming by chance. It would have no way to translate that code into a polypeptide without a ribosome, also made of RNA and protein. Finally, even if by multiple miracles a primitive RNR appeared with its effectors and started cranking out product, the "RNA world" would have no idea what to do with a bunch of DNA building blocks floating around. Notably, Drennan's paper in eLife says nothing about any of this. Instead, it praises the "elegant set of protein rearrangements" performed by RNR.


RNR is one of a multitude of complex, highly-specific, multi-component machines with moving parts. Found in the simplest bacteria and archaea all the way up to human beings, it deserves better than to be treated like hopeful junk that arose by chance and found a job by accident. It deserves to be honored as an "exquisitely designed" molecular machine that performs an "intricate interplay" of functions vital to life, just like an intelligent designer would envision, plan, and create.

Patients' rights tossed under the bus in the Lone Star State?

The Arrogance of "Doctor Knows Best"
Wesley J. Smith January 15, 2016 2:19 PM

The Texas Advance Directive Act (TADA) allows a hospital bioethics committee and doctors to veto wanted life-sustaining treatment if they believe the suffering thereby caused is unwarranted -- with the cost of care always in the unspoken background. It is a form of ad hoc health care rationing -- death panels, if you will -- that places the moral values and opinions of strangers over those of the patient and family.

Futile care theory would even allow strangers to veto the contents of a patient's written and expressly stated advance directive.

Texas Right to Life (among others) has been an adamant opponent of the law, attempting to get it repealed. This effort has been impeded repeatedly by the Texas Catholic Conference (see my article here) perhaps because the state's Catholic hospital association likes the law. Texas Alliance for Life (TAL) often carries the Catholic Conference's water on this matter, in agreement on this issue, ironically, with the utilitarian bioethics movement.

Why? It's a bit of a puzzlement. I don't doubt they think it is the right thing. But it should also be noted that hospitals benefit financially by refusing wanted but expensive treatment. Perhaps their social justice inclinations see limited resources as best spent on other patients.

In the wake of the Chris Dunn case, in which the patient -- conscious and aware -- clearly wanted life-sustaining treatment to continue, TAL defenders of futile care expose the "doctor knows best" arrogance of the futile care movement. From "Balancing the Rights of Patients and Doctors," in Public Discourse (my emphasis):

A person in possession of his mental faculties is not morally bound to choose treatments whose negative effects are disproportionate to any good that could come from them. By the law of transitivity, it would seem to follow that neither his doctor nor his surrogates are either. Some may say that patients are the only ones able to judge the proportionality of suffering due to life-sustaining treatments. In this case, those treatments decreased the ability of the patient to judge.

I have heard such excuses and rationalizations in futile care controversies again and again: The patient doesn't really know what is best; the family is acting on guilt; misplaced religious belief is forcing a wrong choice; they should leave such decisions to the "experts." Bah!

Besides, Catholic moral teaching -- at least, as I understand it -- allows the patient to decide when suffering being experienced supersedes the benefit being received. It does not give that decision to doctors or bioethicists. Thus, for example, St. John Paul II decided not to try to stay alive by any means necessary. He was not prevented from doing so by others as is done in futile care cases.

The article also exhibits some mendacity by omission when it discusses the refusal by other hospitals to take Dunn, while leaving out important facts:

It is telling that, even with the assistance of the hospital over several weeks to find another care provider, none would accept Chris's transfer, indicating that other doctors agreed with the attending physician's prognosis.

But patients caught up in futile care cases usually lose money for hospitals in our capitated funding system. Moreover, this whole Texas controversy began when Houston hospitals created a futile care policy and agreed to honor such determinations made by other institutions. Heads we win, tails you lose.

If continuing wanted treatment is the wrong thing to do, that should not be decided by a Star Chamber bioethics committee made up of colleagues who reflect corporate or institutional values, meeting in secret with no real transparency or accountability. Rather, if maintaining life when that is wanted is so egregious as to be inhumane, the controversy belongs in open court, with cross-examination, an official record, and a right to appeal.


Bioethics committees have a very important role to play as mediating bodies in the event of treatment disputes. But they should never be empowered to become institutionally authorized, quasi-judicial death panels.

The Watchtower Society's commentary on Goodness.

GOODNESS:
The quality or state of being good; moral excellence; virtue. Goodness is solid through and through, with no badness or rottenness. It is a positive quality and expresses itself in the performance of good and beneficial acts toward others. The most common words for “good” in the Bible are the Hebrew tohv and the Greek a·ga·thosʹ; a·ga·thosʹ is usually used in a moral or religious sense.

Jehovah’s Goodness. Jehovah God is good in the absolute and consummate sense. The Scriptures say: “Good and upright is Jehovah” (Ps 25:8), and they exclaim: “O how great his goodness is!” (Zec 9:17) Jesus Christ, though he had this quality of moral excellence, would not accept “Good” as a title, saying to one who addressed him as “Good Teacher”: “Why do you call me good? Nobody is good, except one, God.” (Mr 10:17, 18) He thus recognized Jehovah as the ultimate standard of what is good.

When Moses asked to see His glory, Jehovah replied: “I myself shall cause all my goodness to pass before your face, and I will declare the name of Jehovah before you.” Jehovah screened Moses from looking upon his face, but as he passed by (evidently by means of his angelic representative [Ac 7:53]) he declared to Moses: “Jehovah, Jehovah, a God merciful and gracious, slow to anger and abundant in loving-kindness and truth, preserving loving-kindness for thousands, pardoning error and transgression and sin, but by no means will he give exemption from punishment.”—Ex 33:18, 19, 22; 34:6, 7.

Here goodness is seen to be a quality that involves mercy, loving-kindness, and truth but does not condone or cooperate in any way with badness. On this basis David could pray to Jehovah to forgive his sins ‘for the sake of Jehovah’s goodness.’ (Ps 25:7) Jehovah’s goodness, as well as his love, was involved in the giving of his Son as a sacrifice for sins. By this he provided a means for helping those who would want that which is truly good, and at the same time he condemned badness and laid the basis for fully satisfying justice and righteousness.—Ro 3:23-26.

A Fruit of the Spirit. Goodness is a fruit of God’s spirit and of the light from his Word of truth. (Ga 5:22; Eph 5:9) It is to be cultivated by the Christian. Obedience to Jehovah’s commands develops goodness; no man has goodness on his own merit. (Ro 7:18) The psalmist appeals to God as the Source of goodness: “Teach me goodness, sensibleness and knowledge themselves, for in your commandments I have exercised faith,” and, “You are good and are doing good. Teach me your regulations.”—Ps 119:66, 68.

Goodness Bestows Benefits. Goodness can also mean beneficence, the bestowing of beneficial things upon others. Jehovah desires to express goodness toward his people, as the apostle Paul prayed for the Christians in Thessalonica: “We always pray for you, that our God may count you worthy of his calling and perform completely all he pleases of goodness and the work of faith with power.” (2Th 1:11) Many are the examples of God’s abundant goodness to those who look to him. (1Ki 8:66; Ps 31:19; Isa 63:7; Jer 31:12, 14) Moreover, “Jehovah is good to all, and his mercies are over all his works.” (Ps 145:9) With a purpose he extends good to all, that his goodness may bring many to serve him and that they may thereby gain life. Likewise, any individual exercising goodness is a blessing to his associates.—Pr 11:10.

As servants of God and imitators of him, Christians are commanded to prove what is God’s good and perfect will for them (Ro 12:2); they are to cling to what is good (Ro 12:9), to do it (Ro 13:3), to work what is good (Ro 2:10), to follow after it (1Th 5:15), to be zealous for it (1Pe 3:13), to imitate what is good (3Jo 11), and to conquer evil with it (Ro 12:21). Their doing of good is to be especially extended to those related to them in the Christian faith; additionally, it is to be practiced toward all others.—Ga 6:10.


A Related Term. Similar to the Greek word for good (a·ga·thosʹ) is another word, ka·losʹ. The latter denotes that which is intrinsically good, beautiful, well adapted to its circumstances or ends (as fine ground, or soil; Mt 13:8, 23), and that which is of fine quality, including that which is ethically good, right, or honorable (as God’s name; Jas 2:7). It is closely related in meaning to good, but may be distinguished by being translated “fine,” “right,” “honest,” or “well.”—Mt 3:10; Jas 4:17; Heb 13:18; Ro 14:21.

Friday, 15 January 2016

On single neighbour nations.

On our neighbours' minds III

Animal Minds: In Search of the Minimal Self:

New Scientist suggested, as one of its big ideas for 2015, that the ability of humans to talk to animals would transform what it means to be human. Actually, it wouldn't. But the ability of animals to understand what humans are saying would transform what it means to be an animal.

In a 2009 issue of Nature, Johan J. Bolhuis and Clive D. L. Wynne asked a key question: Can evolution explain how minds work? They identified serious flaws in the studies of animal minds. One of them is interpreting animal behavior as if it were human behavior (anthropomorphism):

For instance, capuchin monkeys were thought to have a sense of fairness because they reject a slice of cucumber if they see another monkey in an adjacent cage, performing the same task, rewarded with a more-sought-after grape. Researchers interpreted a monkey's refusal to eat the cucumber as evidence of "inequity aversion" prompted by seeing another monkey being more generously rewarded. Yet, closer analysis has revealed that a monkey will still refuse cucumber when a grape is placed in a nearby empty cage. This suggests that the monkeys simply reject lesser rewards when better ones are available. Such findings have cast doubt on the straightforward application of Darwinism to cognition. Some have even called Darwin's idea of continuity of mind a mistake.

It is a mistake. Continuities can be merely apparent, not actual.

Consider, for example, the laptop computer vs. the typewriter. Both feature the QWERTYUIOP keyboard. That might suggest a physical continuity between the two machines. The story would run thus: Computer developers added more and more parts to the typewriter, and subtracted some, until they had transformed the typewriter into a laptop.

But of course, they didn't. They adapted a widely recognized keyboard layout to an entirely new type of machine. Continuities are created by history, not laws. If we don't know the history, we don't know whether a similarity reflects continuity or not.

Bolhuis and Wynne continue, "In other words, evolutionary convergence may be more important than common descent in accounting for similar cognitive outcomes in different animal groups."

Indeed. There is no specific type of brain uniquely associated with intelligent behavior in animals (other than humans). There is, however, convergence in intelligent behavior among vertebrates (crows) and invertebrates (octopuses).

Yet most invertebrate species do not stand out in intelligence. That fact should receive more attention than it does. The nature and origin of intelligence may be quite different from what researchers have supposed.

We have tentatively identified some patterns. Metabolism and anatomy may play a larger role than earlier suspected. For example, reptiles can show intelligent behavior when their metabolism permits, as can invertebrates with sophisticated appendages, such as octopuses and squid.

It is even worth asking whether individual animals demonstrate more intelligence if they live with humans. For one thing, they may live much longer and in more complex environments.

Some might protest that when humans eliminate the lethal razor of natural selection, "daily and hourly scrutinizing, throughout the world, every variation, even the slightest," we cause animals to become less intelligent.

But is intelligence highly selected in nature? As engineers know all too well, new solutions to any problem are accompanied by numerous failures. The "smart crow" and "smart primate" tests, for example, are devised by humans who systematically reward the animals for carefully designed feats of intelligence, but do not destroy them for failure. Blind nature rewards and penalizes more haphazardly than that.

Then there is the fact that intelligent animals often do not learn from each other. In some intelligent bird species, one bird can solve a problem but others do not learn the solution by copying that bird, even if it is obviously in their evolutionary interests to do so. Thus the species does not develop a body of knowledge. As each clever bird dies, all gains are wiped out. There is no vast history of solved problems, as there is in human civilization, for even the cleverest bird to build on.

Bolhuis and Wynne offer a sober prediction:

As long as researchers focus on identifying human-like behaviour in other animals, the job of classifying the cognition of different species will be forever tied up in thickets of arbitrary nomenclature that will not advance our understanding of the mechanisms of cognition. For comparative psychology to progress, we must study animal and human minds empirically, without naïve evolutionary presuppositions.

They're right, and here is a useful illustration of the problem: A recent article on the role of epigenetics in the mating chances of male fish refers to their social status. I questioned the use of the term "social status" in relation to the behavior of fish, and was promptly informed by a knowledgeable fish hobbyist that "All biologists understand what is meant by this."

If so, that's a problem. "Social status" is a term developed by human beings to describe a conscious experience among humans. But animals may not experience their "social status" in the same way we do. A bee may be fed "royal jelly," and become a queen -- but is she conscious of her status? Are the bees that tend her conscious of it? The insect mind may not even work in a way that enables such an understanding.

So where in this spectrum, ranging from merciful oblivion through acutely painful knowledge, do male fish fighting over mates fit? Do they experience the conflict as "selves"? We simply don't know, and that fact should inspire caution in our choice of terminology. Careless words can subvert careful questions.

Philosopher Vincent Torley, who wrote his thesis on animal mind, agrees respecting the bees, noting, "A neural representation of each individual's ranking within a group does not require its possessor to have the highly abstract notion of 'social status.' Indeed, a representation of a ranking would not require consciousness at all."

And to think that among human beings, a sense of social status is so finely honed that it can depend on concepts as abstract and immaterial as the numbers in a "Hollywood" or "power" zip code...

So What Sorts of Consciousness Might Animals Have?

Philosopher Thomas Nagel is famous for asking the question, "What Is It Like to Be a Bat?" (1974). He meant that "an organism has conscious mental states if and only if there is something that it is like to be that organism." If so, the bat experiences events, as opposed to merely being one of them.

Is the bat a "self"? A "self" is more than the mere drive to continue existing that distinguishes all life from non-life. Self must also be more than sentience (an earthworm's reaction to light, for example, need not be conscious). It implies the existence of not-self in a complex environment. It does not, however, imply immortality or a capacity for abstract thought.

Perhaps the simplest way of putting it would be that a dog not only wants something, but he knows what he wants and whether he has gotten it -- and may learn various skills along the way for getting it again, and intentionally remember them. We could call this intentionality.

Vincent Torley's thesis is titled, "The Anatomy of a Minimal Mind." I prefer to use the term "minimal self" for individual animal intelligence. As a layperson, I find it easier to understand; it does not raise so many complex questions as "mind."

For example, Middle Dog resents his position in a household because he wants to be Top Dog. I find his canine mind generally opaque. However, I can see that he consciously experiences his resentment, even if it might lack reason, moral sentiment, or empathy. And Middle Dog will know if he succeeds in his quest or not. (So, probably, will everyone else.)

Some, like philosopher Edward Feser, argue that animal minds cannot form concepts, whereas others claim that chimpanzees are entering the Stone Age.

Torley takes a middle view: Animals can, it appears, form concepts, in the sense of "same vs. different" or "more vs. less." But in the absence of language, they typically cannot process abstractions. Nor do intelligent animals create symbols, understand abstract rules, or probe beneath mere perceptions, all of which are everyday matters for humans.

They do not, for example, survey their own mental states ("Why do I think I should bark at the moon?"). Yet humans of average intelligence may often ask themselves, "Why am I doing this anyway?"

As Torley says, "A defender of animal rationality could still argue that non-human animals might still possess a very simple, primitive concept of 'self,' which is 'built into' their psyches":

I have argued that the key reason why we can reasonably impute mental states to these creatures, and describe them as having minimal minds, is that both their internal representations of the outside world (minimal maps) and their patterns of bodily movement robustly instantiate a key feature that was formerly thought to be the hallmark of mental states: intrinsic intentionality.

No "Tree of Intelligence" Pattern

Naturalism, as a philosophical commitment, requires us to start with the assumption that the human mind is merely the outcome of a long, slow, random process, winding through various forms of animal mind. This suggests we can learn a great deal about the human mind by studying animal minds.

The empirical evidence does not really support that view. Not only is the human mind more powerful by orders of magnitude, but animal minds show no consistent tree of intelligence pattern in their development that would clearly support the naturalist interpretation.


We do not yet have a theory that sheds light on why some animal species appear much more intelligent than others, leaping past conventional taxonomic classifications. But seeing past Darwin to the question of how information really originates may help us acquire one.

Darwinism Vs. the real world XXIV

The Immune System: An Army Inside You:
Howard Glicksman January 14, 2016 6:20 PM 

Editor's note: Physicians have a special place among the thinkers who have elaborated the argument for intelligent design. Perhaps that's because, more than evolutionary biologists, they are familiar with the challenges of maintaining a functioning complex system, the human body. With that in mind, Evolution News is delighted to offer this series, "The Designed Body." For the complete series, see here. Dr. Glicksman practices palliative medicine for a hospice organization.


The body is made up of matter organized into trillions of cells that make up its tissues and organs. Since all matter must follow the laws of nature, this means that the body must do the same. In earlier articles I have shown that the body must overcome the laws of nature to survive. The sodium-potassium pumps, for example, are needed to allow each cell to control its volume and chemical content by resisting diffusion and osmosis. There must be enough albumin in the blood to resist the natural force of hydrostatic pressure and maintain blood volume and flow to the tissues. The sympathetic nervous system must increase the cardiac output and peripheral vascular resistance to elevate the blood pressure sufficiently to counteract gravity when we stand up.

With the emergence of life, not only did the cellular and organic make-up of the body require specific innovations to overcome the laws of nature and survive, it also had to learn how to deal with what it encountered in its environment. Life does not take place in a vacuum or in the imaginations of evolutionary biologists. Hemostasis and the clots it forms allow the body to prevent itself from bleeding to death when it is bumped, scraped, or cut. And the bones, muscles, and nerves work together so the body can detect danger and avoid or defend itself from it.

In my last article I showed that the body is always being exposed to microorganisms, such as bacteria, viruses, and fungi, which are present in nature but are too small to see with the naked eye. If these microbes invade the body and become widespread, they can cause a lot of damage. We saw that the first line of defense against infection is the skin and the epithelial tissues that line the respiratory, gastrointestinal, and genitourinary tracts.

If microbes breach these passive barriers and enter the tissues, the second line of defense swings into action. This is called the immune system, and it consists of numerous different cells and proteins that work together to fight and usually defeat the invading force. For our earliest ancestors to survive long enough to reproduce, they would have needed this two-pronged defense. Neither the passive barriers that protect the underlying tissues nor the immune system is capable, on its own, of protecting the body from life-threatening infection. They both have to be present and in working order.

In ancient times, when invaders penetrated the surrounding protective wall of a town, the defenders generally had four important tasks to perform very quickly. The first was to detect and positively identify the enemy. The second was to sound the alarm so others could help join in the defense. The third was to provide information on the enemy to those in reserve. And the fourth was to repel, wound, or kill the intruders to protect the residents. Similarly, once microbes get past the epithelium and penetrate into the tissues below, the body's immune defense must have the ability to perform these same four important tasks as well.

The first requires that the cells and proteins of the immune system have a way of detecting the presence of microbes and of identifying them as an invading force that needs to be destroyed. In other words, are these cells host cells (self), or foreign cells (not self)? The job of the immune system is to kill invading microorganisms, so it had better be sure that what it's encountering is indeed foreign and in need of destruction; otherwise it may end up killing its own cells by friendly fire. As with hemostasis, it's important that the immune system only turn on when it's needed and turn off and stay off when it's not.

After determining that there is a microbial invasion going on, the second task of the immune system is to send out messages so that it can bring other forces to the field. This involves releasing chemicals that not only increase the blood flow to the site of infection, allowing immune cells and proteins to leak out of the blood through the capillaries, but also attract them to the battlefield. This causes the area around the infection to swell and become red -- what we call inflammation.

In addition to rallying the troops, the third task of the immune system is to provide information about the whereabouts and nature of the enemy to those in reserve. This is accomplished by some of the first responder immune cells snipping off pieces of the dead microbes and sending them to the forces in reserve so that they can better prepare for what's awaiting them.

Finally, once the weapons of the immune system have been brought to the site of infection, it's up to them to wound or kill the invading force to prevent the infection from spreading further. The immune cells and proteins involved have many different weapons at their disposal to accomplish this task.

As with most military operations, the body's immune system has regular and specialized forces. The regular forces make up what is called the innate (natural) immune system. It's the microbial defense system with which everyone is born and it is the first to encounter the enemy, reacting within minutes. But this system, on its own, is usually not able to protect the body from overwhelming infection. Many pathogenic microorganisms have the ability to remain invisible and resistant to its strategies, allowing them to proliferate and spread throughout the body.

The specialized forces are usually needed to bolster and improve the effects of the innate immune system. They make up what is called the adaptive (acquired) immune system. This system usually requires a few days to adjust to the idiosyncrasies of the invading microbes. But when it swings into action, it provides the extra intelligence, firepower, and precision that usually allow it, working alongside the innate immune system, to get the job done. In contrast to the innate immune system, which is present at birth, the adaptive immune system develops over time as the body is exposed to more and more different microbes in its environment.

Comparing our immune system to a military operation in which an enemy must be tracked down, identified, and destroyed is an apt analogy. Now that you have a general idea of how the immune system works, we will press on. Next time, we'll look at the first responders of the innate immune system and how they do their jobs.


Evolutionary biologists usually point to the ability of microorganisms to develop resistance to the body's immune system and medical therapies through genetic modification as evidence that life came about by chance and the laws of nature alone. However, this assumes the presence of the hardware needed not only to survive, but also to reproduce. Once you have the system in place, it's obvious that life can change over time, which is all that the word evolution denotes. However, the ability of life to change over time doesn't necessarily mean, as evolutionary biologists suggest, that it came about by chance and the forces of nature alone. One need only consider what it takes for the body to stay alive to recognize that important truth.