The origin of biological complexity is not yet fully explained, but several plausible naturalistic scenarios have been advanced to account for this complexity. “Intelligent design” (ID) advocates, however, contend that only the actions of an “intelligent agent” can generate the information content and complexity observed in biological systems.
ID proponents believe evolutionary theory is a failed enterprise that offers no credible explanations for the origins of complexity. They fault evolutionary scenarios for lacking sufficient detail. Furthermore, ID advocates claim to have presented empirical evidence that an “intelligent agent” designed at least some complex biological systems.
In contrast, this paper reviews several scientific models for the origin of biological complexity. I argue that these models offer plausible mechanisms for generating biological complexity and are promising avenues of inquiry. I take issue with ID proponents who dismiss such models for lack of “sufficient causal specificity,” arguing that this criticism is unwarranted. Finally, I look briefly at ID’s proposed explanation for the origin of biological complexity, and consider William Dembski’s “empirical evidence” for the design of bacterial flagella, arguing that his supposed evidence is biologically irrelevant.
The problem of complexity
Biological systems are staggeringly complex. Professional biologists devote their careers to describing those complexities, dissecting those systems by chemical and physical methods, and characterizing their structural components and functional interactions. How can such complex systems evolve? We understand the ways in which the individual components of a complex system can be altered in structure and function by mutation, and the way in which natural selection favors one form over another. Furthermore, in many cases we have traced the family relationships among different nucleic acid and protein variants.
Envisioning ways by which natural selection can construct biochemical and molecular systems that involve dozens of proteins integrated in complex and highly specific ways is much more difficult. How could all the necessary proteins be selected simultaneously with a common endpoint as the goal? Unless each intermediate construct possessed at least partial function, how could natural selection act?
This is the argument put forth by Michael Behe in his book Darwin’s Black Box: The Biochemical Challenge to Evolution (1996), and championed by ID advocates ever since. Behe contends that the structural and functional complexities found throughout biological systems could not have been established through evolutionary processes. He argues that the bacterial flagellum, for example, is an irreducibly complex system, in which the individual components have no function apart from the whole, and therefore could not have been selected for in nature.
By irreducibly complex I mean a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning. An irreducibly complex system cannot be produced directly (that is, by continuously improving the initial function, which continues to work by the same mechanism) by slight, successive modifications of a precursor system, because any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional. An irreducibly complex biological system, if there is such a thing, would be a powerful challenge to Darwinian evolution. (Behe 1996: 39)
Biologists recognize that integrated system complexity is a feature of living systems. That is, some biological systems consist of component parts that interact in a coordinated way so that the system as a whole exhibits a specific function. It is questionable, however, whether any such systems are irreducibly complex as Behe claims (see Coyne 1996; Doolittle 1997; Miller 1999; Shanks and Joplin 1999). But even if examples of irreducible complexity are found in living systems, the origins of such systems are not necessarily outside the realm of natural processes (Orr 1996; Miller 1999; Thornhill and Ussery 2000; Catalano 2001). That the function of a highly integrated system may collapse with the removal of a component part does not mean that the system in question cannot be deconstructed to reveal an origin by undirected evolutionary processes.
Behe was not the first to recognize that biological complexity poses a challenge (see for example Cairns-Smith 1986). During the past decade, the discipline of complexity science has blossomed, attracting an interdisciplinary contingent of scientists, including biologists interested in the very question Behe addresses: Can natural mechanisms account for the observed complexity of biological systems? (See Adami and others 2000; Strogatz 2001; Adami 2002; Carlson and Doyle 2002; Csete and Doyle 2002.)
Naturalistic models for the evolution of biological complexity
Several models have been advanced to account for a naturalistic origin of the complexity seen in biological systems. Four of these models are briefly described below.
Incremental additions model
The incremental additions model hypothesizes that an initial association of components favorable to some function may become an essential association through time (Lindsay 2000; Orr 1996, 2002). The complexity of the system may increase with the addition of new components. Suppose, for example, that a molecule carries out a particular catalytic function. If an association with another molecule enhances that function — for example, through structural stabilization — then natural selection can favor the association. The second molecule is initially beneficial although not essential. The second molecule may become essential, however, if an inactivating mutation in the first molecule is compensated for by the presence of the second.
There are numerous examples of molecules whose function is enhanced in the presence of another molecule. Consider the activity of RNase P (an RNA-protein complex responsible for processing transfer RNA molecules). The RNA component of the complex possesses the catalytic activity and has been shown to function without its protein partner, albeit at much lower activity (Reich and others 1988; Altman 1989).
Work done with hammerhead ribozymes (RNA molecules capable of cleaving other RNA molecules) has demonstrated that the activity of one of these ribozymes increases 10- to 20-fold in vitro in the presence of a non-specific RNA-binding protein (Tsuchihashi and others 1993; Herschlag and others 1994). Furthermore, ribozymes are routinely generated whose activity can be regulated by other molecules (Soukup 1999), and in vitro evolution experiments have generated protein-dependent ribozyme ligases (Robertson and Ellington 2001).
Group II self-splicing introns, although capable of independent cleavage of RNA under some conditions, require stabilization by maturase proteins for effective in vivo functioning. It is generally accepted that the catalytically active RNA components of spliceosomes are able to function because spliceosome proteins stabilize a functional conformation (Lodish and others 2003). Therefore, one might speculate that a ribozyme could lose independent activity through a mutational event and yet continue to function in association with a protein molecule that promotes or stabilizes a catalytically active ribozyme structure.
Scaffolding model
Scaffolding is another mechanism whereby irreducible complexity might be established (Lindsay 2000; Shanks and Joplin 2000; Orr 2002). In the incremental additions model, a beneficial association of components becomes an essential association because mutational events compromise the independent activity of one or more component parts. In the scaffolding model, superfluous components are lost, leaving a system in which the remaining components appear tightly matched, as if they were specifically designed to fit and function together. The arch is an example of an irreducibly complex structure that requires scaffolding for its construction (Cairns-Smith 1986; Lindsay 2000; Shanks and Joplin 2000; Schneider 2000; Orr 2002). Scaffolding may also be functional rather than merely structural in nature.
Many biochemical systems are characterized by “redundant complexity” (Shanks and Joplin 1999, 2000). Biochemical pathways rarely function in isolation; rather, one pathway interconnects with another (see Nelson and Cox 2000). For example, carbon atoms entering the Calvin-Benson cycle within a chloroplast may find their way into any one of many different molecules and be shunted into other pathways. There are also many cases of a redundancy of enzymatic components, or variant isoforms. Gene duplications increase the number of genes in a species, which can then evolve in different ways. This branching pattern in protein evolution is significant. For example, several different yet related hemoglobin molecules are utilized in human development. These variant forms are understood to have arisen from gene duplication, mutation and selection processes (Lodish and others 2003).
An initial loss of redundant components in a biochemical pathway will not destroy function. However, at the point where a system cannot endure further loss of components without losing function, an irreducible system exists. The redundancy of biochemical components in such a scenario serves as scaffolding. Shanks and Joplin (2000) evaluate this model in reference to several of Behe’s examples of irreducibly complex biochemical systems. Robinson (1996) has also taken a similar approach by explaining in plausible evolutionary terms the origin of vertebrate blood-clotting cascades.
Co-option model
Natural selection acts upon an existing set of structures within a particular environmental context. An altered environment demands altered responses from an organism. Consequently, it should not be surprising to find in the fossil record and in comparative anatomical and physiological studies evidence that some structures have been modified through time to serve different functions. In fact, a common theme of biological evolution is that existing structures are often put to new uses, and new structures are created from the old. “Co-option” is the term used to describe the recruitment of existing structures for new tasks. This recruitment can explain evolutionary increases in biological complexity.
Genes co-opted for new functions can give rise to developmental and physiological novelties (Eizinger and others 1999; Ganfornina and Sanchez 1999; Long 2001; True and Carroll 2002). Genes can acquire new functions when protein-coding sequences are altered, when coding sequences are spliced differently during RNA processing, or when spatiotemporal patterns of gene expression are changed (True and Carroll 2002). Gene duplication followed by differential mutation will give rise to new protein configurations, and the alteration of regulatory controls for gene expression can result in significant developmental and morphological changes.
Many complex biological systems are characterized by a tight integration of component parts. Behe (1996) has argued that it is highly unlikely that such systems could arise through a simultaneous co-evolution of numerous parts or a direct serial evolution of the necessary components. But complex systems, even irreducibly complex ones, need not be assembled this way.
New associations of existing substructures or proteins may give rise to new functions, so it is not necessary for the system to evolve in toto. Many critics of ID have pointed this out (Miller 1999; Thornhill and Ussery 2000; Miller 2003). A particularly instructive example of probable co-option is seen in the evolution of the Krebs (citric acid) cycle. Meléndez-Hevia and others (1996) recognized that the Krebs cycle posed a real difficulty to evolutionary biologists because intermediate stages in its evolution would have no functionality. An analysis of the component enzymes and cofactors, however, revealed that the component parts and intermediate stages had functions apart from their role in the Krebs cycle.
Another example is the V(D)J gene splicing mechanism in vertebrate immune systems (Thornhill and Ussery 2000). True and Carroll (2002) also present examples of how multiple genes linked by a gene regulatory system can be co-opted as a unit for a new function; their examples include the evolution of butterfly eyespots, vertebrate limbs, complex leaves in plants, and feathers.
Emerging complexity model
Some complexity theorists believe that laws of self-organization exist that play a role in the evolution of biological complexity (Kauffman 1993, 1995; Solé and Goodwin 2000). Theoretical work in this area has expanded rapidly in the past decade (see, for example, Camazine and others 2001). The interaction of various component parts, it is argued, leads inevitably to complex patterns of organization.
One measure of complexity is the information content of a system, and Schneider’s “ev” program has demonstrated that new information can indeed emerge spontaneously. The “ev” program was constructed to simulate evolution by mutational and selection events. In the program, certain DNA sequences acted as “recognizer genes”, while other sequences were potential binding sites for the recognizer molecules. During simulations, both the recognizer genes and potential binding sequences were allowed to mutate. Selection was based upon the successful binding of recognizer molecules to appropriate binding sites. The change in the complexity of the system was evaluated as a change in the information content of the DNA sequences. Specificity between recognizer genes and corresponding binding sites increases the information content of the system, which is measured in bits according to Shannon information theory. Beginning with a random genome, the “ev” program leads to the evolution of DNA binding sites and a consequent increase in information. Furthermore, in the simulation, binding sites and recognizer genes co-evolved, becoming an irreducibly complex system. The results showed that processes of Darwinian evolution do generate information as well as irreducibly complex systems (Schneider 2000).
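The information measure at the heart of “ev” can be illustrated with a short sketch. This is a simplification of Schneider’s R_sequence statistic (it omits his small-sample correction), and the example sequences are invented for illustration only:

```python
import math

def site_information(sites):
    """Shannon information (in bits) carried by a set of aligned DNA
    binding sites: at each position, the maximum entropy for four bases
    (2 bits) minus the entropy of the observed base frequencies.
    A simplified version of Schneider's R_sequence, without the
    small-sample correction."""
    n = len(sites)
    total_bits = 0.0
    for pos in range(len(sites[0])):
        counts = {}
        for site in sites:
            counts[site[pos]] = counts.get(site[pos], 0) + 1
        entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
        total_bits += 2.0 - entropy
    return total_bits

# Perfectly conserved sites carry the maximum 2 bits per position;
# unrelated sequences carry much less information.
conserved = ["TATAAT"] * 4
scrambled = ["TATAAT", "ACGCGA", "GCTTGC", "CGAGCT"]
print(site_information(conserved))   # 12.0 bits (6 positions x 2 bits)
print(site_information(scrambled))
```

In the simulation, selection for correct recognizer–site binding drives this quantity upward from the near-zero value of a random genome, which is exactly the increase in information content that Schneider reports.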
Conceivability vs plausibility: ID’s response
The above models are based upon natural processes that are subject to experimental investigation. Evidence supporting these models is accumulating. These models have been evaluated by ID advocate William Dembski in his book No Free Lunch (2002a). Dembski declared each model inadequate, with his most specific criticism directed toward Schneider’s “ev” program. He rejected Schneider’s claim that information had been generated de novo and accused Schneider of smuggling information into the program by specifying the program’s conditions for survival of “organisms” (Dembski 2002a). From a population biologist’s perspective, the criteria used by Schneider were perfectly reasonable. Nevertheless, Schneider eliminated the special rule that Dembski objected to, retested the program, and found the same results (Schneider 2001a, 2001b).
Arguing more globally, Dembski claimed that the No Free Lunch Theorems make it clear that the program could not do what Schneider claimed. David Wolpert, however, one of the developers of the No Free Lunch Theorems, says that Dembski applies the theorems inappropriately (Wolpert 2003).
Dembski’s criticisms of the other models were more general. He and other ID advocates complain that naturalistic models for the evolution of biological complexity lack causal specificity. According to Dembski, “Causal specificity means identifying a cause sufficient to account for the effect in question” (Dembski 2002a: 240). He argues that, until sufficient details are worked out (presumably in terms of the order in which components became associated, the manner by which these assembled components interacted to improve function, and the mutations that led to obligate dependency) there is no way to evaluate naturalistic scenarios. “Lack of causal specificity,” he says, “leaves one without the means to judge whether a transformation can or cannot be effected” (Dembski 2002a: 242).
Dembski accuses evolutionists of being satisfied with a very undemanding form of possibility, namely, conceivability (Dembski 2002b). Allen Orr reviewed No Free Lunch and took Dembski to task for using biologically irrelevant probabilities and requiring unrealistic details of causal specificity (Orr 2002). In his rebuttal, Dembski said that, for Orr, “Darwinism has the alchemical property of transforming sheer possibilities into real possibilities” (Dembski 2002b). He went on to say that “Orr substitutes a much weaker demand for ‘historical narrative,’ which in the case of Darwinism degenerates into fictive reconstructions with little, if any, hold on reality.”
Dembski positions himself as the critical empiricist, asking only for what all scientists should ask — details by which to determine the validity of Darwinist claims. Howard Van Till reviewed No Free Lunch and commented upon Dembski’s demand for causal specificity:
Many scientific hypotheses regarding the manner in which various transformational processes may have contributed to the actualization of some new biotic structure might fall short of full causal specificity — even though they may be highly plausible applications of mechanisms that are at least partially understood. When that is the case, the ID approach tends to denigrate them as nothing more than “just-so stories” and to remove them from further consideration. (Van Till 2002)
Dembski’s demand for greater details is reminiscent of earlier anti-evolutionists’ demands for more transitional fossils. Undoubtedly, there will always be gaps in the fossil record, and there will always be room for more details in evolutionary scenarios. The biologist’s search for these details is ongoing.
ID’s explanation for the origin of biological complexity
Biologists have proposed a number of models to account for biological complexity. ID proponents have criticized these models for lacking sufficient detail. It is instructive then to examine ID’s own explanations for the origin of biological complexity. Dembski (2002a) claims that certain types of biological systems, such as Behe’s “irreducibly complex” systems, must have been designed by an intelligent agent, because they possess a characteristic he calls “specified complexity.” It is possible, he says, to distinguish objects that were designed from those that arose by natural mechanisms because only designed objects have this characteristic (Dembski 1998, 2002a). ID advocates offer no models to explain the processes by which biological complexity came to be. They argue, nevertheless, that “specified complexity” is empirical evidence that the observed structure or function was intentionally designed.
How can we know that an object possesses “specified complexity”? Dembski says that structures or events that are highly complex will have a low probability of occurring by chance. Therefore a probability assessment must first be made. Because even rare or improbable events might occur by chance if given enough time, Dembski (1998) has set a probability value of 10⁻¹⁵⁰ as a criterion for design.
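The arithmetic behind this criterion is easy to sketch. In the illustration below (the sequence lengths and the name DESIGN_BOUND_LOG10 are my own choices, not figures from Dembski), the single-draw probability of one specific amino-acid sequence crosses the 10⁻¹⁵⁰ bound at roughly 115 residues:

```python
import math

DESIGN_BOUND_LOG10 = -150  # Dembski's universal probability bound, 10^-150

def log10_prob_specific_sequence(length, alphabet_size=20):
    """log10 of the probability of drawing one particular sequence of the
    given length uniformly at random from an alphabet of the given size
    (20 for amino acids)."""
    return -length * math.log10(alphabet_size)

for n in (50, 100, 150):
    lp = log10_prob_specific_sequence(n)
    print(f"{n} residues: p = 10^{lp:.1f}, below design bound: {lp < DESIGN_BOUND_LOG10}")
```

Note what the calculation describes: a single uniform random draw, that is, pure chance. As the discussion of the flagellum below makes clear, that is not the process evolutionary biology proposes.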
To be specified, an object or event must possess a pattern independent of or detachable from the nature of the object or event in question (Dembski 1998). In the movie Contact, for example, SETI researchers interpret a radio signal as a sign of extraterrestrial intelligence because the signal contains the first 100 prime numbers. That particular sequence of numbers is specified because it has no inherent relationship with radio waves and is therefore independent of the radio waves themselves. Finally, a designed object or event, regardless of its complexity or specificity, cannot be the outcome of a deterministic natural law.
ID proponents argue that certain biological systems exhibit specified complexity and therefore must have been intentionally designed. But is specified complexity a reliable indicator of design? The validity of Dembski’s approach is questionable at best. Flaws in his argument have been pointed out previously (see for example, Orr 1996, 2002; Miller 1999, 2003; Schneider 2001a; Van Till 2002). But perhaps the best way to evaluate ID’s claim is to consider the application of their criteria to a specific example.
The bacterial flagellum: ID’s test case
Dembski (2000) says, “Design theorists are not saying that for a given natural object exhibiting specified complexity, all the natural causal mechanisms so far considered have failed to account for it and therefore it had to be designed. Rather they are saying that the specified complexity exhibited by a natural object can be such that there are compelling reasons to think that no natural causal mechanism is capable of producing it.” ID advocates have presented the bacterial flagellum as a biological structure that is clearly the result of design. Dembski’s application of his own complexity-specification criterion in the case of the bacterial flagellum, however, fails to demonstrate that the flagellum is either complex or specified (Van Till 2002).
Dembski’s calculation of the probability for the origin of the flagellum treats the flagellum as a discrete combinatorial object that self-assembled by pure chance. In other words, all the proteins spontaneously formed by the chance coming together of amino acids in the correct order, followed by the chance assembly of those proteins in the correct arrangement. This is not an evolutionary scenario ever postulated by biologists (Miller 2003; Van Till 2002). Evolutionists envision a far different scenario. Proteins are not built or assembled with the intent to construct a flagellar system. Protein variants appear through time, forming new interactions and taking on new functions. Protein assemblies that contribute to the reproductive success of the organism are maintained and shaped by natural selection.
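The gulf between single-step chance and cumulative selection can be made concrete with a toy simulation in the spirit of Dawkins’s “weasel” program. This is a deliberate caricature, not a model of real evolution: actual selection has no fixed target sequence, and the target and parameters below are arbitrary choices of mine.

```python
import random

random.seed(1)
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
TARGET = "MKVLITGAGSGIG"  # arbitrary 13-residue "functional" sequence

def generations_to_target(pop_size=100, mut_rate=0.05):
    """Cumulative selection toy: each generation, mutate the current best
    sequence and keep whichever variant (or the parent itself) best
    matches the target. Because partial matches are retained, the search
    succeeds vastly faster than waiting for the whole sequence to appear
    in one chance draw (p = 20^-13, about 10^-17 per draw)."""
    parent = "".join(random.choice(AMINO_ACIDS) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        pool = [parent]  # elitism: fitness never decreases
        for _ in range(pop_size):
            pool.append("".join(
                random.choice(AMINO_ACIDS) if random.random() < mut_rate else c
                for c in parent))
        parent = max(pool, key=lambda s: sum(a == b for a, b in zip(s, TARGET)))
        generations += 1
    return generations

print(generations_to_target())  # terminates quickly; partial matches accumulate
```

The point is not that evolution aims at targets; it is that a “discrete combinatorial object” calculation assumes away precisely the cumulative mechanism whose adequacy is in question.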
Although Dembski (2002a: 19) stated that, in calculating the probability of an event, it is necessary to take into account all the relevant ways an event might occur, he himself failed to do so. By calculating only the probability that the flagellum arose by sheer chance, Dembski cannot justify his claim that the flagellum is a product of design (Van Till 2002). Dembski (2003) responded to such criticisms by stating that it was not his intention to “calculate every conceivable probability connected with the stochastic formation of the flagellum ... My point, rather, was to sketch out some probabilistic techniques that could then be applied by biologists to the stochastic formation of the flagellum.” Dembski then challenged his critics to calculate their own probabilities using whatever scenario they wish.
The bacterial flagellum is indeed a discrete combinatorial object, and the self-assembly that I describe is the one we are left with and can compute on the basis of what we know. The only reason biologists would refuse to countenance my description and probabilistic calculations of self-assembly is because they show that only an indirect Darwinian pathway could have produced the bacterial flagellum. But precisely because it is indirect, there is, at least for now, no causal specificity and no probability to be calculated. (Dembski 2002c)
There will always be a level of uncertainty in elucidating an evolutionary pathway for the origin of a flagellum or any other biological system. Dembski hides behind this uncertainty, content to continue using a pure chance model regardless of the fact that it bears no relationship whatsoever to our understanding of evolutionary processes.
ID proponents claim that biologists are engaged in a program of inquiry that is doomed to fail. According to ID proponents, a naturalistic explanation for the origin of genetic information and complex biological organization is not possible. ID proponents assert that they have developed rigorous criteria by which design in nature can be detected, but they have yet to demonstrate the validity of those criteria. Furthermore, they fail to engage fully with the naturalistic scenarios that evolutionists have proposed to explain the origins of biological complexity.
Certainly much remains to be learned about the evolution of complexity, but there is every reason to believe it happened by natural processes. Consider for example the following case. In 1966, Kwang Jeon observed that his cultures of amoebae were dying as a result of a bacterial infection (Jeon 1991). The bacteria had apparently escaped digestion in a food vacuole and were reproducing within the amoebae. Over a period of time, some of the cultures began to recover. Bacteria were still present in the surviving amoebae, though at a much reduced level. Jeon was able to show that the bacteria had become dependent upon their host cell and the host cell was dependent upon the bacteria. Additional work demonstrated that genetic information lost from the bacterial and amoeba genomes had led to their obligate relationship. A mutually obligate endosymbiosis was established, creating what is essentially a new cell organelle. Two component systems became associated, mutated, and are now irreducibly linked to one another. Perhaps ID proponents will argue that the complexity is not sufficient to have required the action of an intelligent agent, but the point here is that undirected natural causes are all that are needed to explain an observed increase in complexity and generation of an irreducible system.
Biologists have advanced plausible naturalistic scenarios for the origins of biological complexity. These scenarios are based upon an understanding of established natural processes. To dismiss them as merely conceivable stories is unwarranted. To demand a detailed chain of causality for evolutionary scenarios is unrealistic. To insist that design has been detected in the bacterial flagellum by calculating the probability of its assembling by pure chance is simply wrong.