Some considerations about the theory of intelligent design

The so-called theory of intelligent design (ID) has gained a growing reputation in Anglo-Saxon culture, becoming a subject of public debate. The approaches that constitute the core of this proposal, however, have been poorly characterized and systematized. The three most significant authors of ID are certainly Michael Behe, William Dembski and Stephen Meyer. Beyond the differences that can be distinguished in the work of each of them, the central fact in their arguments is the complexity of living organisms, which, according to these authors, escapes any kind of natural explanation. Indeed, according to the authors of ID, the irreducible complexity that can be detected in the natural world would allow one to infer design in a scientifically valid way, even though many of them prefer to remain silent regarding the identity and attributes of the designer. We think that a deep epistemological confusion underlies this proposal, since its very structure combines methodologies that are beyond the scope of historical and natural evolutionary theories. We also reject the claim that ID is a legitimate scientific theory, because it does not exhibit the classical characteristics that scientific knowledge must have.

Key terms: epistemology, evolution, intelligent design, science

Corresponding author: Juan Eduardo Carreño, M.D., Ph.D., Laboratorio de Fisiología Integrativa y Molecular, Universidad de los Andes, San Carlos de Apoquindo 2200, Santiago, Chile. Phone: 56-2-4129344, Fax: 56-2-2141756, Email:


INTRODUCTION
The question of finality and purpose in the cosmos and in living beings is not new. Indeed, it has been faced by several authors from different perspectives in the course of history, including Plato, Aristotle, Augustine, Thomas Aquinas, Gottfried Leibniz, John Ray, Voltaire, William Paley, and many others (Ayala, 2007a). In recent years, a new controversy has emerged about this topic in certain scientific and philosophical circles of Anglo-Saxon culture around the so-called theory of intelligent design (ID). This proposal burst on the scene in 1991 under the leadership of Phillip Johnson, a Christian lawyer at the University of California at Berkeley, whose book Darwin on Trial first laid out the ID position (Collins, 2006). Some of its roots can be traced to earlier scientific arguments pointing out the statistical improbability of the origin of life. However, ID places its major focus on perceived failings of evolutionary theory to account for life's subsequent stunning complexity (Collins et al., 2006). Under this approach, the great complexity of natural beings, and especially of living ones, would be inexplicable in terms of a gradual process, such as that proposed by Darwinism (Ayala, 2007b). Moreover, proponents of ID categorically maintain that the scientific analysis of nature leads them to conclude the existence of a design or plan, and therefore of a designer (Johnson, 1995). As expected, in a cultural environment sharply polarized in relation to these issues, the theory of ID and its defenders have been intensely criticized by those who have seen it as a reissue of the infamous "scientific creationism". According to these detractors, ID is little more than an effort to dress anachronistic attitudes and religious beliefs in the prestigious cloth of science (Hull and Ruse, 1997; Dawkins, 1985).
The discussion around the ID theory has acquired attention beyond the academic field, becoming in some communities a subject of public discussion, especially with regard to its teaching in educational institutions as a reasonable alternative to the theory of evolution by natural selection (Ruse, 1982; Gooday et al., 2008). This situation has significantly hampered a measured and balanced analysis of the ID theory. Serious debate has been focused almost exclusively on the cases cited as examples of design, which according to some are better explained by chance, or, according to others, by laws not yet well described (Dawkins, 1995; Dawkins, 2006). While such discussions are of undoubted importance and interest, we believe that there still remains a need for a deeper consideration of the epistemological status and scientific validity of this theoretical construct. In our opinion, a good strategy to proceed in that direction is to examine the work of the authors considered the leaders of ID. The reader should keep in mind that the objective of this work is to present the key conceptual elements and the epistemological status of the ID theory. Hence, we leave the analysis of these proposals, and the responses and counterarguments of the proponents of alternative theories, for future work.

MICHAEL BEHE AND THE CONCEPT OF "IRREDUCIBLE COMPLEXITY"
As Cornish-Bowden and Cárdenas have pointed out, Michael Behe is the only leader of ID who has an academic degree and publications in the field of biological sciences. His book "Darwin's Black Box" is one of the main factors explaining the great diffusion that this theory has reached among the general public in the USA (Cornish-Bowden and Cárdenas, 2007). In effect, the polemic tone and explicit attacks against the theory of evolution by natural selection contained in the text have made Behe the visible face of the ID theory. The key concept that underlies this author's objections to the theory of evolution by natural selection is that of "irreducible complexity", a notion that Behe has not rigorously developed: "An irreducibly complex system", according to the author, "is one that requires several closely matched parts in order to function and where removal of one of the components effectively causes the system to cease functioning" (Behe, 1998). In the light of this characterization and the several examples that Behe provides in his texts and articles, we could define irreducible complexity as a property of those systems whose functions are strictly dependent on their structural integrity.
Based on the aforementioned concept, Behe has argued that irreducibly complex systems, such as the cilium, the flagellum, the coagulation cascade and some aspects of the mammalian immune system, among others, could not have arisen according to a gradualist evolutionary model, because it is an all-or-nothing type of problem (Behe, 2003). In his own words: "Closely matched, irreducibly complex systems are huge stumbling blocks for Darwinian evolution because they cannot be put together directly by improving a given function over many steps, as Darwinian gradualism would have it, where the function works by the same mechanism as the completed structure. The only possible recourse to a gradualist is to speculate that an irreducibly complex system might have come together through an indirect route (…) However, the more complex a system, the more difficult it becomes to envision such indirect scenarios, and the more examples of irreducible complexity we meet, the less and less persuasive such indirect scenarios become" (Behe, 1998). In other passages Behe has affirmed that not all biological systems are designed. Concluding design, then, requires the identification of the molecular components of a system and the roles that they play in it, as well as a determination that the system is not itself a composite of systems (Behe, 2007).
Even though this mechanistic approach has reached broad dissemination in the academic community, it is not shared by all the defenders of the ID theory, and has been the target of many objections. In fact, proponents of the theory of evolution by natural selection and other evolutionary models have argued that sooner or later the alleged irreducibility of such systems will indeed be reduced by the advance of science, which will provide new and more reasonable explanations than the hypothesis of design (Cornish-Bowden, 2004). Following this strategy, several prominent scientists have developed alternative explanations to account for the origin and evolution of the biological entities that Behe characterizes as irreducibly complex (Doolittle and Zhaxybayeva, 2007). For example, Francis Collins, a physician, scientist and leader of the "Human Genome Project", has argued that gene duplication may well explain some features of structures such as the clotting system of homeothermic organisms (Collins, 2006). Others have attacked one of Behe's favorite examples, the bacterial flagellum, arguing that such a structure is only the variation of a system whose primary function is not associated with displacement across space, but rather with attack and cellular detoxification (Miller, 2004).
Taking on these and several other objections, Behe insists that the idea that certain biochemical systems have been designed by an intelligent agent does not rule out the importance and relevance of other factors. In the opinion of this author, the ID theory could perfectly well coexist with the theory of evolution by natural selection as long as the latter is applied to the field of microevolution. Furthermore, Behe has insisted on the possibility that designed biological systems could have undergone gradual changes over time, according to the principles of natural selection and mutation (Behe and Snoke, 2004). With this argument, Behe aims to answer the criticism of those who have argued that the ID theory does not give a reasonable interpretation of phenomena often found in living beings, such as vestigial organs and pseudogenes, for which evolutionary theories are an obvious explanation. According to Behe, many of these features are the result of the evolution of a primitive structure. The theory of evolution by natural selection could account for the variations that this structure experiences over time, while the ID theory explains the appearance of the "original model" (Behe, 2003).
WILLIAM DEMBSKI: THE CONCEPT OF "COMPLEXITY-SPECIFICATION" AND THE EXPLANATORY FILTER

William Dembski, mathematician and philosopher, has developed a probabilistic and quantitative approach to the inference of design, with a higher level of abstraction and formality than that displayed by Behe. According to Dembski, once confronted with an event, we must choose between three mutually exclusive and exhaustive modes of explanation: law, chance or design. This logical approach constitutes the habitual way by which we conclude, in everyday life, that something has been designed. "To attribute an event to a law is to say that the event will almost always happen given certain antecedent circumstances. To attribute an event to chance is to say that its occurrence is characterized by some (perhaps not fully specified) probability distribution according to which the event might equally well not have happened. To attribute an event to design is to say that it cannot plausibly be referred to either law or chance" (Dembski et al., 1998a). This ordinary procedure, continues Dembski, can be formulated as a scientific one, whose basic concepts are contingency, complexity and specification. According to Dembski, an event is contingent if it is one of several possibilities, or "if it is not the result of an automatic and non-intelligent process" (Dembski et al., 1998a). Hence, in order to establish that an object, event or structure is contingent, it must be shown that it is not the result of a natural law or an algorithm. However, establishing that the event is one of several possibilities, though necessary, is not enough to infer design, because contingency eliminates an explanation based on natural law, but not one based on chance. To eliminate this alternative mode of explanation, says Dembski, we need to introduce the notion of complexity, which he understands as improbability; in this way, to determine that something is complex enough to infer design is to say that it has a small probability of occurrence.
However, Dembski perceives here a difficulty: "Our intuition is that small probability events are so improbable that they cannot happen by chance. Yet we cannot deny that exceedingly improbable events happen by chance all the time. To resolve the paradox we need to introduce an extraprobabilistic notion, a notion I referred to as specification" (Dembski et al., 1998a). The author defines the concept of specification as a non-ad-hoc pattern that can be used to eliminate chance, which he opposes to the notion of fabrication, an ad-hoc pattern that cannot legitimately be used to eliminate chance.
An example that Dembski uses frequently to clarify the idea of specification is that of an archer who stands 50 meters from a large wall. Every time the archer shoots an arrow at the wall, he paints a target around the arrow, so that the arrow is squarely in the bull's eye. What can be concluded, asks Dembski, from this scenario? Obviously, we cannot conclude anything about the ability of the archer. He is matching a pattern, but an ad-hoc one. But suppose instead that the archer first paints a fixed target on the wall and then shoots at it. If he shoots one hundred arrows and each time he hits a perfect bull's eye, we can conclude, according to Dembski, that "here is a world-class archer". Thus, when the archer paints a fixed target on the wall and thereafter shoots at it, he specifies the event. When he repeatedly hits the target, we can attribute his success to his skill as an archer. But when the archer paints a target around his arrow, he fabricates the event, and his abilities as an archer remain an open question. Dembski has remarked, however, that even though in this example the independence of the pattern is the consequence of fixing it in advance, such prior fixation is not a universal requisite of specification, but only a feature of this particular example. In summary, the criterion of complexity-specification detects design, according to Dembski, by using the three concepts of contingency, complexity and specification. In this way, confronted with the explanation of an event, we must answer three questions: Is the event contingent? Is the event complex? Is the event specified? Based on this sequence, Dembski has proposed the "explanatory filter", a probabilistic algorithm of great popularity among the partisans of ID.
Figure 1 summarizes the explanatory filter, which consists of two types of nodes: initial and terminal nodes, represented by ovals, and decision nodes, illustrated by diamonds. The purpose is to explain an event (E), attributing it to law, chance or design. So, we start at the node named "start", and then we move to the first decision node, which asks us whether E is highly probable (HP). "To say that E is HP is to say that given certain antecedent circumstances, E will, for all practical purposes, always happen" (Dembski et al., 1998a). Thus, if E happens to be an HP event, we stop and attribute E to law; chance and design are automatically precluded. But suppose that E is not an HP event; then we must pass to the next decision node, labeled "intermediate probability" (IP). According to Dembski, IP events are those we can regularly expect to occur by chance in the ordinary circumstances of life. Thus, if our event E reaches the second decision node and is judged to be an IP event, we must stop and attribute E to chance. But if the event is neither an HP nor an IP event, we have to go to the third and final decision node. In this case, E is an event of small probability (SP). Our first intuition, according to Dembski, is that SP events do not happen by chance, but as we have already seen, very improbable events happen by chance all the time. For an event to pass the third decision node of the explanatory filter, it is therefore not enough to know that E has SP with respect to some arbitrary probability distribution. The crucial question now becomes whether E was specified (sp). If the event E was specified, we reach the node of design; if not, we pass to the terminal node labeled chance (Dembski, 1998b).
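The decision sequence just described is simple enough to sketch as code. The following Python fragment is our own illustration, not Dembski's: the function name and the HP cutoff are hypothetical placeholders, chosen only to make the three-node structure of the filter explicit; 10^-150 is the universal probability bound that Dembski cites after Borel.

```python
# Sketch of the explanatory filter as a decision procedure.
# HIGH_PROB is a hypothetical placeholder for the HP ("law") node;
# SMALL_PROB is Dembski's universal probability bound (after Borel).

HIGH_PROB = 0.999
SMALL_PROB = 1e-150

def explanatory_filter(prob: float, specified: bool) -> str:
    """Attribute an event E to law, chance or design."""
    if prob >= HIGH_PROB:
        # HP node: E practically always happens -> law
        return "law"
    if prob > SMALL_PROB:
        # IP node: E can regularly be expected to occur by chance
        return "chance"
    # SP node: only small-probability AND specified events yield design
    return "design" if specified else "chance"

print(explanatory_filter(1.0, False))     # law
print(explanatory_filter(0.5, False))     # chance
print(explanatory_filter(1e-200, True))   # design
print(explanatory_filter(1e-200, False))  # chance (unspecified)
```

Note how "design" is never reached directly: it is the terminal node left over once the law and chance branches have been exhausted, which is precisely the residual character of the inference discussed later in this article.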
After this brief description of the explanatory filter, some precisions have to be made. Dembski argues that the order of priority among competing modes of explanation in the algorithm has nothing to do with one explanation being preferable to another. In the opinion of the author, the explanatory priority is a case of Ockham's razor: "… when any one of the three modes of explanation fails adequately to explain an event, we move to the following mode of explanation at the next level of complication. Note that explanations that appeal to law are the simplest, for they admit no contingency, claiming things always happen that way. Explanations that appeal to chance add a level of complication, for they admit contingency, but one characterized by probability. Most complicated are those explanations that appeal to design, for they admit contingency, but not one characterized as probability" (Dembski et al., 1998a). In Dembski's opinion, the filter is robust in detecting design (or, what is the same, in avoiding false positives) for two reasons. The first is an inductive one: according to the author, in every instance where the explanatory filter attributes design and where the underlying causal history is known, it turns out that design is present. Dembski seems so convinced of the utility of his filter that he throws out a challenge: "I have yet to see a convincing application of the explanatory filter in which coincidences better explained by chance get attributed to design. I challenge anyone to exhibit a specified event of probability less than Borel's universal probability bound for which intelligent causation can be convincingly ruled out" (Dembski et al., 1998a). However, this inductive argument, continues Dembski, does not explain why the filter works. This fact brings us to the second argument of Dembski: the filter is a reliable criterion for detecting design because it can detect specified choice. According to this author, what characterizes intelligent causation is choice, because whenever an "intelligent cause" acts, it chooses from a range of competing possibilities. But saying that intelligent causation always entails discrimination between several choices is not enough. The next question is how to recognize its operation: "Not only do we need to observe that a choice has been made, but also we ourselves need to be able to specify that choice. (…) What is more, the competing possibilities that were ruled out must be live possibilities and sufficiently numerous so that specifying the possibility that was chosen cannot be attributed to chance. All the elements in this general scheme for recognizing intelligent causation (i.e., choosing, ruling out and specifying) find their counterpart in the explanatory filter. It follows that the filter formalizes what we have been doing when we recognize intelligent causes. The explanatory filter pinpoints what we need to be looking for when we detect design" (Dembski et al., 1998a).
Some critics have objected that while concluding design could be legitimate from the analysis of artifacts, in the biological realm, instead, we would be dealing with a kind of question for which our intellectual powers are not adequately equipped. According to this vision, in order to infer design from data provided by empirical science, we must thoroughly examine all possible natural causes at the nodes labeled "law" or "chance" in the explanatory filter. And since this is logically impossible, continue the opponents, design does not constitute a scientifically valid explanation, but an argument from ignorance and misconception, to be replaced by the new knowledge that science, by its steady progress, will give us (Ruse, 1988).

STEPHEN MEYER AND THE INFORMATION THEORY
The classic theory of information, as presented by Shannon, provides a quantitative approach to measuring the information that can be sent through a communication channel (Shannon, 1948). In this approach, the concept of information is understood in a mathematical and formal sense, which must not be confused with the ordinary use of the term, in which information is treated as synonymous with "meaning". In contrast to that common use of the notion, the theory of information defines information as a sequence of symbols, without distinguishing between functional arrangements of symbols and random sequences. Thus, Shannon's mathematical theory can measure the quantity of information present in a sequence of symbols, but cannot detect the status of that information; that is, it cannot discriminate between a meaningful and a meaningless series of symbols. In Dembski's terms, therefore, the theory of information can provide a measure of the complexity or improbability of a symbolic sequence, but says nothing about its specification.
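Shannon's point can be made concrete in a few lines of Python (our own illustration, not drawn from the sources cited): the entropy of a sequence depends only on the frequencies of its symbols, so a meaningful sentence and a scrambled permutation of the same letters carry exactly the same Shannon measure.

```python
from collections import Counter
from math import log2

def shannon_entropy(seq: str) -> float:
    """Average information per symbol, in bits (Shannon, 1948)."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in counts.values())

meaningful = "methinks it is like a weasel"
scrambled = "".join(sorted(meaningful))  # same symbols, meaning destroyed

# The two values are identical: the measure is blind to meaning,
# i.e. to what Dembski calls specification.
print(shannon_entropy(meaningful) == shannon_entropy(scrambled))
```

A sequence of identical symbols, by contrast, has zero entropy, since its composition leaves nothing uncertain.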
Stephen Meyer, philosopher and historian of science, is another leader of the ID theory. Like his colleagues, Meyer has dedicated great efforts to attacking evolutionary theories, especially the theory of evolution by natural selection (Meyer, 2004). Besides this aspect of his work, Meyer has also been responsible for adapting the classic theory of Shannon to the context of ID. For that purpose, Meyer has established symmetries between information and Dembski's criterion of complexity-specification. As we mentioned before, in Shannon's theory, information can designate either a specified or an unspecified complexity. According to Meyer, biological information is characterized by fulfilling both conditions, that is, complexity (understood as the number of components in an organism among a larger number of possibilities) and specification (understood as the precise arrangement of components in an organism necessary to perform a certain biological function), and can be indirectly measured by calculating the probability of occurrence of the system by chance and subjecting it to Dembski's explanatory filter (Behe et al., 2002).
The equivalence between Meyer's and Dembski's proposals is clear: "… Dembski has shown that design events leave a complexity and information-theoretic signature that allows us to detect intelligent design reliably. Specifically, when systems or artifacts have a high information content or (in his terminology) are both highly improbable and specified, intelligent design necessarily played a causal role in the origin of the system in question" (Dembski et al., 1998a). There is also an obvious resemblance between the notion of specified complexity and that of irreducible complexity. Despite these similarities, it has to be remarked that Meyer is much more modest and cautious in his conclusions than Behe and Dembski, whose apodictic and aggressive tone is evident throughout their texts. According to this author, the failure of many critics, and even of several members of the ID movement, to define the epistemological status that corresponds to this theory resides in a limited understanding of scientific activity and of the plurality of theoretical approaches that fall under this kind of knowledge. In Meyer's opinion, this diversity can be assigned to two broad categories, namely "nomological sciences" and "historical sciences". The main purpose of nomological or inductive science, in accordance with Meyer, is to increase our knowledge about the normal operation of nature through the discovery, classification and clarification of laws and natural properties. An inductive logic is typically used in these disciplines, and the explanations are based on descriptions or theories of general phenomena. The second category of science, instead, is characterized by seeking the reconstruction of past events from present facts or data by using what Meyer calls a "retrodictive" logic, because it is past causal events, and not law-like natural entities, that do the "primary explanatory work", and its verification proceeds indirectly, by comparing the explanatory power of rival theories (Behe et al., 2002).
In order to illustrate his argument, Meyer examines the theory of evolution by natural selection, showing its explicitly historical purpose (that the different species of living beings did not originate independently but have resulted, through successive changes, from one or very few ancestors), the retrodictive reasoning that characterizes its inferences (data from fossil records, comparative anatomy, embryology and biogeography are used as evidence to infer a pattern of past events), and an indirect means of verification, based on the explanatory power that the theory displays compared to alternative theories or models. Thus, in Meyer's view, the theory of evolution by natural selection shows all of the traits that characterize historical sciences. Furthermore, according to this author, these epistemological features are present in the ID theory as well: "At the very least it seems we can conclude that we have not yet encountered any good reason in principle to exclude design from science. Design seems to be just as scientific (or unscientific) as its naturalistic competitors when judged according to the methodological criteria examined above. (…) Perhaps, however, one just really does not want to call intelligent design a scientific theory. Perhaps one prefers the designation "quasi-scientific historical speculation with strong metaphysical overtones". Fine, call it what you will, provided the same appellation is applied to other forms of inquiry that have the same methodological and logical character and limitations" (Behe et al., 2002). Indeed, this methodological equivalence that Meyer has posited between ID and naturalistic theories, such as the theory of evolution by natural selection, has generated a major controversy in recent years, which, as noted above, has spread beyond academic circles.
BUT IS IT SCIENCE AFTER ALL?
In our view, there are various aspects of a theory that should be considered when performing a systematic analysis. Among them, we could mention its logical consistency (the proper relationship among its contents), its explanatory power (the capacity of a given proposal to plausibly explain a set of facts or events based on a small number of principles), and its epistemological status and validity. In this final section, we will focus on the third of these criteria.
The word "science" derives from the Latin "scientia", which in turn comes from the verb "scire", which means "to know". From an etymological perspective, then, science comes to be identified with knowledge. However, when a particular discipline or form of knowledge is branded as "scientific", we intend to distinguish it precisely from other forms of knowing, on the basis of "something" that makes it worthy of such an adjective. But the problem arises immediately when we try to explain what that "something" is. Indeed, the nature of science has become a battlefield between the various currents in the history of philosophy, because it is not, as some naively believe, a problem that emerged with modernity, but a question that has challenged philosophical reflection since its origins in ancient Greece (McKeon, 1941). According to the classical philosophical tradition, which we follow in this analysis, science is certain knowledge about causes, and as such deals with the universal and necessary (Maritain, 1995; Nelson, 1998). Science, however, is not a rigid mold, but an analogical concept whose materialization can take place in a variety of ways, a fact that the philosophy of science has inevitably faced (Simon, 1999; Serani, 2001; Vicuña and Serani, 2004). In our view, this analogical approach can protect the concept of science both from the arbitrariness and inflexibility that some positivist philosophies show and from the chaotic ambiguity of relativistic and anarchist currents of thought (Feyerabend, 1992). This notion is positioned as the norm that any discipline seeking to be taken as scientific has to meet, though not in a univocal and rigid way.
Using these philosophical guidelines, only briefly summarized here, we now have to consider the approach known as ID. To deal with this difficulty, we must begin by asking whether this theory, beyond what its followers and critics proclaim, can be taken as a scientific one. As Stephen Meyer has openly admitted, the main question that motivates the theoretical speculation of design is eminently historical. How did the bacterial flagellum originate? What happened in the Cambrian period, during which an increase in biodiversity without parallel at any other time in history seems to have taken place? Beyond the specific answers that the leaders of the ID theory have been developing to these interesting questions, it is clear that what is intended to be explained here is a constellation of singular, unique and past events, which in no case can be the object of a demonstrative, universal and necessary kind of knowledge. It can be understood, then, that a methodology like the one described cannot be regarded as scientific. The ID theory is, in fact, a historical interpretation that, based on its supposed effects in the present, infers the existence of causal events in the past.
As explained before, Stephen Meyer, in his philosophical analysis, has concluded that there is an epistemological equivalence between ID and evolutionary theories. However, we believe that there is a fundamental difference between the various forms of evolutionary theories that are postulated today in paleontology and biology and the theoretical approach of ID. In the evolutionary theories, the explanations are based on natural processes, by which we mean events and phenomena empirically verified in the present, whose causal role in the past is postulated as plausible. This is what the design theorists have pejoratively called "methodological naturalism", a methodological feature that in our opinion is fully acceptable given the epistemological structure of these disciplines. But something different happens with the ID theory. This theoretical construction does not provide anything that could be empirically verified in the present to justify taking it as a plausible model. Here we have a fundamentally different mode of explanation, one that uses such concepts as "final cause", "design", "plan" and "intelligent agent" to account for natural events.
The design theorists have admitted that their proposal is not methodologically naturalistic, with which we must agree. Our discrepancy, however, lies in the consequences that this has for the epistemological status of ID. Authors such as Behe and Dembski think that methodological naturalism is just an arbitrary criterion, and that transgressing it, rather than doing violence to the natural sciences, could mean a liberation for them. Meyer, instead, has correctly concluded that the central point of debate is not the equivalence of ID with other theories or models of the natural sciences, but with what he calls "historical sciences", within which the various evolutionary theories can be categorized. Even though we disagree with Meyer's response, we believe that his treatment of the problem has been more appropriate than that of Behe and Dembski. Indeed, the question at this point is not whether the ID theory can be regarded as scientific, but whether it can be taken as a valid historical interpretation, in the sense mentioned before. Because of the methodological transgressions and the hybrid lexicon that this construct exhibits, we must respond negatively.
What we intend to show here is that while the ID theory has a constructive aspect, the core of that aspect remains theoretically insufficient, and thus ends up being subsidiary to other theories and models, which, paradoxically, are precisely those that it seeks to refute. This is clearly reflected in the central concepts of this proposal. A designed biological system, Behe says, is an irreducibly complex one, i.e. a structure whose origin cannot be explained in Darwinian terms. Thus, only when the theory of evolution by natural selection cannot provide a plausible reconstruction of the origin of a biological system or an organism are we authorized by this author to infer a design or plan. The same happens with Dembski's explanatory filter, a decision algorithm in which the conclusion of design is nothing but the residue left after explanations based on law or chance have been discarded.

CONCLUSION
The configuration of the current debate between advocates of ID and proponents of neo-Darwinism has its origins in several cultural processes, some of them deeply tied to the very identity of western societies. We believe that our cultural environment could provide an excellent intellectual framework for an objective and thoughtful reflection on the problem of the history of organic life. In effect, we think that this analytical approach must not be restricted to the examination of the different forms of creationism, but should be extended to ID and to evolutionary theories and models as well. We have been working in that direction for some time, especially on the subject of the ID theory and the theory of evolution by natural selection, and we have come to appreciate the major scientific and philosophical challenge that this enterprise represents, as well as the oversimplifications that can often be found in the literature.
In this report, we have limited our efforts to presenting and summarizing for the non-specialist some of the concepts, proposals and arguments that fall under the ID theory. We think that this theory is a hybrid construct, in which the perspectives of historical disciplines have been combined with those of philosophy, without following the specific methodology of either of them, and with a vain attempt to formulate a scientific theory. Alternative theories that rival the current ones will be welcomed by the scientific community if, of course, the authors of those novel approaches respect the methodology of this epistemological level. The fact that scientists, historians, philosophers and theologians of the most diverse intellectual traditions have shown little acceptance of, and even little consideration for, ID after nearly two decades is, in our opinion, a symptom that something is wrong with this model (Coalition of Scientific Societies, 2008). However, many of the reasons for this rejection have not been as clear and formal as expected. It is at this point, precisely, that our work is intended to be a contribution.