On November 21st at 9pm Eastern, ISCID hosted a live moderated chat event with
William Dembski. This chat was part of a month-long reading discussion
held on Dembski's newest book, No Free Lunch. You can receive full access
to the discussion that ensued at the
ISCID Reading Discussion on No Free Lunch: Why Specified Complexity
Cannot Be Purchased Without Intelligence
Transcript from November 21, 2002, 9:00-10:15 PM Eastern
Copyright © by
International Society for Complexity, Information, and Design 2002.
Our guest speaker today is William A. Dembski. Dr. Dembski is associate
research professor in the conceptual foundations of science at Baylor
University and the executive director of the International Society for
Complexity, Information, and Design. Dr. Dembski previously taught at
Northwestern University, the University of Notre Dame, and the University
of Dallas. He has done postdoctoral work in mathematics at MIT, in physics
at the University of Chicago, and in computer science at Princeton University.
A graduate of the University of Illinois at Chicago where he earned
a B.A. in psychology, an M.S. in statistics, and a Ph.D. in philosophy,
he also received a doctorate in mathematics from the University of Chicago
in 1988 and a master of divinity degree from Princeton Theological Seminary
in 1996. He has held National Science Foundation graduate and postdoctoral fellowships.
Dr. Dembski has published articles in mathematics, philosophy, and theology
journals and is the author/editor of seven books. In The Design Inference:
Eliminating Chance Through Small Probabilities (Cambridge University
Press, 1998), he examines the design argument in a post-Darwinian context
and analyzes the connections linking chance, probability, and intelligent
causation. The sequel to The Design Inference, No Free Lunch: Why Specified
Complexity Cannot Be Purchased without Intelligence, is now available
from Rowman & Littlefield and critiques Darwinian and other naturalistic
accounts of evolution.
I am now going to hand the talk over to Dr. Dembski. Participants can
start sending in questions.
Hi everyone. I'm afraid I don't have any prepared remarks, so I'll leave
it to you to get the discussion going. If I had to say that NFL is about
one thing, it is about distinguishing real possibilities from mere conceptual
possibilities. Specified complexity is the key in this regard, and things
get dicey when specified complexity becomes an issue in biology.
Bill, intelligence is a concept that is hard to pin down. Could you
give us some criteria for describing intelligence? For example, is deliberation
required for intelligent causation? Can we conceive of intelligence without deliberation?
Good question. It seems to me that we have some basic intuitions about
intelligence. And deliberation certainly seems part of it in our ordinary understanding.
Yet when we try to make sense of intelligence in a scientifically rigorous
way, it seems that we have to do some operationalizing of the concept.
This happens in physics, for instance, when intuitive notions of energy
get replaced with more precise notions, but also notions that may not
seem so intuitively obvious.
For me, the key defining feature of intelligence is the ability to output
specified complexity. There is some precedent for this in the literature.
Also, there is a sense in which intelligence is a primitive notion.
Indeed, we can't even do science without it, and our understanding of
the world depends on the world's intelligibility.
But here it seems we're back to a more intuitive notion.
At any rate, I'm not sure that we do ourselves much good trying to
define intelligence too closely.
I don't mean this as an evasion, but all dictionaries are finite, and
all definitions must ultimately circle back on themselves.
I think the important thing is to resist a reductive account of intelligence
where intelligence becomes something crass like computations or emergent
physical processes. And regardless what view of intelligence one takes,
specified complexity still is well-defined and a basis for scientific inquiry.
Craig Venter and others are said to be attempting to pare down an organism
to a bare essential number of genes. They will then reproduce this,
I guess, to claim the creation of artificial life. Whatever the outcome,
the resulting organism will be irreducibly complex. What do you expect
the Darwinian argument will be for how this organism could "evolve"?
I'm not sure that Craig V's venture is going to provide evidence for
anything other than ID.
The scientists working for him presumably are going to be using all
their knowledge and expertise -- read intelligence -- to get ALife
up and running.
What's more, they will be drawing heavily from existing life forms.
I just tried calling up an email from a friend of mine, but unfortunately
my email program just crashed... In the email my friend notes that he
could do things in his lab that a few years ago would have won a Nobel
Prize. The point is that Craig V's group will be doing reverse engineering,
not creative engineering. And nothing like a Darwinian mechanism
is going to be employed in their research, at least not in any significant way.
Dr. Dembski, when you speak of intelligence required to create specified
complexity, and that what we find in, say, the genetic code, is specified
complexity, do you mean to say that that level of complexity...the complexity
necessary to account for the diversity we find in living systems...was
put in at the beginning by an intelligence?
When we find specified complexity, say, in the genetic code, we know
that an intelligence was involved.
But how much more do we know?
It seems the best we can do is see where the specified complexity first becomes evident.
But where specified complexity first becomes evident need not say anything
about where it was first inputted.
An example I give is of a computer program in which random numbers
are generated for a while and then suddenly sublime poetry gets outputted.
At what point was the specified complexity "put in" in Donald's
manner of speaking?
Well, it was put in in the writing of the program and not at the point
where the output of the program jumped from random gibberish to sublime poetry.
But without the program in hand, and if we didn't know we were dealing
with a program, we might just as well think that the specified complexity
was inputted at the point we saw the output jump from random gibberish to poetry.
So it seems the best we can do scientifically is trace informational
pathways (i.e., pathways of specified complexity) back as far as possible.
But at some point we shall always reach a discontinuity or boundary
condition at which we can't push the specified complexity further
back. The Big Bang would be the ultimate discontinuity/boundary condition
in this regard, but it may just be that in practice we can only trace
it back within the history of planet earth.
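To make the program example above concrete, here is a minimal Python sketch of the idea; the poem text, the cutoff point, and all names are illustrative choices, not anything from NFL:

```python
import random

# Pre-specified "sublime" output, hard-coded by the programmer.
POEM = "Shall I compare thee to a summer's day?"

def run(gibberish_lines=5):
    rng = random.Random(42)
    for _ in range(gibberish_lines):
        # To an observer without the source code, this phase looks random.
        print("".join(rng.choice("abcdefghijklmnopqrstuvwxyz ") for _ in range(40)))
    # The jump: specified complexity becomes *evident* here, but it was
    # *inputted* earlier, when POEM was written into the program.
    print(POEM)

run()
```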
Is it not true that "the ability to choose with
intent so as to produce function" is fundamental to any concept of intelligence?
I'd say the ability to choose is fundamental, but I think requiring
a function is too specific.
Certainly in our experience intelligences act on purposes. But it seems
that purposes can be without function.
Take beauty, for instance. An artist may create an object for no function
(at least not in the mechanical sense), but simply for the joy of it.
There would be a purpose here, but no function, at least not in the
sense in which the term is used in biology.
And what about a capricious intelligence that does things so that they
don't seem to have any purpose?
"Darwin's God," which is getting a lot of play these days, might
be such an intelligence.
I'm not saying that natural selection is an intelligence substitute.
But Darwin's God, when cited by people like Ken Miller, is the intelligence
that set up the system of nature in which Darwin's mechanism could
play itself out.
But that mechanism is crude and wasteful and violent.
What would such a God/intelligence's purposes be? I'll leave this question open.
I was wondering what type of causal theory you prefer: deterministic,
probabilistic? Is there one that is more conducive to a theory of
intelligent design? The reason I ask is that my current work is focusing
on ways to understand the role of mental causation as bringing together
and directing multiple independent chains of physical causes to work
towards the intent or goal of the organism as a whole.
I'm not sure any particular causal theory is going to be empirically distinguishable
from any other.
It seems that one can always get rid of probability and recover determinism
if one is willing to expand one's ontology sufficiently.
We are seeing this with the many worlds hypothesis in quantum mechanics.
This is a huge subject, and ultimately an exercise in metaphysics.
Just briefly, my own approach is to turn the problem on its head and
see the fundamental mode of causation as intelligent causation, and
the other modes of causation as byproducts of this. I hint at this
in NFL when I consider Ernest Vincent Wright's book Gadsby, and treat probabilities
of letter frequencies of English texts as
byproducts of the design of those texts.
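As an aside, Wright's Gadsby is a lipogram written entirely without the letter "e," so its letter frequencies are a statistical fact downstream of a design decision. A small Python sketch of that point (the quoted words are, reportedly, the novel's opening; the helper name is hypothetical):

```python
from collections import Counter

def letter_freqs(text):
    # Relative frequency of each letter, ignoring case and non-letters.
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(letters)
    return {c: counts[c] / len(letters) for c in sorted(counts)}

opening = "If youth, throughout all history, had had a champion to stand up for it"
print(letter_freqs(opening).get("e", 0.0))  # 0.0 -- by design, not by chance
```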
You don't grant that natural laws and/or the constraints of environment
supply the specificity needed to produce the specified complexity
in living things. At what point do you think the necessary information
is/was entered in biological systems?
Natural laws simply don't have the information in them to produce specified
complexity. Michael Polanyi and Hubert Yockey have been making such arguments
for some time. So the issue is the constraints in terms of boundary conditions,
initial conditions, and fundamental constants. And just where those constraints
are inputted is not clear. What is clear is when such constraints
embodying specified complexity first become evident. So the
focus needs to be on where we first see specified complexity and how
far we can trace it back.
Rob Koons seems to be taking the same approach in his latest work on
mental causation and his TPE model.
Back to Micah, that's not surprising about Rob. And Chris Langan is
also taking such an approach.
The problem is that if some form of mind or intelligence is not metaphysically
basic, one never recovers it later.
First of all, thanks for the answer to my first question. Here's my follow-up on
the Craig Venter issue. Suppose he's successful and some artificial life form
results. And further suppose that 150 years hence, evolutionary biologists of
the time discover this life form but know nothing of its causal history. What would
tell them it was the result of ID instead of natural selection?
It seems that if Darwinists did not know about the human ability to
reverse engineer life (I don't say create it!), then they would reflexively
assume natural selection did it.
Otherwise, they would have to be more careful.
It is an interesting point, though. If Darwinian explanation becomes
implausible because we know designers are on hand who could have
engineered the systems in question, then why isn't it implausible
if we don't know whether engineers were present?
Are not natural laws a form of Kolmogorov algorithmic compression? Compression
is possible because of a high degree of ordering rather than a large amount of information.
I think I've seen John Barrow take that approach to natural laws.
And Ernst Mach, with his 19th-century positivism, also took that approach,
seeing natural laws, and mathematics in particular, as a way of summarizing
(i.e., compressing) vast amounts of data.
But in the act of compression, let's not forget, a lot of information
is lost. So compression is not a way of generating novel information.
Natural laws just don't contain much information.
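The compression point can be illustrated with ordinary data compression as a rough, computable stand-in for Kolmogorov complexity (which is uncomputable in general); the sketch below is illustrative only:

```python
import os
import zlib

ordered = b"ab" * 5000        # a high degree of ordering
noise = os.urandom(10000)     # information-rich, incompressible noise

for label, data in [("ordered", ordered), ("random", noise)]:
    ratio = len(zlib.compress(data)) / len(data)
    # Ordered data compresses to a tiny fraction; noise barely shrinks.
    print(f"{label}: compressed to {ratio:.1%} of original size")
```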
You can't, for instance, reconstruct a Shakespearean sonnet from a
universal Turing machine. Universal Turing machines can be extremely
simple (Greg Chaitin has some particularly simple ones, and Steve Wolfram
has some simple ones in the form of cellular automata). But for UTMs
to do anything interesting, they need to be programmed.
Bill, what about the information being selected from the environment by a natural
algorithm versus an intelligent designer? Choice merely is selecting from available
options, whether it be intelligent or not. Algorithms do not have information
in them, but neither does ID.
It's an interesting question just how much information is in the environment
for a Darwinian mechanism to try to exploit.
The problem at this time seems too complex even to get a handle on.
That's where my appeals to conservation of information and "no
free lunch" come in -- even if the Darwinian mechanism is the
conduit for outputting specified complexity, it first had to be properly configured.
But there's the other question of whether the Darwinian mechanism is
indeed the conduit for specified complexity witnessed in biological
systems. Here is where irreducible complexity comes in (ch. 5 of NFL).
The only way around irreducible complexity is through indirect Darwinian
pathways, but as I've argued lately on ISCID, this breaks down for
lack of causal specificity.
Hi, Bill. Here's a question similar to one of Micah’s questions above.
Recently, I happened to hear somebody criticizing your work by saying that the
probabilistic approach to ID is “meaningless” given that gene expression,
protein folding and so on is deterministic, and thus that the formation of any
associated complex structure is of P=1. To what extent is NFL dealing with “absolute
probability” as opposed to subjective probability? Do you consider specification
subjective or objective?
On to Chris.
It seems that your critic is confusing a developmental mechanism with
an evolutionary mechanism.
Sure, bacterial flagella and other irreducibly complex objects arise
in well-defined ways.
They have well understood causal pathways by which they come about.
But that says nothing about how they originated, and that's the issue.
As for probabilities being subjective or objective, that's a big topic.
Let me just say that I try to be as objective as possible with probabilities,
limiting them to probabilities of events on the basis of chance hypotheses.
This form of probability is accepted across the board. The approach
to probabilities that I find myself increasingly in competition with
is a Bayesian/likelihood approach where probabilities get assigned
to hypotheses themselves. This need not be bad, but it's important
to understand that here we are dealing with degrees of belief attached
to hypotheses rather
than probabilities of events out in the world whose probabilities
are conditioned not ultimately by what we think but by the material
mechanisms operating in the world.
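The contrast Dembski draws here can be put in miniature with a coin-flipping example; the prior and the rival hypothesis below are illustrative assumptions, not anything from NFL:

```python
from math import comb

# Objective event probability under a chance hypothesis H ("fair coin"):
# any one specific sequence of 100 flips has probability 2^-100.
print(f"P(event | fair coin) = {0.5 ** 100:.3e}")

# Bayesian route: a probability for H itself, which requires a prior.
# Observed data: 60 heads in 100 flips; rival hypothesis: bias p = 0.7.
prior_fair = prior_biased = 0.5
lik_fair = comb(100, 60) * 0.5**60 * 0.5**40
lik_biased = comb(100, 60) * 0.7**60 * 0.3**40
post_fair = prior_fair * lik_fair / (prior_fair * lik_fair + prior_biased * lik_biased)
print(f"P(fair coin | data) = {post_fair:.3f}")  # depends on the chosen prior
```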
What, in your opinion, is the relationship between irreducible complexity and emergence?
I think it's important to understand that emergence is a transitive
notion in the sense that a verb is transitive if it connects a subject
to an object. To say that "X emerges" is incomplete in
the same way that "John eats" is incomplete. What is
John eating? Likewise "X emerges" needs to be completed
by "X emerges from Y".
So if X is an irreducibly complex system, sure X has emerged. But from
what has X emerged? And how did it emerge? I argue in NFL that intelligence
is indispensable in that process of emergence.
Note that this is not to preclude the activity of the Darwinian mechanism
and natural causes more generally.
It is, however, to say that they are incomplete.
By the way, because ID says that the Darwinian mechanism and natural
causes are incomplete, it's not saying that they are "wrong." ID
is not in contradiction to the materialistic science that has preceded
it but is rather a completion of it. It should be seen as assimilating rather
than cutting away the past.
So do processes such as symbiosis or collaborative behavior add to
this intelligence at some point?
I definitely believe that intelligence is enhanced through embodiment.
Thus when natural and intelligent causes work together, they synergize
in ways that neither left to itself could. Think of a Shakespearean
sonnet. Shakespeare can capture a mood with a phrase that has resonances
in the history of the English language
which would be lost in a realm of pure abstraction. Pure abstraction misses that resonance.
If I might return to the earlier point you made about tracing informational
pathways back as far as possible. How can we translate this concept
in to something that would be considered scientifically useful? It
does seem that there is scientific fruit to be plucked there, but
I'd be interested in how you see that.
There's plenty of precedent for tracing informational pathways.
The example I actually know best is the textual transmission of ancient texts.
We find many manuscripts of what we take to be the same basic text.
But there are also lots of variations introduced by scribal errors
as well as deliberate insertions by people who want to get their
views incorporated into what often are sacred texts. So there's historical
work to be done, using patterns of errors to trace history. This is going
on right now in reconstructing molecular phylogenies in biology. The
point is that this work could be done in the service
of ID. The devil, as usual, is in the details.
Assuming that intelligent causes and natural causes act together, will
it be possible to differentiate what each is responsible for in a
given system? And, perhaps a more important point, do we need to
be able to differentiate?
Hi Billy Bob.
It seems that some differentiation will be needed. Take the Shakespeare
example. We might say that natural causes here correspond to the English
language bequeathed to Shakespeare as well
as his upbringing and training. But then there is also that ineliminable
contribution by him. When we look at artifacts that have been marred
by natural causes or subverted by evil designs, we are again sifting/differentiating
original intelligent cause from subsequent causes. But we don't need
to focus purely on the differentiation. It seems that the more interesting
question will be the synergy. Right now,
however, because design is being so fiercely resisted, it seems that
the differentiation question is getting the primary attention.
Is there any base-level connection between replicative and specificative
probabilistic resources? If we regard the former as the source of
contingency, this amounts to asking if there is any fundamental relationship
between contingency and specification. Obviously, one part of the
answer is complexity; it arises from contingency and is selected
by specification. But is there anything in particular that you would
describe as relating the generative and selective aspects of specified
complexity, e.g. some form of utility or teleological attribute?
I.e., what is the goal of the specifying intelligence?
Hi Chris, I believe this will have to be the last question. You've asked a question that could have formed the basis for this whole
chat. It's clear that replicational and specificational resources are different.
Whereas replicational and specificational resources taken by themselves
are additive, taken together they are multiplicative (cf. the arrows
[replicational] and targets [specificational] example in ch. 2 of
NFL). On the replicational side we are dealing with the world -- replications
are things that the world is capable of giving us. On the specificational
side we are dealing with intelligences making cognitive identifications
of patterns. But things get more complicated still because the agents doing the
specifying are embodied forms. (We're interested in, as it were,
the number of lottery players in the universe who can be making specifications
with us.) Now contingency can arise both on the replicational and specificational
front -- natural occurrences in the world could be contingent; and
the specifications generated by intelligences are contingent.
Now the design inference is triggered when specifications match up
with events in the world; in other words, when the specificational
matches up with the replicational (I'm talking loosely here for lack
of time and space). The presumption, then, is that the events in the
world were themselves designed. Why? Because the best explanation
of the match up is that the teleology in coming up with the specification
was the same teleology that was behind the event in question in the world.
The mechanistic materialist wants to say that all the teleology in
the world is simply imposed there by us.
Specified complexity shows us that the teleology is not only in us
but also instantiated in the world.
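A back-of-envelope sketch of the multiplicative point above: with R replicational resources (opportunities for events to occur) and S specificational resources (patterns agents might specify), there are on the order of R × S chances for some event to match some specification. The numbers below are illustrative assumptions only:

```python
R = 10**20   # assumed replicational trials available to "the world"
S = 10**10   # assumed specifications agents might identify
p = 10**-45  # chance probability of one event matching one specification

# Union-bound style estimate: probabilistic resources multiply.
expected_matches = R * S * p
print(f"expected chance matches <= {expected_matches:.1e}")  # ~1e-15
```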
Let's leave it there. Thanks so much for coming out tonight!
ISCID would like to thank William Dembski for the wonderful discussion.
Thanks Dr. Dembski
Thanks, Bill! Great chat.