Topic: William A. Dembski: Why Natural Selection Can't Design Anything
Member # 1
posted 29. November 2001 20:36
Why Natural Selection Can't Design Anything
by William A. Dembski
ABSTRACT—In the early 1970s Leslie Orgel argued that the key problem facing origin-of-life researchers was to explain the specified complexity inherent in the first living form. Thirty years later this remains the key problem facing origin of life research. Nonetheless, the biological community is convinced that the specified complexity of living forms is not a problem once replication is in place and the Darwinian mechanism has become operative. In this paper I argue not only that we have yet to explain specified complexity at the origin of life but also that the Darwinian mechanism fails to explain it for the subsequent history of life. To see that the Darwinian mechanism is incapable of generating specified complexity, it is helpful to consider the mathematical underpinnings of that mechanism, namely, evolutionary algorithms. Roughly speaking, an evolutionary algorithm is any well-defined mathematical procedure that generates contingency via some chance process and then sifts it via some law-like process. It is widely held that evolutionary algorithms provide a computational justification for the Darwinian mechanism of natural selection and random variation as the primary creative force in biology. Nonetheless, careful examination of evolutionary algorithms and the information with which they are programmed reveals that evolutionary algorithms, far from eliminating the specified complexity problem, merely push it deeper. Indeed, the recently proven No Free Lunch theorems show that any output of specified complexity from an evolutionary algorithm presupposes a prior input of specified complexity. And since all biological design invariably exhibits specified complexity, it follows that evolutionary algorithms (and the Darwinian mechanism in particular) are incapable of resolving the problem of biological design.
To read the entire paper, please click here
-29 November 2001
[ 05 May 2002, 14:57: Message edited by: Moderator ]
Member # 6
posted 10. December 2001 23:11
In this response I am simply going to argue that Dembski, although making an in-principle impossibility argument in his paper "Why Natural Selection Can't Design Anything", would benefit from a well-defined and relevant criterion of falsifiability regarding the nature of specified complexity. The reasons for this are twofold. First, it would provide scientists with a more feasible way to engage and challenge his ideas. Second, since Dembski's work is primarily conceptual, a body of research providing general confirmation of his ideas would only serve to strengthen his thesis. Ultimately, I believe that Dembski would serve himself well by opening up discussion on the nature of specified complexity and by eliciting challenges. Unfortunately, his paper gives the impression that any challenges are a "category mistake", and he leaves us wondering just how we should go about testing his ideas in the scientific world.
In his paper, "Why Natural Selection Can't Design Anything" Dembski provides us with the following in principle, impossibility argument:
Evolutionary algorithms...do not generate or create specified complexity but merely harness already existing specified complexity...[to] claim that natural laws, even radically new ones as Paul Davies suggests, can produce specified complexity is to commit a category mistake. It is to attribute to laws something they are intrinsically incapable of delivering.(16)
Essentially, Dembski's point is that natural laws are ontologically incapable of producing specified complexity. His reasoning behind this claim includes both (1) an inductive argument that all known specified complexity to date is associated with an intelligence (most often human intelligence) and (2) a displacement-problem argument deduced from the proven NFL theorems.
In making argument (1) above, Dembski is making an ontological claim about the nature of both specified complexity and intelligence: that some intelligences are capable of producing specified complexity and that all specified complexity is produced by an intelligence. In doing this, Dembski has established a potential criterion of falsifiability: find a case in which specified complexity is generated without recourse to an intelligence.
With his second argument (the displacement problem, above), Dembski makes the task of falsifying his claims regarding the nature of specified complexity much more difficult. He begins by asking "whether...evolutionary algorithms smuggle in any hidden teleology and thus merely rework preexisting specified complexity rather than generate it de novo."(8) The NFL theorems tell us that, averaged over all possible fitness functions, evolutionary algorithms are no better than blind search, and that to be effective they must be constrained by goal-oriented information. Therefore, if an evolutionary algorithm is able to solve a problem, it is because of the specified complexity that has been supplied by an intelligence (the programmer, in many cases) to guide the evolutionary algorithm toward a specific goal, normally through the use of a fitness function. Ultimately, he claims that "evolutionary algorithms [merely] displace the problem of generating specified complexity but do not solve it." (13)
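For reference, the core No Free Lunch result being appealed to can be written as Wolpert and Macready (1997) give it; this is my own paraphrase for context, not a quotation from Dembski's paper. Here $d_m^y$ is the sample of $m$ cost values an algorithm has generated and $f$ ranges over all possible fitness (cost) functions:

$$\sum_{f} P(d_m^y \mid f, m, a_1) \;=\; \sum_{f} P(d_m^y \mid f, m, a_2)$$

That is, averaged over every possible fitness function, the performance of any two algorithms $a_1$ and $a_2$ is identical, so no evolutionary algorithm can outperform blind search across the board.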
By introducing this second argument, Dembski has given us a shell game in which the goal is to track down the source of the specified complexity that constrains and directs the selection of points in the phase space. If the phase space proves too improbable for blind search to find the target, then the next step up in the hierarchy of search spaces is the informational context. This hierarchy can continue upwards indefinitely. Unfortunately, at each level we move up, finding the appropriate target in the search space generally becomes even more improbable. Thus an infinite regress, and thus the displacement problem.
So, where does the specified complexity that is needed by an evolutionary algorithm come from? Dembski of course argues that it comes from the only place he thinks it can possibly come from: an intelligence. But what if we are seeking to test Dembski's inductive claim that specified complexity can only be generated by an intelligence? How can we turn this claim into a testable hypothesis without committing what Dembski calls a "category mistake"? Likewise, how would we know for sure that we've come upon an algorithm or a robot in AI that is capable of generating specified complexity? What if we are having trouble tracing the information pathway of what seems to be de novo CSI to any pregiven set of information, either explicit or directive? In other words, if we were to redefine the Turing Test in terms of Dembskian concepts, what would convince Dembski that a non-living robot had produced the specified complexity we had once thought only living intelligences could produce?
Although he uses an impossibility argument, I think Dembski would do well to establish a criterion of falsifiability for his inductive claim regarding the nature of specified complexity. What test would convince him that his inductive argument has been falsified? What goal can he set for scientists to aim for when engaging his ideas? The Turing Test has given computer scientists a very clear and concrete target to overcome. Whether it is relevant is certainly up for debate. The point is that because Turing set his test up as a challenge to philosophers, computer scientists, and cognitive scientists, they have vigorously engaged his ideas.
What if the information trail ended at an unexpected place? Should we always insist that the path is there, that we just have to look a little harder? Or is there a certain event that would convince us, Dembski included, that his claims regarding specified complexity have been falsified?
Richard A. Johns
Member # 94
posted 05. February 2002 23:01
Dembski on Complexity: An apparent circularity
This isn’t so much a reply as a request for clarification. Also, to be honest, I should admit that my own paper Dynamical Complexity and Regularity is an attempt to answer the question that I here put to Dembski.
Dembski defines the complexity of an object as its Shannon information, i.e. as minus the logarithm of the probability of the object. My main request for clarification is this: What is meant by probability here? It appears to be the probability that the object will be “found” by some stochastic process, within the first m attempts. Moreover, this stochastic process seems to represent the dynamical evolution of a physical system. This understanding leads, however, to the following circularity.
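To make the definition concrete, here is a worked illustration of my own (the numbers are mine, not Dembski's): treat a particular nucleotide sequence of length 100 as the object, with each of the four bases equally likely at each site under the chance hypothesis. Then

$$I(E) \;=\; -\log_2 P(E), \qquad P(E) = 4^{-100} \;\Rightarrow\; I(E) = -\log_2\!\left(4^{-100}\right) = 200 \text{ bits}.$$

The question remains which stochastic process supplies the probability $P(E)$ in the first place.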
The conclusion of Dembski’s design argument, roughly speaking, is that life is very improbable. In other words, if you merely let physical matter do its own thing for a few billion years, then the probability of getting even something like a cockroach during that time is vanishingly small. Now, to say that life is highly improbable is equivalent to saying that it is complex, in the sense of Shannon information. Thus, it appears, the conclusion of the argument is that life is complex. Yet Dembski clearly takes the complexity of life as a premiss. Here is the apparent circularity.
I presume that Dembski’s argument is not such an obvious fallacy. There must be some variation in the meaning of “probability” here, or (more likely) a variation of the context in which the probabilities are defined. The stochastic process used to define complexity cannot be the same one that is referred to in his conclusion, that life is improbable. It would be helpful if Dembski were to explain more clearly what is going on here.
The need for clarification on this point is augmented by the fact that “complexity” is not always defined as Shannon information. Intuitively, we think of a complex object as heterogeneous, aperiodic, intricate, elaborate, patternless, and so on. Complexity, on this conception, is opposed to simplicity. A simple object is something like a crystal, which has a small number of basic parts, arranged in a way that is easily specified. The more symmetry, self-similarity, or repetition an object contains, the simpler (i.e. less complex) it is.
Dembski has not, as far as I am aware, argued that his sense of “complexity” coincides with the intuitive sense. Thus a Darwinist may simply deny that living organisms are “complex” in the sense of Dembski’s definition. He may admit that they are intricate, aperiodic and so on, and yet deny that they are physically improbable. After all, such structures as the Mandelbrot set are certainly aperiodic and intricate, yet they are generated by simple, deterministic rules. If such rules are allowed a small stochastic element, and trillions of iterations, then who is to say that they could not reliably produce something like a living organism? If the Darwinist can reasonably deny that life is complex, in the required sense, then Dembski’s argument doesn’t get off the ground.
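To make the Mandelbrot point concrete, here is a minimal sketch (illustrative only, not taken from my paper or from Dembski's) showing that the intricate, aperiodic boundary comes from nothing more than iterating z -> z*z + c; the grid resolution and iteration cap are arbitrary choices:

```python
# Minimal escape-time test for Mandelbrot set membership.
def in_mandelbrot(c, max_iter=100):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:       # escaped: c is outside the set
            return False
    return True              # did not escape within max_iter iterations

# Print a crude ASCII picture of the set over a coarse grid of the plane.
for row in range(20, -21, -2):
    line = ""
    for col in range(-40, 21):
        line += "#" if in_mandelbrot(complex(col / 20, row / 20)) else " "
    print(line)
```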
William A. Dembski
Member # 7
posted 07. February 2002 23:03
It's long overdue that I respond to Micah. And given Richard Johns's recently expressed concerns, it's appropriate that I clarify some things.
First off, regarding Micah's concern over falsification for specified complexity being a reliable indicator of design: one never finds that scientists who want to explain away specified complexity by appeal to some blind natural process are willing to admit that the item that exhibits specified complexity and needs to be explained is, under that blind process, improbable. Appeal to wild improbabilities for specified events simply does not wash. It does not wash in special sciences like archeology and SETI, and it does not wash when design detection techniques are applied in the natural sciences. In this regard, have a look at my piece on the ISCID archive titled "The Chance of the Gaps."
Specified complexity is a reliable marker of intelligence. What does this mean? It means that when complexity and specification are both present, then one can be confident that design is present as well. It is the same confidence that casinos have of winning in the long run (Keynes may be right that in the long run we're all dead, but in the long run casinos all come out ahead). The way to falsify the claim that complexity and specification reliably take us to design is not by falsifying the underlying logic leading from complexity and specification to design. The logic is (probabilistically) valid.
The law of small probability that I develop in _The Design Inference_ is well settled in practice even if Bayesian probabilists like Elliott Sober continue to quibble about it. You can see this by how scientists deal with specified complexity in practice (Bayesians in fact tend to lament that working scientists typically ignore their techniques -- I can provide references on request). When scientists want to avoid a design inference, they argue either against specification or against complexity (qua improbability). Michael Shermer, for instance, tends to attack specification. As a psychologist and a Darwinian, he sees humans as pattern-seeking animals who have been shaped by natural selection to attach special meaning to arbitrary patterns in nature. Most biologists, like Richard Dawkins, attack the complexity qua improbability side of things (hence Dawkins's book _Climbing Mount Improbable_), arguing that the Darwinian selection mechanism washes away a seemingly insurmountable improbability. No design critic leaves it at: it's specified, it's complex, indeed it's wildly improbable, that's okay, I'll invoke chance.
I'll turn to specification when I take up Richard Johns's concerns, but for the moment let's assume specification is not at issue in some phenomenon and that the phenomenon, if complex, would exhibit specified complexity. How could the design inference go wrong in this case? Well, we might be wrong about the relevant probability distributions that characterize the phenomenon. For instance, we think that circular craters on the moon are highly improbable (as circles they're specified), and so we, along with Johannes Kepler, conclude that there are moon dwellers who have been excavating these craters. Then we find out about meteors and their ability to form circular craters, and the design hypothesis gets discarded. It's in cases like this that the design inference can go wrong -- where we are mistaken about the relevant probability distributions that apply to the phenomenon in question.
But does that mean that we can't trust the design inference? The confidence we place in its conclusion depends on the confidence we have that we've accurately assessed the relevant complexities/improbabilities. Yes, we might get those wrong (welcome to the fallible world of science). And in some cases we may be unable to form any reasonable assessments of probability. But in other cases we can form such assessments and have great confidence that those assessments are substantially correct. The case where we can form such assessments with the greatest confidence is when the laws of nature are indifferent about one arrangement of a thing over another. The sequencing of DNA and polypeptides is a case in point. There is nothing in the laws of nature that privileges one sequence over another. There is pure indifference at the level of laws of nature. Ernest Nagel referred to this as "orthogonality," Jacques Monod as "gratuity," and Michael Polanyi as "boundary conditions that transcend the laws of physics and chemistry." What you have are informational structures, some of which are functionally privileged, but which are not at all privileged as far as the lower level laws are concerned. I deal with all this in _No Free Lunch_.
Let's now turn to specification and Richard Johns's concerns. Perhaps a little history will help. I came to work on design by noting a certain pattern of inference that I saw repeated over and over, one that invoked small probabilities and independently given patterns. Moreover, this inference in some cases was used not merely to preclude a given chance hypothesis but rather to sweep the field clear of all relevant chance hypotheses and therewith infer design. Thus the issue became not just to get rid of one chance hypothesis and then allow the design critic to propose another. Rather, to make a design inference work one must get rid of all relevant chance hypotheses. Thus one must preempt the design critic by overturning the chance hypotheses that he/she might want to introduce, showing that insofar as they are relevant, they also render the item of interest wildly improbable. Thus it's not a matter of starting with something that seems improbable with respect to one probability distribution, running an evolutionary algorithm (let's say), and then seeing that with regard to the evolutionary algorithm the probabilities really aren't that small after all. In that case one must show that the evolutionary algorithm fails to locate the item it's searching for except as a wildly improbable fluke or that the construction of the evolutionary algorithm was itself cooked up to include the very information that it's claiming to find (cf. Dawkins's METHINKS IT IS LIKE A WEASEL).
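Since Dawkins's METHINKS IT IS LIKE A WEASEL is the standard illustration here, a minimal sketch of that style of cumulative selection follows (my own reconstruction for discussion, not Dawkins's published code; the population size and mutation rate are arbitrary). Note where the prior information sits: the fitness function compares candidates character by character against the very phrase the search is supposed to find.

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def fitness(candidate):
    # The target phrase is written into the fitness function itself:
    # this is the prior information the search exploits.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else ch
                   for ch in candidate)

current = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while current != TARGET:
    generation += 1
    # Keep the parent among the candidates so fitness never decreases.
    candidates = [mutate(current) for _ in range(100)] + [current]
    current = max(candidates, key=fitness)

print(f"Reached the target in {generation} generations.")
```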
As Ira Katz noted today on the ARN Bulletin Board, I never used the phrase "specified complexity" in _The Design Inference_. Nevertheless, there was a long chapter in TDI on complexity (chapter 4 -- and nonprobabilistic complexity at that) and then a chapter on specification. Throughout TDI the focus was on "small probability" and "specification." Subsequently, in order to tie my work in with Shannon's notion of information and to connect it with "specified complexity" as the term was used by origins of life researchers like Leslie Orgel, I identified the "complexity" in "specified complexity" with the "small probability" in "specified small probability" (the term specification staying the same).
There's one unfortunate aspect of the term "specified complexity," and that's the fact that the specification in specified complexity itself presupposes a notion of complexity, and one that's nonprobabilistic. In fact, the notion of complexity that inheres in specification takes its inspiration from Kolmogorov, Chaitin, and Solomonoff. The intuition is this: For an event to be specified, in addition to being independently given (and therefore not cooked up by merely reading a pattern off an event), it must be relatively easy to identify it. It can't be an arbitrarily concocted pattern. If you will, a specification is an easily identified (and thus in some sense simple rather than complex) pattern that identifies an event, but the event identified is in turn complex in the sense of being improbable. It's this juxtaposition of simple pattern in one sense and complex pattern in a probabilistic sense that gives specified complexity its traction in identifying design. But simple in what sense? Simple in the sense that relative to a subject's background knowledge (the subject attempting to draw a design inference), the pattern is readily identified. This can be characterized complexity-theoretically. Moreover, it can be done objectively (though the objectivity here is epistemic in John Searle's sense rather than ontological). I do this at length in TDI, and I review it in chapter 2 of NFL.
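An illustration of my own of that juxtaposition (not an example drawn from TDI or NFL): the pattern "one hundred heads in a row" is trivially short to describe, yet the event it picks out is enormously improbable under a fair-coin chance hypothesis:

$$P(\text{100 heads in a row}) = 2^{-100} \approx 8 \times 10^{-31}, \qquad -\log_2 2^{-100} = 100 \text{ bits}.$$

The specification is simple in the descriptive sense; the event it delimits is complex in the probabilistic sense.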
There is a connection here with Richard Johns's work. His notion of dynamic complexity attempts to do within nature what I'm attempting to do with respect to a design-inferring agent's background knowledge. Johns is asking how difficult it is for nature, given its laws, to produce some item. I'm asking how difficult it is for a subject, given his/her/its background knowledge, to produce some specification. Johns in my view has the far more difficult task. There's a crucial point in his paper in which he tries to enumerate the number of objects in nature, and he gets something around 10^100 based on the duration of the universe and the number of elementary particles. The problem is that objects built out of elementary particles can be individuated in any number of ways and can far, far exceed 10^100 (I point this out in a footnote in chapter 6 of TDI -- p. 209, n. 15). On the other hand, conscious embodied agents inhere in a given set of elementary particles and cannot be individuated willy-nilly as objects. As a consequence, the specifications of which they are capable are much more tractable.
Member # 96
posted 25. February 2002 16:37
Two Comments on Dembski paper
Hi. I'm new to this, and hope it's not too presumptuous jumping in with a long post. I have two comments, the first of which is really a point of information, but the second of which is one that I consider to be rather more important, or at least, I would like to see an answer to it. I had started considering whether No Free Lunch Theorems might be relevant to the discussion of evolutionary algorithms at about the time I heard that Bill Dembski had written a book about them. I initially thought they were highly relevant, but the comment voices some worries I have about this line of argument.
(1) I am not convinced by the statement that training neural nets falls within the broad category of "evolutionary algorithms". Dembski states that evolutionary algorithms are mathematical procedures that generate contingency via chance processes, and then sift it by a law-like process. In my understanding of evolutionary algorithms, this is an iterative procedure, where a random number generator is invoked at each iteration, before the application of the sifting process. By contrast, the training of neural nets is normally a deterministic iterative process, where a suitable function optimisation algorithm is applied, based either on a gradient-descent procedure (such as conjugate gradients or a quasi-Newton algorithm), or on some heuristic update procedure, loosely inspired by how the brain is thought to work (as in the Kohonen "Self-Organizing Map"). The only point in such algorithms where randomness is invoked is at the outset, where a random number generator is used to specify the initial configuration of the model parameters, prior to the "training". But in this respect, the training of the neural net is no different from any other function optimization.
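To illustrate what I mean (a toy sketch of my own, not anything from Dembski's paper), here is a training loop for a linear model in which the random number generator is used only to make up the data and the initial weights; every subsequent update is a fixed, deterministic rule:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative only).
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

# The ONLY randomness in training: the initial parameter configuration.
w = rng.normal(scale=0.01, size=3)

# From here on, plain deterministic gradient descent on mean squared error.
learning_rate = 0.1
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the squared-error cost
    w -= learning_rate * grad               # deterministic update rule

print(w)   # approaches true_w; rerunning with the same seed gives identical output
```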
(2) Concerning the main argument about No Free Lunch Theorems, I would like to pose a "devil's advocate" question. As I understand it, the gist of the argument is that for an evolutionary algorithm to output specified complexity, it must receive an input of specified complexity, and hence can be seen only to be recycling existing complexity, rather than generating complexity "de novo". The input of specified complexity is in the design of the fitness function, which incorporates our prior knowledge of the problem (e.g. the topology of the search space).
While all this is true, I am not certain that this can be used as an argument against biological evolution. The problem that occurs to me is that the NFL theorems are based on an argument where the performance of the algorithm is averaged over _all possible items_ of information in the informational context (i.e. fitness functions, where I take it that, for example, the informational context might be the entire class of continuous and differentiable functions). This is of course a truly vast class of functions, but we have to ask the question "how many of these fitness functions in the entire class constitute interesting problems for solution?".
Here are two examples, which are not so much related to optimisation, but to which, I believe, the same principles of the NFL theorems apply.
(i) The first example would be the solution of large sets of simultaneous linear equations. As is well known, the scaling of a solution algorithm where all the coefficients in the matrix are non-zero goes as the cube of the number of equations. This severely limits the size of problem that can be solved. However, in the case of _sparse systems_ of equations, where, for example, each equation in a system of several thousand variables has only a few non-zero coefficients, the solution tends to scale linearly with the number of non-zeros, provided a suitably "smart" algorithm is supplied. There are a number of commercially available codes that will do this efficiently. But it is certainly the case that if we take a randomly generated sparsity pattern in the matrix, with say only 5% of the entries non-zero, then most commercial algorithms will fail to produce a reasonable solution, and the scaling will again be cubic. There is therefore no "general purpose" sparse linear solver, just as there is no "general purpose" evolutionary optimization algorithm. (Incidentally, the solution of a simultaneous set of linear equations can be regarded as an optimisation over a quadratic cost function, and in certain finite element solvers is formulated in this way.) But just because there is no general-purpose algorithm for sparse equation solving in the broadest sense does not mean that there do not exist solvers that can solve the vast majority of problems in which we are interested. This is because most of the classes of problem we are interested in solving (for example chemical plant simulation, or finite element calculations) have a well-defined structure, and it is therefore possible to come up with an algorithm to cope with most of these. But the class of sparsity patterns for the problems we want to solve is of course the tiniest fraction of the general class of sparsity patterns that could be produced using, say, a random number generator to determine whether each element was non-zero or not.
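As an illustration only (mine, not part of the original argument), a several-thousand-variable sparse system of the kind described is solved almost instantly by a routine that exploits the sparsity pattern; here I use SciPy's spsolve, and the matrix construction is arbitrary:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

n = 5000
rng = np.random.default_rng(0)

# Random sparse matrix with roughly 5 non-zeros per row, made safely
# non-singular by adding a strong diagonal (purely illustrative).
A = sparse.random(n, n, density=5 / n, format="csr", random_state=rng)
A = A + 10.0 * sparse.identity(n, format="csr")
b = rng.normal(size=n)

x = spsolve(A.tocsc(), b)        # a direct solver that exploits sparsity
print(np.allclose(A @ x, b))     # True: the residual is numerically zero
```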
(ii) The second example concerns data compression algorithms. An NFL-type theorem for data compression algorithms would state that, averaged over all possible files, all data compression algorithms perform the same (and won't on average compress the file). In order to make a data compression algorithm that works, we have to exploit our prior knowledge of the structure of the information in the file. Most files we have on the computer contain redundant information, such as text, where words and strings are repeated, or graphics, where often there are long repeated strings of similar bit patterns. We exploit that kind of information in designing the algorithm (such as the Lempel-Ziv algorithm, exploited in popular programs such as pkzip). For an audio file of music, we can design different algorithms that exploit the smoothly varying nature of the waveform (based on Linear Prediction models). Thus with a handful of algorithms, we can achieve efficient compression of over 90% of the files we find on a computer. However, given a file of random bits, such an algorithm will fail miserably. But such files are of no interest to us. The vast majority of possible files would not be compressed by our general-purpose algorithm, but so what? The ones that interest us are compressed very efficiently.
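A small demonstration of my own using Python's zlib (a DEFLATE compressor in the Lempel-Ziv family, in the spirit of the pkzip example above): a file full of repeated text shrinks dramatically, while a file of random bytes does not compress at all:

```python
import os
import zlib

structured = b"the quick brown fox jumps over the lazy dog. " * 400
random_bytes = os.urandom(len(structured))

# Repetitive text compresses to a tiny fraction of its original size.
print(len(structured), len(zlib.compress(structured)))
# Random bytes come out about the same size (or slightly larger).
print(len(random_bytes), len(zlib.compress(random_bytes)))
```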
The point I'm trying to draw out from this is that an evolutionist might well argue that we know an evolutionary algorithm can't solve an arbitrary problem, but that the problems involved in the arising of the apparent complexity of life are also a tiny and highly specific subset of the complete set of problems, which must be averaged over in order to form the No Free Lunch Theorem. In other words, it solves the problems we're interested in, even though it can't solve all possible problems.
I believe there are many other problems confronting evolution, notably the argument about irreducible complexity, so I don't think the above is a justification for evolution; but I think the question needs to be addressed, and that one should not place too high a reliance on the NFL Theorems to refute it.
However, I would add that I strongly agree with Dembski that it is extremely difficult to design a genetic algorithm. My own experience is that, unlike data compression algorithms and sparse matrix solvers, there is indeed no such thing as a "holy grail" of a genetic algorithm that can be used to solve a large variety of problems that you can throw at it. In general, the determination of the parameters of the algorithm (such as population size, mutation rate, and so forth) has to be painfully optimised over many different trials. It is clear that in nature we are stuck with one "algorithm" that worked first time; something that appears highly unlikely without prior intelligent input. Maybe there are lots of failed universes out there?!
[ 26 February 2002: Message edited by: Iain Strachan ]
David J. Sack
Member # 49
posted 26. March 2002 22:58
On page 13 of Dembski's paper he refers to the displacement problem, and on page 15 to one proposed solution, namely that the information needed to explain the specified complexity of life exists in the phase space instead of the fitness function. Thus, it is stated, the problem becomes one of necessity rather than contingency. He presents this as the approach of Kauffman, who has proposed self-organizing, autocatalytic properties of nature. Dembski then refutes this proposed solution to the displacement problem, stating that information exists in the constrained regularities, and that we must therefore ask where the information in these regularities came from. I have several questions about this situation:
1. I thought regularities had zero information content. Is this not correct?
2. While such regularities are specified, they are not complex, correct?
3. Why would Kauffman's proposed self-organizing scheme be considered a regularity anyway? If nature and the environment are molding objects to take on the attribute of specified complexity, then why wouldn't we refer to nature as intelligent instead? Isn't this what people like Michael Denton have basically concluded?
4. I think Dembski's point is still valid, namely that this "solution" of the displacement problem isn't really a solution. I wonder though if this isn't what Davies was referring to in his opening comments to the article. Having read other articles by Davies, it seems to me that he thinks that there is an intelligence residing in nature itself capable of making the decisions necessary to create life.
5. I expect in years to come, that this is where naturalists will refocus. In essence people like Dawkins will concede that the Blind Watchmaker (nature) is intelligent. (Dawkins has already conceded that the design is apparent, though he has been careful not to distinguish apparent design from intelligent design.) This will be a way for him to save face and claim that he was misunderstood, oh these many years. He will claim that just as Mt. Rushmore's faces were chipped away by a human hand, so also is the genetic code chipped away by mutation, natural selection, and ... well... an environment that mimics an intelligence.
6. The difficulty in this approach will be the ability to establish such a hypothesis experimentally. Most will wave their hands and say the problem is intractable (which violates the definition of a specification anyway...), but that no sane person could think otherwise - just as Behe's critics wave their hands and refer to gene duplication as the explanation for any instance of irreducible complexity. A few brave souls however will conduct experiments, just like Behe's critics regarding blood-clotting. My guess is that these new critics will either misread the literature (like Doolittle) or inject intelligence into their experiments.
Terence A-H Tan
Member # 234
posted 17. April 2002 10:57
This is my simple and brief understanding of some of the initial points raised in this forum.
Turing's test pertains to the nature of intelligence and the creation of it.
Darwin's test pertains to the nature of living biological entities and the creation of them.
From test to proven theory, neither has made the grade despite extensive efforts.
Dembski offers an explanation as to why he thinks it is a fruitless attempt and an impossibility for Darwinian mechanisms to arrive at specified complexity.
If Dembski were to offer a test, called the intelligent design test, that pertains to the nature of living biological entities (living specified complexities) and their creation, then he doesn't need to look far for evidence.
Even if he were to offer a test that pertains to the nature of non-living specified complexities and their creation, he too doesn't need to look far for evidence.
Every day, living specified complexities are generated by intelligent living specified complexities.
Every day, non-living specified complexities are generated by intelligent living specified complexities.
However, nowhere do we see objectively that specified complexities are being generated by non-intelligent agents or laws.
The ultimate test for Dembski is hence not whether intelligence can generate specified complexity, but who generated living specified complexities in this reality. This can only come from a collaboration of ideas and further research, which I believe is a much easier task than that faced by Darwinian scientists.
Douglas D. Rudy
Member # 12
posted 24. April 2002 06:41
A comment on Iain Strachan’s remarks about neural nets not falling within the broad category of evolutionary algorithms…I’ve wondered about that too.
A question I pose to myself whenever I see a claim that a truly Darwinian process can produce something new and useful in software is whether, in the experiment, actual object code is modified. In every case with which I’m familiar, what the experiment actually involves is varying parameters to existing software routines. By a common sense understanding of information, it seems fair to conclude that in such experiments, the information was all added up front.
When you ask the same question of neural nets implemented in software, the answer in every case I’m aware of is again that no actual object code is modified. Instead, the coding is done up front. And as Iain mentions, it isn’t all that easy to implement the nodes and their interactions in such a way as to do any useful work.
So on the question of tracing the origin of the information in such systems, it seems fair to include neural net training processes (distributed intelligence) with other more top-down evolutionary algorithms.
Member # 213
posted 24. July 2002 05:50
I have to take issue with one statement of William Dembski's, which is that Bayesians tend to complain that scientists do not use their methods. This may have been true 5 years ago, but not now.
In my field (population genetics/sequence evolution) there used to be some complaints voiced by Joseph Felsenstein that many scientists (cladists and those using parsimony methods) did not accept the use of genuine statistical methodology.
I think he had to stop complaining about this a few years ago - he won. He, as an ardent frequentist, is now more likely to complain (although much less forcefully - Bayesians are at least genuine statisticians) about the rise of the Bayesian heresy.
Bayesian methods allow scientists to incorporate all the information they have when making inferences, and fit in particularly well with Markov chain Monte Carlo techniques, which allow the exploration of complex biological models and have recently come to dominate statistical computation.
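To show the kind of machinery I mean, here is a minimal sketch of my own (not taken from any particular package) of a random-walk Metropolis sampler for a textbook problem: estimating an allele frequency p from k observed copies in n sampled chromosomes, under a flat prior. All the numbers and tuning choices are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Data: k copies of an allele observed in n sampled chromosomes (made up).
k, n = 37, 100

def log_posterior(p):
    # Flat prior on (0, 1); binomial likelihood, up to an additive constant.
    if not 0.0 < p < 1.0:
        return -np.inf
    return k * np.log(p) + (n - k) * np.log(1.0 - p)

# Random-walk Metropolis: propose a nearby value, accept with the usual ratio.
p_current = 0.5
samples = []
for _ in range(20000):
    p_proposed = p_current + rng.normal(scale=0.05)
    if np.log(rng.uniform()) < log_posterior(p_proposed) - log_posterior(p_current):
        p_current = p_proposed
    samples.append(p_current)

posterior = np.array(samples[2000:])   # discard burn-in
print(posterior.mean(), np.percentile(posterior, [2.5, 97.5]))
```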
At least 50% of new statistical methods in population genetics/sequence evolution are based on Bayesian principles, and the proportion is rising continuously. World class statisticians like Peter Donnelly have long ago switched from being frequentist to being Bayesian.
In fact, I'll go a bit further than that. I make a living analyzing biological data using appropriate statistical techniques, and although I have not been a Bayesian all that long, it is now the only methodology that I can really understand. Bayes rules!
[ 24 July 2002, 06:04: Message edited by: fish ]