From November 15, 2002, 4:00-5:00 PM Eastern
© by International Society for Complexity, Information, and Design
Our guest speaker today is Stuart Kauffman.
Dr. Kauffman is a MacArthur Fellow and an external professor at the Santa
Fe Institute. Twenty-five
years ago, he developed the Kauffman models, which are random networks
exhibiting a kind of self-organization that he terms "order for
free." Dr. Kauffman is the founding general partner and chief
scientific officer of The Bios Group, a company that applies the science
to business management problems. He is the author of _The Origins of
Order_ and _Investigations_ and the coauthor (with George Johnson) of
_At Home in the Universe: The Search for the Laws of Self-Organization_.
I am now going to hand the talk over to Dr. Kauffman. Participants can
start sending in questions.
Hello All, fire away. Stu
Stuart, could you comment on Robert Wright's treatment of the social
complexity issue in his book, Non-Zero?
Sorry, I've not read Wright's book, so cannot comment. Stu
Dr. Kauffman, Can you explain the basic idea about how principles of
self-organization work? Thanks!
If one considers models of genetic regulatory networks, comprised of
random Boolean networks, these can exhibit astonishing order, or can
exhibit chaos. A general phase transition separates the regimes. In
the ordered regime one sees self-organization.
Dr. Kauffman, do you know of any computer simulations based upon your
research that we can download and run on our own machines?
Unfortunately, no. I've published extensively in my three books, and
the models are not too hard to program.
Actually, Jim Herriot at Bios Group has a model of Boolean networks.
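[Editorial note: as Kauffman says, these models are not hard to program. Below is a minimal sketch of a random N-K Boolean network in Python, assuming synchronous updates and a random truth table per node; the function names and parameter choices here are ours, not from Kauffman's published models.]

```python
import random

def random_boolean_network(n, k, seed=0):
    # Each node gets k randomly chosen input nodes and a random Boolean
    # function, stored as a truth table over the 2**k input patterns.
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    # Synchronous update: every node reads its inputs' current values.
    new = []
    for ins, table in zip(inputs, tables):
        idx = 0
        for i in ins:
            idx = (idx << 1) | state[i]
        new.append(table[idx])
    return tuple(new)

def attractor_length(n=12, k=2, seed=0):
    # Iterate from a random state until a state repeats; the gap between
    # the two visits is the attractor cycle length. With only 2**n
    # states, a repeat is guaranteed by the pigeonhole principle.
    inputs, tables = random_boolean_network(n, k, seed)
    rng = random.Random(seed + 1)
    state = tuple(rng.randint(0, 1) for _ in range(n))
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state, inputs, tables)
        t += 1
    return t - seen[state]
```

In Kauffman's ordered regime (K = 2), the observed cycles tend to be strikingly short relative to the 2**n possible states.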
Does entropy decrease in a self-organizing system?
The probable answer is no. In an open thermodynamic system, for example
a Boolean network, entropy is increased in the outside world even as
the system itself converges in state space and yields order.
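[Editorial note: the "convergence in state space" mentioned above can be seen in toy form. A deterministic update rule is just a function on states, and unless it is a bijection, iterating it can only shrink the set of occupied states. A sketch in Python, with all names ours:]

```python
import random

def occupied_state_counts(n_states=1024, steps=5, seed=0):
    # A random deterministic dynamics: each state maps to one successor.
    rng = random.Random(seed)
    successor = [rng.randrange(n_states) for _ in range(n_states)]
    # Start with every state occupied, then iterate the map.
    occupied = set(range(n_states))
    sizes = [len(occupied)]
    for _ in range(steps):
        occupied = {successor[s] for s in occupied}
        sizes.append(len(occupied))
    return sizes  # non-increasing: the system converges toward attractors
```

The shrinking count is the system's internal order; the entropy bookkeeping for the second law happens in the outside world that drives the dynamics.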
Freeman Dyson commented many years ago that a vexing theoretical problem
for origin of life research was the lack of a rigorous definition of
metabolism. Do you think that problem still pertains, and if so, how
is that question presently being addressed?
Interesting comment by Dyson. I don't think we have a "definition"
of metabolism, although we have the well studied examples of terrestrial
metabolisms. Harold Morowitz is trying to get the core of metabolism
to "go" without enzymes.
Hi Stu, I'm wondering what you think of the current state of self-organization
theory, the progress it's made since its inception, and where you see
it going in the future...in particular I'm wondering if you could shed
any light on what's happening in state of the art complex systems research
using self-organizational concepts.
One is finding increasing cases in which complex systems organize their
behavior in remarkable ways, from flocking bird agent based models to
self-organized criticality with its applications to the extinction events
of the biosphere. I think we are in the early stages of integrative
science and will find lots more examples.
Is anyone, to your knowledge, currently attempting to create autocatalytic
reaction sets using organic molecules to model possible abiogenesis?
Reza Ghadiri at Scripps is trying, although he is using small proteins
and protein catalytic networks.
Stuart, is there an outer limit on the level of complexity to which
biological systems move? If so, what might happen as that outer limit
is approached?
I have thought a bit about this. You'll find it in At Home in the Universe
and Origins of Order. It has to do with supracritical chemical behavior.
If a system has too great a diversity of chemicals, it should cause
novel reactions with new molecules that enter, wreaking havoc.
PS, I suspect that a natural selection argument tunes systems to the
subcritical regime, near the phase transition.
Do you see in the near future an experimental proof (not computer sim.)
that can generate complex(er) forms of life; in other words, can we
"construct" life from scratch?
I hope we can construct life from scratch. In Investigations, I propose
a tentative definition of an autonomous agent. Such an agent is self
reproducing and does a thermodynamic work cycle. It may be that I've
found a definition of life, and they should be constructable. Others
are trying to make minimal cells with various techniques.
Is the organization you are talking about best understood as "order"
- that is, highly repetitive and periodic - or is it something more like
"complexity" - that is, not repetitive and aperiodic, meaningful
in that it matches some external pattern? Thanks
I don't know. I think a core idea is convergence in state space of the
open thermodynamic system which yields a kind of homeostasis and order.
My Boolean nets do that, for example.
Hi Stuart. Would you comment on the role of the observer in complexity
studies? For example, in the oft-noted order seen in bird flight (individuals
with no knowledge of geometry form a V), from the point of view of one
of the birds not much interesting is happening. But to an outside observer,
the V emerges. Do you think that there is a relation between the observer
problem in quantum mechanics and the observer notion I've noted here?
I doubt such a relationship. The Boids simulation shows that three
simple rules give flocking and V behavior. The entire system is classical.
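[Editorial note: Reynolds's three boid rules fit in a few lines. A 2D sketch in Python; the weights and neighbor radius are our own arbitrary choices, not values from the original Boids work:]

```python
import math

def step_boids(pos, vel, radius=5.0, w_sep=0.05, w_ali=0.05, w_coh=0.005):
    # One synchronous update of Reynolds's three steering rules:
    # separation (move away from crowding neighbors), alignment (match
    # neighbors' average velocity), cohesion (drift toward their center).
    new_vel = []
    for i, (p, v) in enumerate(zip(pos, vel)):
        nbrs = [j for j in range(len(pos))
                if j != i and math.dist(p, pos[j]) < radius]
        if not nbrs:
            new_vel.append(v)
            continue
        cx = sum(pos[j][0] for j in nbrs) / len(nbrs)
        cy = sum(pos[j][1] for j in nbrs) / len(nbrs)
        ax = sum(vel[j][0] for j in nbrs) / len(nbrs)
        ay = sum(vel[j][1] for j in nbrs) / len(nbrs)
        sx = sum(p[0] - pos[j][0] for j in nbrs)
        sy = sum(p[1] - pos[j][1] for j in nbrs)
        new_vel.append((v[0] + w_sep * sx + w_ali * (ax - v[0]) + w_coh * (cx - p[0]),
                        v[1] + w_sep * sy + w_ali * (ay - v[1]) + w_coh * (cy - p[1])))
    new_pos = [(p[0] + v[0], p[1] + v[1]) for p, v in zip(pos, new_vel)]
    return new_pos, new_vel
```

No boid sees the flock; each reacts only to neighbors within `radius`, yet flock-level structure emerges for an outside observer.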
Dr. Kauffman, would you care to comment on William Dembski's Complex
Specified Information arguments, and the extent to which you think they
are applicable to actual biological systems?
Well, I debated William. I think the basic question he asks is perfectly
reasonable. How would we recognize a signal from space as non-noise
for example. But in the biological realm, I feel he has not made his
case. There are too many alternative explanations, based on Darwinian
selection, to get such complex specified information.
Dr. Kauffman, do you understand isolated self-organizational events
as in some sense generating new information, that is, as getting information
(specified information, that is, not mere Shannon Information) "for
free"? Or do you understand such events as in some way "collecting"
information already implicit in the surrounding environment, initial
conditions, and the like.
Glad you asked. I confess to being deeply confused about what biologists
mean when they use the term "information". It seems not to
be Shannon's sense, and that is the only clear definition. I actually
think what we need is a theory of organization, and badly lack it.
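[Editorial note: the "only clear definition" referred to here is Shannon's: information as reduction of uncertainty over a symbol distribution, with no reference to meaning. A sketch in Python, with the function name ours:]

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    # H = -sum p_i * log2(p_i) over the empirical symbol frequencies.
    # Note what is absent: nothing about what the symbols *mean*.
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

shannon_entropy("AAAA") is 0 bits and shannon_entropy("ABCD") is 2 bits: maximal surprise, but nothing resembling biological "meaning".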
Well to follow up on my earlier question, I certainly didn't mean to
imply that I thought there was any such thing as a "quantum goose",
only that it seems to me someone or something has to observe complexity.
It may not be observable from within the system.
Hm. In the boids simulation, each boid observes some of the other boids
and tries to steer an average course taken from them, but not get too
close. So an observer is critical to the formation of the flocking behavior.
Do you mean that in order to see the flock, you need an outside observer?
Stuart, regarding a theory of biological information, have you considered
costly signalling theory (handicap theory)?
Thanks. I don't know handicap theory, what is it?
I'm not sure I can ask this question cogently, but how much pre-existing
order must a system have to be capable of self-organization? Is there
a sort of "bootstrap" threshold?
There are examples of bootstrap thresholds. For example, Doyne Farmer,
Norm Packard and I published papers on the possible origin of self reproducing
molecular systems where a critical diversity of molecular species was
necessary for a phase transition to form a collectively autocatalytic
set. But such thresholds may not at all be general.
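[Editorial note: the arithmetic behind the critical-diversity argument is easy to sketch. Counting binary-string "molecules" up to some maximum length and the ligation reactions among them, reactions outnumber molecules by a factor that grows with length, so at any fixed catalysis probability the expected number of catalyzed reactions per molecule eventually exceeds 1. A toy version in Python; the model choices are ours, not the published Farmer-Packard-Kauffman construction:]

```python
def counts(max_len):
    # Molecules: all binary strings of length 1..max_len.
    molecules = 2 ** (max_len + 1) - 2
    # Reactions: each string of length n >= 2 can be formed by ligation
    # at any of its n - 1 internal bonds.
    reactions = sum((n - 1) * 2 ** n for n in range(2, max_len + 1))
    return molecules, reactions

def supracritical_length(p_catalysis):
    # Smallest max length at which the expected number of catalyzed
    # reactions per molecule exceeds 1 (a crude percolation criterion).
    max_len = 2
    while True:
        m, r = counts(max_len)
        if p_catalysis * r / m > 1.0:
            return max_len
        max_len += 1
```

The lower the per-molecule catalysis probability, the greater the molecular diversity needed before the catalyzed-reaction graph can percolate.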
Handicap theory is a theory of biological communication in which information
sent by a cell, organism, etc., must be costly to the sender in order
to be reliable to the recipient.
PS, insofar as one is studying the collective behavior of systems with
many interacting parts, it may be that a minimum number of parts is
needed for the given collective behavior. For example, how many particles
does it take to have "temperature"?
Thanks re handicap theory.
Dr. Kauffman, does your definition of information require the presence
of an intelligent agent? That is, in a universe without people or higher
life forms, would information exist?
Well, the answer in Shannon is a hidden "no", for the intelligence,
as I understand it, is buried in the decoder. So the only place where
information is well defined seems to sneak in an intelligent agent somewhere.
It would seem that it wouldn't make much of a difference where the "information"
was in some abstract sense, but whether the increase in complexity (however
measured) was occurring as a natural process. Your comment, Stuart?
You may be right. One of the deep puzzles is why the universe has become
complex. Why has the biosphere become complex? Why has the number of
ways of earning a living increased so dramatically? We have no theory
about this overwhelming feature of our universe. I propose in Investigations
that biospheres, on average, increase the diversity of "what can
happen next", their "adjacent possible", as fast as they
can without destroying the order already achieved. At least it is a
possible start in this direction.
Stu, if one were interested in learning more about your ideas on the
web, is there a link (or two) that you would point us to?
The easiest ways include the BiosGroup web site, and my three books.
Back to bootstrap threshold -- I'm thinking of cellular automata, for
example, where each element must have a minimum level of complexity
in order to get anything going. So my general question is what do we
know about the simplest systems capable of self-organization?
We know little about the simplest systems. Chris Langton made a very
simple cellular automaton that was capable of a kind of self-replication
- much simpler than the von Neumann machine, which had 29 states per
cell. Remember, we are often asking for parts with simple behavior to
exhibit collective modes that are not readily predictable from the parts
themselves. What is the minimum number? No clear answer that I know of.
I'm new on this chatroom system and didn't realize that hitting return
amounts to sending a message. Sorry--I agree that Shannon's definition
of information is inadequate for many purposes. But I think the fact
that it requires an intelligent agent may be connected to the need for
an observer in the Copenhagen interpretation of quantum mechanics. Have
you any comment on that idea?
No clear comments, but it is an interesting idea. If one could prove
that one needed an observer for the emergence of classical behaviors
from classical systems, one could try to draw a parallel to the emergence
of classicity in the Copenhagen interpretation. But don't count on it.
The best interpretation of classicity right now is decoherence.
It may be useful to return to the issue of Robert Wright's book, raised
by the earlier questioner. For Stuart's info, Wright argued that there
was a trajectory to life and eventually human culture, where "zero-sumness"
(where cooperation rather than pure competition gets by better) eventually
proliferates. Any comment on the viability of that notion of a ratchet?
Well, John Maynard Smith and Eörs Szathmáry, in their book on the major
transitions in life, point to many examples where a mutualism becomes
a symbiotic new entity upon which selection can act. Think of the formation
of eukaryotic cells. These are ratchet events that capitalize on mutualisms.
Dr. Kauffman, thanks for being here. Would you comment on Wolfram's
book? Has he really proposed 'a new kind of science' or is it just hype?
And what do you predict its impact on 'origins of information' questions
will be?
Downward: You mean "NON-zero sumness?"
Yes ... my mind is faster than my fingers ... non-zero-sumness
You know, I've only read a few pages of Steve's new book. He called
me to talk about it and I'll see him in Reno next week. My impression
from talking to others is that many people have made the point that agent
based models can reveal novel phenomena, but these are not readily captured
by equations. Wolfram may be the first to state that this is a new kind
of science, but not the first to note the phenomenon. I understand his
book is full of "I seriously suspect" without proofs. So caution.
Dr. Kauffman, do you have anything like a weblog or website where you
post news as to your current research or thoughts?
Sorry, outside of BiosGroup's website, no.
ID theorists tend to view the sequence space of life as creating isolated
islands of functionality that usually can't be travelled to via small
steps. Darwinians tend to think of them as connected. What kind of networks
tend to be connected rather than isolated? (What general features or
metrics do they need?)
Peter Schuster, at the University of Vienna, has done wonderful work on this
subject. He made models of RNA sequences that folded, categorized each
RNA by the kind of fold it made and showed 1) a power law distribution
of the number of sequences that fold into each shape, 2) that the common
shapes each form a percolating connected web across sequence space all
of whose members are neutral mutants of one another such that one can
traverse the entire space stepping only on the same shape. Further he
and his colleagues showed that all the common shapes formed such webs,
and came very near one another.
This is perhaps a stretch, but do you suppose self-organization theory
could shed any light on a phenomenon such as the origin of language?
Liane Gabora is trying to achieve just that step with the idea of percolating
webs of associations that gel and give rise to complex language.
For those interested in research on complexity and autocatalytic set
theory, are there any particular papers, books, or websites that you'd
recommend for those who wish to learn more?
In addition to my three books, there are all the papers of the Santa
Fe Institute, plus the work of Doron Lancet at the Weizmann Institute
in Israel on autocatalytic sets, plus possible new work by Reza Ghadiri.
Darwinian adaptive spaces aren't quite like a fixed landscape, to be
traversed like climbing a mountain ... the process of natural development,
adaptation, exaptation, competition, symbiosis, etc and so on generates
a shifting landscape. In that sense some of Dawkins' analogies have
given readers a slightly muddied impression of what is going on. Would
you agree, Stuart?
With respect to Dawkins, it is fair to say that coevolution or abiotic
alterations in the environment cause fitness landscapes to alter. But
a deeper question is whether organisms, their mutational and recombinational
search mechanisms, and "niches" coevolve together such that
only those niches well searched by the search mechanisms flourish to
form winning ways of making a living. The biosphere self-constructs
all three things. Do you all know the "no free lunch" theorem?
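[Editorial note: the "no free lunch" theorem says that, averaged over all possible fitness functions, every non-repeating search strategy performs identically; search only pays off when the landscape has exploitable structure. A brute-force check on a tiny space in Python, entirely our own construction:]

```python
from itertools import product

def best_after(order, f, k):
    # Best fitness found after sampling the first k points of a fixed
    # search order (a deterministic, non-repeating "algorithm").
    return max(f[x] for x in order[:k])

def nfl_check(n_points=3, n_values=2, k=2):
    # Enumerate every fitness function f: points -> values, and compare
    # two different search orders. Summed over all functions, their
    # distributions of best-found values are identical.
    points = list(range(n_points))
    order_a = points                   # "scan left to right"
    order_b = list(reversed(points))   # "scan right to left"
    results_a, results_b = [], []
    for values in product(range(n_values), repeat=n_points):
        f = dict(zip(points, values))
        results_a.append(best_after(order_a, f, k))
        results_b.append(best_after(order_b, f, k))
    return sorted(results_a) == sorted(results_b)
```

This is why the coevolution of search mechanisms with the niches they search matters: real biospheres are not averaging over all possible landscapes.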
Thanks for the response re: Wolfram. You've noted in this discussion
that natural selection 'tunes' existing systems and that there is indeed
an initial bootstrapping problem to get there. If I understand you correctly,
this seems to corroborate the key issue that design theorists are raising.
For a young "tired of Darwinian dogmatism/ID seems pretty cool"
guy like myself, do you think that ID is not only a 'perfectly reasonable'
question, but maybe even one that warrants serious intellectual attention,
resources [not nec. $$], and active investigation? Specifically, is
there a reason not to give design some time? [please pardon any typos]
I think the design question is legitimate. I just worry about the methodologies,
and hidden reference to a creator.
Do you think that simulation on "traditional" computers hinders
us from getting results that "living cell" computing achieves
via massively parallel computing?
Re living cells, I don't know. I do think that cells are doing something
that computers are not doing, namely doing work constructing constraints
on the release of energy, where those very constraints on the release
of energy in turn create work. So constraints are needed for work and
work is needed for constraints, and that's both true, as best I can tell,
and not in the physics books. That's why we need a theory of organization.
But that does not yet answer your question about whether cells can carry
out parallel computation that we cannot on traditional computers. One piece,
of course, is that cells build spatially heterogeneous structures missing
in traditional computers.
What, briefly, is the notion of "decoherence" in classical physics?
Or can you give a reference to the concept.
Decoherence has to do with the loss of phase information in a propagating
wave function due to interaction with the complex environment. Once
the loss has happened, the information cannot be restored, hence quantum
interference cannot be observed.
Complexity theory, being a theoretical language, has a syntax. For complexity
theory to apply, this syntax (or distributed set of structural and dynamical
rules) must be present and active. Any idea where this syntax is located?
For example, do you envision something like the array of a cellular
automaton, with PDP support in the "spatiotemporal background"?
If so, then where did this array and/or background come from?
Don't know what the syntax is. Your "where did this array come from" is
part of the former question about why the universe manages to become
complex. We just don't have a theory. In Investigations, I pose the
question of how Maxwell's demon, in a non-equilibrium setting, knows what
measurements to make to find a source of free energy with which to do
work. He doesn't, as far as I can tell. But in the biosphere, if an organism
finds a new source of free energy which is useful, natural selection
will graft it into the ongoing biosphere's evolution.
Decoherence is a concept associated with the Many Worlds interpretation
of quantum mechanics. Do you subscribe to this interpretation?
No, I don't like the many worlds interpretation.
would you agree that "design" can encompass the vastly complex,
natural interactions of molecular and chemical interaction?
I doubt it, although who really knows. I search for laws of self organization,
like the emergence of collectively autocatalytic sets and metabolisms
as phase transitions in critically complex systems, as an alternative.
But it is still just theory.
Stu are you planning a new book by any chance?
Roughly thinking of a book in economics challenging competitive general
equilibrium, which assumes, falsely I think, that we can prestate all
possible dated contingent goods at the outset.
Could a self-organizing system ever self-de-organize?
Sure, organisms die. We're near the end. I'd like to thank all of you
for provocative and, in many cases, novel questions. Hope this has been fun.
Well, it's about time to wrap things up. ISCID would like to thank Stuart
Kauffman for the thought provoking and stimulating discussion. If you
would like to continue chatting after the event, feel free to move over
into the General Discussion room.