Topic Closed
 » ISCID Forums   » General   » Brainstorms   » Simulating Self Assembly (Page 1)

 This topic is comprised of pages:  1  2  3  4  5
Author Topic: Simulating Self Assembly
warren_bergerson Member # 262 posted 02. February 2003 11:13

The purpose of this thread is to discuss the experimentally verifiable hypothesis that "biological development is a biological design process which can not occur without the presence of information generating 'intelligent design processes'". The hypothesized existence of scientifically measurable and verifiable intelligent design processes as part of biological development processes would appear to contradict most of the widely held beliefs in both biology and ID. The fact that the hypothesis is unconventional should not make it an inappropriate topic for discussion here.

To start the discussion, let us consider the following concepts and definitions.

ASSEMBLY AND SELF ASSEMBLY PROCESS - A series of steps, ‘state changes’, or operations which produces some type of defined transformation. An assembly process may involve an intelligent assembler, such as a human, who modifies or interprets the assembly instructions so that the desired end product is produced. The term self assembly refers to processes that operate over a period of time without interference by an external intelligent assembler.

ASSEMBLY INSTRUCTION - An assembly process is defined and modeled by a set of assembly instructions operating in a defined environment. An assembly instruction can be characterized as a ‘change of state operation’ or a causal relationship of the form ‘stimulus or trigger S causes response R’. In a mechanical assembly mechanism, for example, "‘temperature drops below X’ causes ‘turn on heating element’" would be an example of an assembly instruction.

DESCRIPTIVE VERSUS COMPLETE OR CAUSAL INSTRUCTIONS - A set of self-assembly instructions is complete or ‘causal’ if and only if the assembly instructions, applied in a defined environment, can actually produce the defined result. An incomplete set of instructions describing selected key steps is a set of descriptive instructions.
‘DYNAMIC AND TELEOLOGICAL’ VERSUS ‘FIXED’ INSTRUCTIONS - An assembly instruction is defined as ‘dynamic and teleological’ or ‘programmable’ if 1) the instruction can take many different forms, and 2) only a portion of the possible forms are compatible with the defined final assembly objective. The complexity, or ratio of possible forms to teleological forms, defines the amount of information needed to code an assembly instruction. An assembly instruction with a complexity of 1 is a fixed instruction. An instruction can be fixed with respect to a particular assembly process and/or fixed with respect to all assembly processes that could evolve. A permanent assembly process which does not evolve would typically be considered part of the environmental conditions.

INTELLIGENT DESIGN PROCESS - As defined here, an intelligent design process is a complex set of operations capable of 1) finding the teleological form of a causal relationship such as an assembly instruction, 2) adding to the set of possible instructions by adding to the lists of recognized inputs, stimuli or causes, 3) adding to the set of possible outputs, responses or effects (capabilities 1, 2, and 3 make it possible to produce creative teleological assembly instructions), 4) creating new instructions, and 5) increasing the processing capacity of the design process. It is possible to define abstract mathematical processes (logic machines) which have all five capabilities. Such logic machines are used here to model intelligent design processes in self-assembly processes.

Given the above definitions we can suggest at least four types of experimental design:
1. Evaluating the possibility of simulating complex assembly without dynamic and teleological instructions.
2. Evaluating the possibility of simulating complex assembly without dynamic and teleological instructions that take new teleological forms during the assembly process.
3. Evaluating the possibility of simulating complex assembly without intelligent design processes.
4. Evaluating the possibility of simulating the evolution of any of the identified assembly instructions using only natural selection and random mutation.

The hypothesis I proposed at the beginning predicts that 1) any clearly defined developmental process can be modeled and simulated using intelligent design processes, 2) the evolution of any defined developmental process can be modeled and simulated using intelligent design processes, and 3) there will be clearly defined developmental processes which it will not be possible to model without intelligent design processes.

This is an admittedly quick and dirty overview of the proposed hypothesis and experimental design. I will gladly entertain any questions or comments before proceeding.

IP: Logged
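Warren's definitions can be made concrete with a toy simulation. The sketch below is purely illustrative and not part of his proposal: the function and rule names are invented here, an "assembly instruction" is encoded as a stimulus/response rule pair, and a rule set counts as "causal" only if, applied in the defined environment, it actually reaches the defined result.

```python
# Toy sketch of the definitions above (all names are illustrative):
# an "assembly instruction" is a trigger -> response rule, and a rule
# set is "causal" if applying it actually produces the defined result.

def run_assembly(state, instructions, target, max_steps=100):
    """Apply stimulus->response rules until the target state is reached."""
    for _ in range(max_steps):
        if state == target:
            return True, state          # the rule set proved "causal"
        for trigger, respond in instructions:
            if trigger(state):
                state = respond(state)
                break
        else:
            return False, state         # no rule fired: merely "descriptive"
    return False, state

# Environment: a temperature that must be "assembled" up to 20 degrees,
# using Warren's example instruction "temp drops below X -> turn on heat".
rules = [
    (lambda s: s["temp"] < 20, lambda s: {**s, "temp": s["temp"] + 1}),
]
ok, final = run_assembly({"temp": 15}, rules, {"temp": 20})
print(ok, final)   # True {'temp': 20}
```

Dropping the rule from `rules` leaves an incomplete, merely descriptive instruction set: the run then returns `False` with the state unchanged.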
Evan Member # 164 posted 02. February 2003 13:14

Warren begins his post by writing "The purpose of this thread is to discuss the experimentally verifiable hypothesis that "biological development ..."", and he later writes, "Given the above definitions we can suggest at least four types of experimental design." He then lists the following:

quote:
1. Evaluating the possibility of simulating complex assembly without dynamic and teleological instructions.
2. Evaluating the possibility of simulating complex assembly without dynamic and teleological instructions that take new teleological forms during the assembly process.
3. Evaluating the possibility of simulating complex assembly without intelligent design processes.
4. Evaluating the possibility of simulating the evolution of any of the identified assembly instructions using only natural selection and random mutation.

I would like to note that all the items in the list involve 'evaluating the possibility of simulating' something. It seems to me this formulation points to a problem with most ideas for ID-related research: there is no actual experiment proposed. An experiment involves making a hypothesis about the real world, gathering data about the real world, and evaluating whether the data support the hypothesis. "Evaluating the possibility of simulating" something is not an experiment - it is a stretch, in my opinion, to even call it research. Actually simulating something can be a useful exercise, but even that is one step removed from actual research (as only real research can test whether the simulation was a valid model of reality). "Evaluating the possibility of simulating" something is two large steps removed from real research.

If Warren wants to claim that his ideas are "experimentally verifiable," then he needs to describe an experiment that might verify them. Abstract attempts to "evaluate the possibility of simulating" certain processes do not verify anything.

[ 02. February 2003, 13:16: Message edited by: Evan ]

IP: Logged
warren_bergerson Member # 262 posted 03. February 2003 08:11

Evan,

quote:
If Warren wants to claim that his ideas are "experimentally verifiable," then he needs to describe an experiment that might verify them

What I am describing is a very rigorous ‘standard’, ‘hypothesis testing’, scientific, experimental paradigm. You may not recognize this experimental paradigm because it comes from engineering and does not appear to be used widely in biology. The engineering or reverse-engineering paradigm is a technique for evaluating the validity of mathematical models or theories of the operation of complex causal processes or machines. In order to apply the paradigm you need:

1. A complex real-world phenomenon - in the application here, biological self assembly.
2. Mathematical techniques for precisely modeling the complex phenomenon - as outlined, we have techniques for modeling a) fixed assembly instructions, b) dynamic and teleological assembly instructions, c) intelligent design processes, and d) environmental conditions.
3. The ability to artificially simulate the process being analyzed and modeled. (There is extensive experience with simulating assembly or manufacturing processes both with and without the use of intelligent design processes.)
4. An objective, verifiable standard for evaluating and ranking the validity of models - in the example here, robustness of the simulation.

It will be noted that the experimental design being proposed is a multi-discipline approach, as it draws on substantial existing bodies of knowledge: a) biology, for knowledge of biological assembly processes, b) computer science, for techniques for modeling complex assembly, and c) manufacturing/engineering, for knowledge of assembly processes and the engineering paradigm.

The feature of the proposed approach that might be considered ‘new’ is the recognition of the importance in assembly processes of mechanical intelligent design processes to produce dynamic and teleological assembly instructions. To repeat, the ‘experimental design’ being discussed here is a standard, rigorous, scientific, engineering paradigm. It may not be readily recognized because the experimental design being proposed is based on techniques from several disciplines and because it involves techniques and concepts not routinely used in biology.

IP: Logged
warren_bergerson Member # 262 posted 03. February 2003 10:42

Given the nature of Evan's comments, it will probably be useful to review some of the basic concepts of the experimental paradigm being discussed.

ROBUSTNESS - As used here, an assembly model, and a simulation produced from that model, is robust if the assembly being studied is actually produced by the simulation. Rex and I had a difference of opinion on what constituted a causal model of an assembly process. Robustness provides an objective, verifiable, and rigorous criterion for differentiating between a descriptive and a causal model. The specific criterion used is that a model or theory is valid if the simulation is at least as robust as the real-world assembly process. Note that robustness can also be used to compare the relative strength or validity of different models and simulations.

ENVIRONMENTAL CONDITIONS - As is fairly obvious, robustness varies with environmental conditions. A relatively simple set of assembly instructions can be robust under very rigidly controlled conditions. In general, the more volatile the environment, the greater the need for dynamic and teleological assembly instructions. A simulation is robust, or completely robust, only if successful assembly is achieved under the same range of environmental conditions applicable to biological self assembly.

REDUCTIONISM - Our current knowledge of biological development processes and our current simulation technologies are inadequate to simulate the assembly of an entire organism. However, given current technology it is possible to isolate, model, and simulate relatively simple sub-components of a complex self assembly process. Based on the results obtained from analyzing sub-components, we can then use computer simulations to analyze the behavior of very complex assembly processes.

TELEOLOGY AND INTELLIGENT DESIGN PROCESSES - In biology, teleology and intelligent design processes seem to have taken on an almost mystical, or at least metaphysical, meaning. Although the terminology may be different, in complex computer modeling and the analysis of assembly processes, scientific teleology and scientific intelligent design processes are real working concepts. An assembly instruction is dynamic and teleological if it can be modified into a form which increases the likelihood of a successful assembly. An intelligent design process is a process which modifies an assembly process in order to increase the likelihood of a successful assembly.

Intelligent design processes can be extremely complex. There are, however, some relatively simple, well-known and highly effective intelligent design processes. The best known is the feedback loop which maintains equilibrium. An assembly instruction such as ‘turn on heat’ or ‘turn off heat’ can be continuously modified by a simple feedback loop that measures temperature.

A common problem in the analysis of biological design processes is that observers often incorrectly assume that complex information processing must involve complex physical mechanisms. If, it is incorrectly assumed, a physical mechanism is simple, then the volume of information processing must be small. This is particularly true when talking about a physical mechanism everybody is familiar with. One of the advantages of defined techniques for measuring volumes of information and information-generating capacity is having objective standards for evaluating ‘simple mechanisms’.

I hope this background material will be helpful in understanding the experimental design being discussed.

IP: Logged
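The thermostat feedback loop described above can be sketched in a few lines. This is a toy model, not anyone's proposed implementation: the set point, hysteresis band, and the simple heating/cooling rates are all assumptions made up for the illustration.

```python
# A minimal sketch of the feedback loop described above: the fixed
# instructions "turn on heat" / "turn off heat" are continuously
# selected by measuring temperature against a set point.

def thermostat_step(temp, heater_on, set_point=20.0, band=0.5):
    """Return the heater state chosen by the feedback loop."""
    if temp < set_point - band:
        return True          # "temperature drops below X" -> turn on heat
    if temp > set_point + band:
        return False         # overshoot -> turn off heat
    return heater_on         # inside the band: leave the state alone

def simulate(temp=15.0, steps=200):
    """Run the loop against an assumed toy plant: heating beats ambient loss."""
    heater = False
    for _ in range(steps):
        heater = thermostat_step(temp, heater)
        temp += 0.3 if heater else -0.1
    return temp

print(round(simulate(), 1))   # settles into a narrow band around 20
```

The point of the example is the one made in the post: the individual instructions are trivial, yet the loop as a whole keeps the system near equilibrium across a range of starting temperatures.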
Rex Kerr Member # 632 posted 03. February 2003 16:51

quote:
A set of self-assembly instructions is complete or ‘causal’ if and only if the assembly instructions, applied in a defined environment, can actually produce the defined result.

What do you mean by "defined result"? Green fluorescent protein is greenly fluorescent regardless of whether the second codon is GTA, GTC, GTG, or GTT, since they all code for valine. It is certainly not the case that you get GFP iff you have a single DNA sequence.

quote:
An assembly instruction is defined as ‘dynamic and teleological’ or ‘programmable’ if 1) the instruction can take many different forms, and 2) only a portion of the possible forms are compatible with the defined final assembly objective.

I'm not quite sure how one would rigorously define the "assembly objective". But, anyway, there are 64 possible codons, of which only 4 code for valine, so trivially protein translation is 'dynamic and teleological'. I'm not sure this is at all instructive, though. I was well aware of this before. Throwing 'teleological' in just makes me think of philosophers, and adds nothing to my understanding of the science or mechanism.

quote:
As defined here, an intelligent design process is a complex set of operations capable of 1) finding the teleological form of a causal relationship such as an assembly instruction, 2) adding to the set of possible instructions by adding to the lists of recognized inputs, stimuli or causes, 3) adding to the set of possible outputs, responses or effects (capabilities 1, 2, and 3 make it possible to produce creative teleological assembly instructions), 4) creating new instructions, and 5) increasing the processing capacity of the design process.

I have no idea what (1) means.
(2) is easily done via RM&NS - antibodies do this all the time.
(3) is easily done via RM&NS - any alteration in regulatory control fits this description.
(4) if a protein counts as an instruction, this is trivially done by RM&NS (specifically via gene duplication).
(5) I'm not sure what you mean here. Does it count if you make more ribosomes so you can make proteins faster? Or if you make more organisms, so you have more opportunities for mutation and selection?

I'm afraid I am having trouble distinguishing your intelligent design process from the known properties of genetic material, transcription, and translation.

If your mapping was not bijective but instead was only onto, and the claim was that f(x) -> {y} out of which very few z in {y} were "good", yet those z were reliably achieved anyway, then there might be a distinction. I don't think there is any evidence that this happens (despite your claim that it must), but this may be the way you want to set up the problem.

IP: Logged
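Rex's codon point can be shown in a few lines. Only the valine corner of the standard genetic code is spelled out below (GTA, GTC, GTG, GTT, as he states); the two-codon "protein" is an invented stand-in for the GFP reading frame, not real GFP sequence.

```python
# The codon -> amino-acid map is many-to-one, so several distinct DNA
# "genotypes" assemble the same protein product.

VALINE_CODONS = {"GTA", "GTC", "GTG", "GTT"}   # 4 of the 64 codons

def translate_start(second_codon):
    """Translate a toy two-codon reading frame: ATG + a variable codon."""
    return ("Met", "Val" if second_codon in VALINE_CODONS else "other")

# Four distinct genotypes collapse onto a single phenotype:
phenotypes = {translate_start(c) for c in VALINE_CODONS}
print(len(VALINE_CODONS), len(phenotypes))   # 4 1
```

This is exactly the non-injective f(x) -> {y} situation discussed at the end of the post: the forward map is well-defined, but it cannot be inverted to a unique genotype.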
Frances Member # 169 posted 04. February 2003 11:12

Warren states:

quote:
This logic is fundamentally flawed. There are causal relationships other than ‘functional mappings’ which can explain the observed pattern. The experimental design proposed here is designed to demonstrate that 1) the one to one mapping type of explanation does not fit the data and 2) there are other types of complex causal relationships which will fit the observed data.

You seem to be jumping to conclusions before the experimental data has even been collected. I would be interested to see, when you have performed your experiments, if the data still support your conclusions.

I doubt, btw, that Rex is arguing a one-to-one mapping of genotype to phenotype and vice versa. Rex is merely pointing out that RM&NS maps quite nicely to your definition of intelligent design process.

IP: Logged
RBH Member # 380 posted 04. February 2003 13:18

In the OP warren wrote:

quote:
The purpose of this thread is to discuss the experimentally verifiable hypothesis that "biological development is a biological design process which can not occur without the presence of information generating 'intelligent design processes'". The hypothesized existence of scientifically measurable and verifiable intelligent design processes as part of biological development processes would appear to contradict most of the widely held beliefs in both biology and ID. The fact that the hypothesis is unconventional should not make it an inappropriate topic for discussion here. (emphasis added)

My mention of Avida, with its distinction between genotypes and phenotypes, and with the ability to closely analyze the developmental mapping between them, was intended to move toward the "experimentally verifiable hypothesis" warren writes of in his first sentence. Rather than merely discuss it in an empirical vacuum, as we have so many such ideas, I would strongly prefer to see some systematically gathered data, to be able to evaluate the degree and force of the experimental verification.

In the OP warren specifies "four types of experimental design," all requiring a simulation model to execute. The simulation model has to come from somewhere, and until it exists and is running, discussion of the proposed hypothesis is unlikely to be fruitful. Then one might experimentally test the hypothesis; whether it would be verified is an empirical question, not an a priori given.

RBH

Edited to note that I added the emphasis to the quotation from warren's OP.

[ 04. February 2003, 15:11: Message edited by: RBH ]

IP: Logged
Evan Member # 164 posted 04. February 2003 14:05

RBH has found Avida, and listed its many features. Warren says that Avida would need "many enhancements" to model his view of the design process. Warren also, however, makes it clear that simulating the design process is what he is interested in.

So my questions to Warren are: 1) what specific enhancements would need to be made to Avida (both additions and subtractions) to make an accurate model of your concepts? and 2) what parts of Avida would be common to both the existing program and a program that simulates your version of the design process?

Please be specific. I don't think we need any further review of the concepts.

[ 04. February 2003, 14:06: Message edited by: Evan ]

IP: Logged
warren_bergerson Member # 262 posted 05. February 2003 09:45

Frances- Common interpretations of neo-Darwinian concepts suggest the existence of both a ‘one to many genotype to phenotype map’ and a ‘one to many phenotype to genotype map’. This would typically be interpreted to suggest a one to one genotype to phenotype map if you take neo-Darwinian concepts literally. If neo-Darwinian concepts are simply metaphysical or descriptive or non-causal, then of course mathematical principles need not apply.

RBH-

quote:
My mention of Avida, with its distinction between genotypes and phenotypes, and with the ability to closely analyze the developmental mapping between them, was intended to move toward the "experimentally verifiable hypothesis"

Avida ‘assumes’ a simple one to one mapping from simple code to simple phenotypes. Hardly an ability to closely analyze developmental mapping.

quote:
Rather than merely discuss it in an empirical vacuum, as we have so many such ideas, I would strongly prefer to see some systematically gathered data to be able to evaluate the degree and force of the experimental verification.

While your sentiments may be admirable, two points need to be kept in mind. First, it must be recognized that ‘existing genetics and evolutionary biology’ can not generate testable, predictive models of evolutionary processes. It is possible to generate models which appear to simulate some small components of evolutionary change under limited and highly dubious assumptions. Proposing techniques to produce predictive models and testable simulations is something new in evolutionary biology. Second, the models and simulations being proposed are based on engineering standards and concepts, not the standards and concepts typically associated with theoretical science. A biological system is viewed here as a complex machine.

The experimental paradigm being proposed is designed 1) to test the adequacy or inadequacy of certain types or classes of models or simulations and 2) to develop, by successive approximation, models and simulations of how the biological machines actually function. Using engineering standards, a model of evolution is not a one-step pass/fail process. On a gradual basis we develop knowledge of 1) what doesn't work and 2) what actually does or could work. As we start this gradual engineering process, we do not need to gather new information so much as analyze and interpret the existing body of knowledge.

IP: Logged
Frances Member # 169 posted 05. February 2003 12:41

Dear Warren,

I fail to see how you conclude, from one-to-many and many-to-one mappings, that one would 'typically interpret' them to suggest a one genotype to one phenotype map. Perhaps you could explain to me the mathematical foundations for such a conclusion? Are you also saying that one-to-many or many-to-one mappings cannot be mathematical? Please explain these 'common interpretations', since they seem to be anything but common.

IP: Logged
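The mapping question being argued here is easy to make concrete. The toy labels below (g1, g2, g3, P, Q) are arbitrary illustrations, not claims about any real genome: a many-to-one genotype-to-phenotype map has a perfectly well-defined one-to-many inverse, and neither direction forces a one-to-one map.

```python
# A many-to-one genotype -> phenotype map and its one-to-many inverse.
# Neither is "non-mathematical", and their composition is not the
# identity, so no one-to-one map is implied.

genotype_to_phenotype = {"g1": "P", "g2": "P", "g3": "Q"}   # many-to-one

phenotype_to_genotypes = {}                  # build the one-to-many inverse
for g, p in genotype_to_phenotype.items():
    phenotype_to_genotypes.setdefault(p, set()).add(g)

# Inverting phenotype 'P' recovers two genotypes, not one:
print(sorted(phenotype_to_genotypes["P"]))   # ['g1', 'g2']
```

In Avida terms (as described in the posts below by RBH), this is the situation where several different genotypes express the same phenotype, on which selection acts.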
gedanken Member # 594 posted 05. February 2003 15:49

Remember that Warren claims that a single genetic code does not map to a single phenotype. I think this is the basis of the issue as Warren brings it up. (Of course supporting his claim is a separate issue - but Warren's comments seem to be based on this.)

Without looking at an evolutionary algorithm in detail (Avida) I can't say for sure, but I would assume that the algorithm does not also simulate developmental processes that (Warren claims) could change the "phenotype" after the establishment of the genetic code. Does it do so?

(I'm just trying to establish the basis of what Warren is saying - not commenting on its validity.)

IP: Logged
RBH Member # 380 posted 05. February 2003 16:53

Gedanken wrote:

quote:
Remember that Warren claims that a single genetic code does not map to a single phenotype. I think this is the basis of the issue as Warren brings it up. (Of course supporting his claim is a separate issue - but Warren's comments seem to be based on this.) Without looking at an evolutionary algorithm in detail (Avida) I can't say for sure, but I would assume that the algorithm does not also simulate developmental processes that (Warren claims) could change the "phenotype" after the establishment of the genetic code. Does it do so?

Ahhh. Well, if that's what warren is saying, I'm just going to watch a while, until it's clearer how he plans to implement his model in order to experimentally verify it. As I understand Avida so far, it does not implement developmental variability in the mapping from gene to phene. I thought one question was whether there was a many-to-one mapping from gene strings to expressed phenotypes, which there is in Avida. In other words, several different genotypes can produce the same phenotype (to which the selective pressure applies), where "phenotype" is defined as the expressed behavior of the code. "Species" in Avida are typically (rather than atypically) composed of bunches of individual critters with several different genotypes. Avida's critters can (among other things) carry "junk" code along with the executing code.

(Added in edit: I should add that a "gene" in Avida is not a single assembly-language instruction; it is a sequence of instructions that execute some function, such as copy-self. Individual instructions are more analogous to bases or perhaps DNA triplets.)

By the way, on another thread you were trying to convince warren that populations can adapt to average trends underlying high-frequency variability. This paper, which I ran into in the course of a search for something else, speaks directly to that issue.

quote:
Dynamic Fitness Landscapes in Molecular Evolution
Authors: Claus O. Wilke (1), Christopher Ronnewinkel (2), Thomas Martinetz (2) ((1) Caltech, (2) Medizinische Universitaet zu Luebeck)
Comments: LaTeX, 60 pages, 14 eps figures, expanded introduction, minor corrections, submitted to Phys. Rep.
Subj-class: Biological Physics; Adaptation and Self-Organizing Systems; Soft Condensed Matter; Statistical Mechanics
Journal-ref: Phys. Rep. 349:395-446 (2001)

We study self-replicating molecules under externally varying conditions. Changing conditions such as temperature variations and/or alterations in the environment's resource composition lead to both non-constant replication and decay rates of the molecules. In general, therefore, molecular evolution takes place in a dynamic rather than a static fitness landscape. We incorporate dynamic replication and decay rates into the standard quasispecies theory of molecular evolution, and show that for periodic time-dependencies, a system of evolving molecules enters a limit cycle for $t\to\infty$. For fast periodic changes, we show that molecules adapt to the time-averaged fitness landscape, whereas for slow changes they track the variations in the landscape arbitrarily closely. We derive a general approximation method that allows us to calculate the attractor of time-periodic landscapes, and demonstrate using several examples that the results of the approximation and the limiting cases of very slow and very fast changes are in perfect agreement. We also discuss landscapes with arbitrary time dependencies, and show that very fast changes again lead to a system that adapts to the time-averaged landscape. Finally, we analyze the dynamics of a finite population of molecules in a dynamic landscape, and discuss its relation to the infinite population limit. (emphasis added)

I commend it to warren's attention both because it shows adaptation to time-averaged high-frequency environmental changes, and because it is an example of formal mathematical modeling in evolutionary research.

RBH

[ 05. February 2003, 17:16: Message edited by: RBH ]

IP: Logged
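The fast-change result RBH highlights can be illustrated with a two-type replicator toy model. This is not the Wilke et al. quasispecies machinery, just a sketch under assumed numbers: type A's fitness oscillates around 1.1 while type B's is a constant 1.0, so under fast oscillation A's time-averaged advantage should carry it to fixation even though A is transiently the worse type.

```python
import math

# Two competing replicators; x is the frequency of type A.  A's fitness
# oscillates rapidly around a mean of 1.1, B's is a constant 1.0.

def evolve(period, steps=20000, dt=0.01):
    """Integrate the replicator equation in an oscillating landscape."""
    x = 0.5
    for t in range(steps):
        fA = 1.1 + 0.5 * math.sin(2 * math.pi * t * dt / period)
        fB = 1.0
        mean = x * fA + (1 - x) * fB
        x += dt * x * (fA - mean)          # Euler step of dx/dt = x(fA - mean)
        x = min(max(x, 1e-12), 1 - 1e-12)  # keep the frequency in (0, 1)
    return x

# Fast oscillation: the population adapts to the time-averaged landscape,
# where A (average fitness 1.1) beats B (1.0), so A sweeps to fixation.
print(round(evolve(period=0.5), 3))
```

Slowing the oscillation down (a period comparable to the run length) makes the population track the instantaneous landscape instead, which is the other limiting regime discussed in the abstract.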
