Why no intermediaries (cont)

This post is to allow comments from this earlier post to continue.


397 Responses to “Why no intermediaries (cont)”


  1. 1 Petrushka December 1, 2010 at 4:39 pm

    Actually, performance was bad only on Internet Explorer. It works fine on Firefox and Chrome.

  2. 4 Zachriel December 1, 2010 at 5:54 pm

    gpuccio: Any explanatory theory of evolution must explain the genomic changes, because it’s the genome that evolves. The genome is largely the cause of the phenotype. If you don’t know or can’t explain the cause of genomic variation, you haven’t an explanatory theory of evolution.

    Are you actually suggesting Darwin, or anyone until the discovery of modern genetics, couldn’t have proposed a scientific theory of evolution? Of course, any theory has to be consistent with what is known about genetics, but that’s different than saying we have to know everything.

    Consider a simple example. We have bacteria. Some are more resistant than others to antibiotics. We show that this trait is hereditary. We then show that this trait arises spontaneously in non-resistant strains, and that whether this occurs is uncorrelated with the presence of antibiotics. We have therefore shown that the variation is random with respect to fitness, even though we have no idea what is occurring within the bacteria. We may not even understand the distinction between genome and phenome. Yet, we can observe and study this evolution occurring before our eyes.

    So, no. To propose a testable and scientific theory of evolution does not require knowing anything about genomes. However, and to reiterate, any valid theory has to be consistent with what we then discover about genetics.
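The bacteria experiment outlined above is essentially the logic of the classic Luria–Delbrück fluctuation test. As a rough sketch (all parameters are invented for illustration): if resistance mutations arise spontaneously during growth, before any antibiotic is present, the number of resistant cells per culture shows huge "jackpot" variance, whereas antibiotic-induced mutation would give Poisson-like variance roughly equal to the mean.

```python
import random

def grow_culture(generations=15, mut_rate=1e-4, rng=None):
    """Grow one culture from a single sensitive cell. Resistance mutations
    occur at random during divisions -- no antibiotic is ever present."""
    rng = rng or random.Random()
    sensitive, resistant = 1, 0
    for _ in range(generations):
        # Each sensitive cell divides; a few divisions yield a resistant mutant.
        new_mutants = sum(1 for _ in range(sensitive) if rng.random() < mut_rate)
        sensitive = 2 * (sensitive - new_mutants)
        resistant = 2 * resistant + 2 * new_mutants
    return resistant

counts = [grow_culture(rng=random.Random(seed)) for seed in range(50)]
mean = sum(counts) / len(counts)
variance = sum((c - mean) ** 2 for c in counts) / len(counts)

# Mutations that arose early found large resistant clones ("jackpots"),
# so the variance across cultures greatly exceeds the mean -- the signature
# that the variation arose before, and independently of, selection.
print(mean, variance)
```

Scoring resistant counts across parallel cultures, rather than peering inside any cell, is the sense in which the randomness of the variation can be established without molecular genetics.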

    Zachriel: Yet there is ample evidence of incremental and selectable evolution in macroscopic structures, e.g. the mammalian middle ear.

    gpuccio: But we have no idea of what genomic variation is the basis for that.

    We don’t know everything, but what we do know about developmental heterochrony and paedomorphosis of Meckel’s cartilages is entirely consistent with the Theory of Evolution.

  3. 5 Zachriel December 1, 2010 at 6:29 pm

    Mathgrrl: Actually, it’s a lot more complex than that. The real world has all the laws of physics and chemistry that are not typically modeled in a simulation.

    That’s right. The environment includes not just the ocean of air, but the terrestrial landscape, the food sources, the reproductive means, the competitors. And air is not so simple either. Drag changes dramatically depending on scale, so the wings of a bee and the wings of a bat work quite differently.

    Just as importantly, each organism brings to the table a whole panoply of existing traits, not least of which is how it acquires resources and reproduces. Evolution works by variation of existing structures, so that puts significant constraints on any adaptation.

    Mathgrrl: An interesting point you {gpuccio} raise is the relationship between complexity of the environment and complexity of the functions.

    It is an interesting question.

    Take a random protein sequence that has a minimal function. Send it through a few generations of evolution selecting for function. All we’ve added is simple sieve-like selection, yet the result is a highly specific and complex three-dimensional structure. Certainly the selection process hasn’t added significant information. Nor do we have to even know anything about the molecules involved. The complexity is driven by the intricate shape of the target molecules and by the intrinsic properties of the evolving protein, the environment.
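The "simple sieve-like selection" described above can be sketched as a toy: a random 35-letter sequence over the 20 amino-acid alphabet is mutated each round, and only the best match to a fixed constraint (standing in for the environment's "intricate shape") is amplified. Every number here is illustrative, not biochemical:

```python
import random

rng = random.Random(42)
ALPHABET = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino-acid letters
LENGTH = 35
# A fixed random "constraint" standing in for the environment/target shape.
CONSTRAINT = "".join(rng.choice(ALPHABET) for _ in range(LENGTH))

def fitness(seq):
    """How many positions satisfy the environmental constraint."""
    return sum(a == b for a, b in zip(seq, CONSTRAINT))

def mutate(seq, rate=0.02):
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c
                   for c in seq)

# Start from one random sequence with, at best, minimal function.
seed_seq = "".join(rng.choice(ALPHABET) for _ in range(LENGTH))
population = [seed_seq] * 100

for round_number in range(500):
    # The sieve: keep only the best variant, amplify it, mutate the copies.
    best = max(population, key=fitness)
    population = [mutate(best) for _ in range(100)]

best = max(population, key=fitness)
print(fitness(seed_seq), fitness(best))
```

The sieve itself only ever compares and copies; the specificity of the end product traces back to the constraint, which is the point being made: the complexity is driven by the environment rather than added by the selection step.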

  4. 6 Petrushka December 1, 2010 at 6:31 pm

    This seems to point to a longstanding trend amongst creationists.

    Find the most undeveloped niche in the science and find the gaps.

    In Darwin’s time it was the lack of transitional fossils and the apparent lack of sufficient time.

    The time problem was solved by the discovery of radiation, except in the minds of a few belligerently ignorant individuals.

    We’re still working on the fossil record, but occasionally we get a nice sequence showing the evolution of a complex structure. We’re beginning to get that with birds and feathers.

    The fossil record can no longer be counted on to support creationism, so we next turn to molecular biology and assert the lack of a complete record of every transition leading from inorganic atoms to the current ecosystem.

  5. 7 Petrushka December 2, 2010 at 3:11 am

    I generally use IE. At home I tested Chrome and forgot to go back.

    I tested Firefox today. It has its merits.

  6. 8 gpuccio December 3, 2010 at 2:40 pm

    I have been busy. I will try to catch up.

  7. 9 gpuccio December 3, 2010 at 2:42 pm

    Toronto:

    “Could you please explain the repeated use of the phrase, “like us”?”

    Us human designers. Human design is the best model we can use to understand design when the identity of the designer is not yet known in detail.

  8. 10 gpuccio December 3, 2010 at 2:51 pm

    MathGrrl:

    “Before addressing your other points, I would like to clarify what you mean by this. Are you asserting that the mechanisms of modern evolutionary theory, including mutation, crossover, and variable reproductive success have not been observed to change the allele frequency in populations?”

    I think I have always been clear on those points.

    Darwinian mechanisms can cause, and have been observed causing, microevolutionary variation, some of which can be selected. That can certainly result in changes of allele frequency (for that matter, even mere genetic drift can do that).

    But I suppose that, when we debate “evolution” here, we are referring to macroevolution, that is the emergence of new complex functions, of new species, of new phyla, and so on.

    The difference is not in the possible selection mechanisms: NS can act on both simple and complex traits, provided they are naturally selectable. NS is not especially interested in how complex a trait is.

    But we are. Functional complexity is exactly the point of this discussion, as far as I can understand.

    I have never doubted NS. I just believe that its role is small, because only a small number of functions are naturally selectable, and anyway only very simple ones can be offered to NS by RV mechanisms.

    An interesting point could be the possible role of NS for designed functions. That could be an interesting field of research.

    Intelligent selection remains however the most powerful tool in bottom up biological engineering.

  9. 11 gpuccio December 3, 2010 at 2:56 pm

    Zachriel:

    “Are you actually suggesting Darwin, or anyone until the discovery of modern genetics, couldn’t have proposed a scientific theory of evolution?”

    I am saying that Darwinian theory was vague and scarcely explanatory at the causal level, because he (like everyone else) had no idea of how variation arose, or where.

    It’s the neo-Darwinian theory, the so-called modern synthesis, that we really have to address.

    Now that we know much (certainly not all) about the molecular basis of heredity, we have to address the theory at that level. That is the only way we can build a satisfying explanation.

    There is no doubt that proposing a scientific theory of something about which most of the details are unknown is of scarce explanatory value, although it is a fair way to stimulate debate and research.

  10. 12 gpuccio December 3, 2010 at 3:01 pm

    Zachriel:

    “Yet, we can observe and study this evolution occurring before our eyes.”

    What we study is only the emergence of some selectable trait, and its natural selection in some specific environment.

    We still have no knowledge of the nature and cause of the variation. We cannot explain it.

    If, on the other hand, we go to molecular level, we understand that antibiotic resistance is of at least two broad types:

    a) Simple mutations of existing structures which alter the target points of the antibiotic

    b) Already existing complex molecules, like penicillinases, which are shared by HGT

    Neither mechanism requires the emergence of new complex functions. That’s why they happen, why they are observed, and why they can be satisfactorily explained by known molecular mechanisms of RV.

  11. 13 gpuccio December 3, 2010 at 3:05 pm

    Zachriel:

    “To propose a testable and scientific theory of evolution does not require knowing anything about genomes.”

    It’s rather strange to observe how all of you seek refuge in the vague meaning of the word “evolution” when you have no other argument.

    Please, try to specify each time what you mean by “evolution”, and many of your “arguments” will melt away by themselves.

    We have no need to know anything about genomes to observe the process of NS of existing variation. We have to know a lot about genomes to explain how naturally selectable variation can emerge.

    Can you point to any of my posts where I have denied that a naturally selectable trait can be naturally selected?

  12. 14 gpuccio December 3, 2010 at 3:07 pm

    Zachriel:

    “Evolution works by variation of existing structures, so that puts significant constraints on any adaptation.”

    That has often been one of my main points.

  13. 15 gpuccio December 3, 2010 at 3:09 pm

    Zachriel:

    “Send it through a few generations of evolution selecting for function.”

    You mean artificial designed evolution, I suppose, a la Szostak.

  14. 16 gpuccio December 3, 2010 at 3:11 pm

    Petrushka:

    “I generally use IE. At home I tested Chrome and forgot to go back.

    I tested FireFox today. It has its merits.”

    Different niches? 🙂

  15. 17 Petrushka December 3, 2010 at 3:29 pm

    I suspect the reason you don’t want to discuss the mammalian middle ear is that you know that it mostly involves the regulation of bone length. Not entirely unlike the modifications of facial bones we produce in wolves/dogs through selection.

    In fact most of the evolution observed through the fossil record is like this.

  16. 18 gpuccio December 3, 2010 at 3:39 pm

    Petrushka:

    “I suspect the reason you don’t want to discuss the mammalian middle ear is that you know that it mostly involves the regulation of bone length.”

    Don’t suppose.

    I know nothing of the mammalian middle ear, and am not interested in it.

    If you know something about the molecular basis for the regulation of bone length, that would be an interesting argument.

  17. 19 Petrushka December 3, 2010 at 4:35 pm

    If you know something about the molecular basis for the regulation of bone length, that would be an interesting argument.
    ______________________________

    Explain how the designer knows how to do this, in incremental steps, no less.

    Is the information content of the designer’s knowledge of sequences, proteins, and functionality greater or lesser than what would be needed to produce change through incremental steps?

    I hear the phrase Just So Story used to describe incremental evolution, but somehow the act of inventing entities having no assignable attributes, no capabilities or limitations, strikes me as comical. Time Cube comical.

    Perhaps you could point to one spot in any genome that was definitely designed, tell us when it was implemented, and tell us how the designer knew what effect it would have, both on the organism and on the ecosystem.

    This is the kind of detail that Behe is asking biologists to provide in the case of the flagellum.

    Now we can prove mathematically that the flagellum could evolve incrementally; the only issue is likelihood.

    Again, evolution provides a means that is observable. ID provides imagined entities that have never been observed.

  18. 20 Petrushka December 3, 2010 at 4:55 pm

    I’m going to risk making an analogy, however imperfect it may be.

    The genome is sometimes described as a recipe rather than a blueprint. Unlike a blueprint, the genome contains no measurements, no one-to-one correspondence between points on the instructions and the final object. DNA enables the sequential unfolding of the object.

    So consider something far simpler, but made from a recipe. How about an angel food cake? I don’t know the answer to this, but I’m curious how such a recipe could be designed except through trial and success.

    Perhaps you can explain how a physicist would conceive of this cake in the absence of similar objects, or how a designer would go about designing the recipe. What process is involved in anticipating the emergent properties of a good cake from first principles?

  19. 21 Toronto December 3, 2010 at 4:56 pm

    gpuccio: Us human designers. Human design is the best model we can use to understand design when the identity of the designer is not yet known in detail.

    But a designer who is like us has the same restrictions we do, and that includes an inability to see the future.

    Your designer must know specifically what a future environment must be like in order to design something successful and we can’t do that.

    We don’t even know what a future economic environment will be like 5 years in the future, and yet that environment is a product of us!

    Your designer can’t be like us, or else he is guaranteed to fail, just “like us”.

  20. 22 Petrushka December 3, 2010 at 5:24 pm

    On the subject of recipes, I’m wondering how the first yeast leavened breads arose. Do you suppose they were designed, or were the result of accident and selection?

    You claim that the Designer might be “like us.” I assume by that you mean a limited rather than omnipotent entity.

    I still think, rather “like Behe,” that ID needs to provide, at least as a thought experiment, a process other than evolution whereby a designer would acquire the knowledge of the emergent properties of matter that would be necessary in order to make recipes for complex beings.

  21. 23 gpuccio December 3, 2010 at 8:05 pm

    Petrushka and Toronto:

    I am really amazed. Your main, shared argument now seems to be that RV + NS can do better than intelligent design.

    I will never understand darwinists. How can you not see how ridiculous such a point is?

    Petrushka debates recipes and cakes, and not for a moment does she stop to consider that all the recipes and cakes in the world are designed things.

    Toronto is worried that a designer could not know what will be there in 5 years, while it seems that unguided processes know all, and regularly win lotteries betting on what they already know.

    A very trivial question: if a designer cannot know in advance what will be there in 5 years (which is possible), what prevents him from reassessing his design plans periodically, according to what he sees happening?

    Toronto says:

    “Your designer can’t be like us, or else he is guaranteed to fail, just “like us”.”

    Well, sometimes we fail, sometimes we don’t. Maybe the same is true of the designer. Who says the designer can’t fail? Maybe the Ediacaran explosion was a failure in the end, and the Cambrian explosion a fresh new attempt.

    One comment about how the designer should be “like us”.

    First of all, there are a few basic requirements that the designer must have, otherwise he would not be a designer. The first is that he must be a conscious, intelligent being. Design is by definition the output of cognitive representations into matter.

    A second point is that he must have purposes associated with his cognitive representations. Purpose is the real mark of design.

    But other things can be different. When I say that human design is the best model we have, I mean that we can start from human design to hypothesize strategies, tools of implementation and so on. That does not mean that, in the end, the strategies or tools of the biological designer must be the same as in human design. But we start with what we know, and look at the facts.

    That is exactly the opposite of what darwinists do. They become suddenly great fans of an omnipresent, omnipotent god, and are ready to reason: “such a god would never do that”. Maybe they identify a little with their model, and believe they are themselves omniscient.

    As for me, I stay empirical. I know human design, and I am sure that biological information is designed too. So, I start from what I know, and try to build models. From human design. And see how those models can explain, or not explain, what we know.

  22. 24 MathGrrl December 3, 2010 at 8:57 pm

    gpuccio,

    I have never doubted NS. I just believe that its role is small, because only a small number of functions are naturally selectable, and anyway only very simple ones can be offered to NS by RV mechanisms.

    I want to get back to the discussion of GAs (after all, I’m MathGrrl, not BioGrrl), but you’ve piled a number of implicit and explicit claims in a very short paragraph.

    You accept that the mechanisms identified by modern evolutionary theory can result in “small” changes. However you seem to think there is some limit to how many of these small changes can build up until they result in “large” changes. Do you have any evidence that such a limit exists? If so, do you have any proposed mechanism that enforces that limit?

    Further to your quoted paragraph, do you have any evidence to support your claim that “only a small number of functions are naturally selectable”? How about for the immediately following claim that “only very simple ones can be offered to NS by RV mechanisms”?

  23. 25 Toronto December 3, 2010 at 9:38 pm

    gpuccio: I am really amazed, Your main, shared, argument now seems to be that RV + NS can do better than intelligent design.

    I think the main thrust of my argument is that ID can’t do it at all.

    Your designer has an explicit target, but has no way of knowing what that target should be. How can you design “like us”, when “we”, use a spec, to tell us what that target is?

    Where is the designer’s spec?

    Evolution is not even on the table yet to compare your process with since you have yet to show us your process.

    Where does the designer get his information of the future?

    Whether evolution is true or not, your problem still remains.

    Where the designer gets his information is part of your process, not ours.

  24. 26 Zachriel December 3, 2010 at 10:16 pm

    gpuccio: I am saying that Darwinian theory was vague and scarcely explanatory at the causal level, because he (like everyone else) had no idea of how variation arose, or where.

    You may as well say Newton didn’t propose a scientific theory of gravity as he didn’t even have a vague idea of gravity’s cause.

    We can observe variation within and between generations. It doesn’t require knowledge of molecular genetics to form valid scientific generalizations about these observations or about evolution.

    Zachriel: Yet, we can observe and study this evolution occurring before our eyes.

    gpuccio: What we study is only the emergence of some selectable tract, and its natural selection in some specific environment.

    Yes.

    Zachriel: To propose a testable and scientific theory of evolution does not require knowing anything about genomes.

    gpuccio: It’s rather strange to observe how all of you seek refuge in the vague meaning of the word “evolution”, when you have no other argument.

    There is a great deal of research in evolutionary biology that doesn’t involve studying molecular genetics (though that facet is becoming increasingly important). When scientists study the changes in finch beaks over generations, they are studying evolution. When they study the origin of antibiotic-resistant bacteria, they are studying evolution. When they search for intermediate organisms in the rocks, they are studying evolution. When they sequence genomes and refine our understanding of phylogeny, they are studying evolution.

    gpuccio: Please, try to specify each time what you mean by “evolution”, and many of your “arguments” will melt away by themselves.

    Evolution is the change in the heritable traits of organic populations. The Theory of Evolution is an explanatory framework that encompasses a number of interrelated claims, including the mechanisms of evolution and its history.

  25. 27 Zachriel December 3, 2010 at 10:37 pm

    gpuccio: You mean artificial designed evolution, I suppose, a la Szostak.

    How does the scientist add information sufficient to account for the complex result? Please show the math.

    gpuccio: I know nothing of the mammalian middle ear, and am not interested in it.

    That’s fine. Morphological evolution has always been an important component of the evidence supporting the Theory of Evolution. Just don’t then claim there’s no evidence.

    The reason why mammalian ossicles are pertinent and interesting is because the embryonic data predicted the fossil data. The fossil data is particularly interesting because the transitionals show how an irreducibly complex structure can evolve in selectable increments.

    gpuccio: If you know something about the molecular basis for the regulation of bone length, that would be an interesting argument.

    Even without genetics, we can observe natural variation, its limits, and its novelties. Hence, we can form valid scientific theories. Though not everything is known, there is genetic support for the evolution of mammalian ossicles.

    Mallo, Formation of the Middle Ear: Recent Progress on the Developmental and Molecular Mechanisms, Developmental Biology 2001.

    And a little more recently,

    Martin & Ruf, On the Mammalian Ear, Science 2009: The partial resorption of Meckel’s cartilage and disconnection of the middle ear ossicles from the mandible in modern mammals are controlled by complex regulatory networks; mutant mouse studies have shown that changes in these networks can alter the timing of resorption and ossification, causing morphological transformations such as the permanent connection of middle ear ossicles and mandible.

  26. 28 Petrushka December 3, 2010 at 10:38 pm

    You say cake recipes are designed. I ask: do you think there has ever been a prize winning recipe designed without an iterative process of variation and selection?

    I ask: if there are 10 to the gazillion protein-coding sequences, how does the designer know which ones are best for the current need? Where does this information come from?

    Why is the source of information a problem for evilutionists, but not for Designers?

    Why is the time required to build an information database not a problem for the Designer?

  27. 29 Petrushka December 4, 2010 at 1:26 pm

    In the absence of any activity on the thread, let’s see if I can summarize some of gpuccio’s positions.

    1. The Designer is allowed to acquire information about functional sequences via experimentation (i.e., directed evolution).

    2. This information is then employed to engineer organisms or adaptive changes in organisms.

    3. Interventions, even multiple interventions, are employed to tweak the genomes of organisms.

    The intervention temptation is as old as science, and affected even Newton:
    ________________________________________
    Newton’s law of gravity enables you to calculate the force of attraction between any two objects. If you introduce a third object, then each one attracts the other two, and the orbits they trace become much harder to compute. Add another object, and another, and another, and soon you have the planets in our solar system. Earth and the Sun pull on each other, but Jupiter also pulls on Earth, Saturn pulls on Earth, Mars pulls on Earth, Jupiter pulls on Saturn, Saturn pulls on Mars, and on and on.

    Newton feared that all this pulling would render the orbits in the solar system unstable. His equations indicated that the planets should long ago have either fallen into the Sun or flown the coop–leaving the Sun, in either case, devoid of planets. Yet the solar system, as well as the larger cosmos, appeared to be the very model of order and durability. So Newton, in his greatest work, the Principia, concludes that God must occasionally step in and make things right:

    The six primary Planets are revolv’d about the Sun, in circles concentric with the Sun, and with motions directed towards the same parts, and almost in the same plane…. But it is not to be conceived that mere mechanical causes could give birth to so many regular motions…. This most beautiful System of the Sun, Planets, and Comets, could only proceed from the counsel and dominion of an intelligent and powerful Being.
    _________________________________________

    This is where I assert that ID as a theory of intervention is an old standby position, with a long history. Unfortunately it is a history of failure. It is based on ignorance and gaps.

    Of course, when I argued this at UD, I was placed in moderation.

  28. 30 MathGrrl December 4, 2010 at 1:54 pm

    Petrushka writes:

    This is where I assert that ID as a theory of intervention is an old standby position, with a long history. Unfortunately it is a history of failure. It is based on ignorance and gaps.

    Of course, when I argued this at UD, I was placed in moderation.

    And this is a good time to let gpuccio know that, while I believe him to be incorrect on several key points, I appreciate that he is willing to discuss the issues in a neutral forum. I hope others can be encouraged to venture outside of UD and discuss their positions where they can be challenged without those challenges being censored. gpuccio is showing much more confidence in his ideas than do the majority at UD.

  29. 31 Petrushka December 4, 2010 at 2:07 pm

    I appreciate that he is willing to discuss the issues in a neutral forum.

    ___________________

    Definitely.

    There are very few reasoned critiques of evolution, and those who are best qualified seldom venture into the realm of open discussion.

  30. 32 Toronto December 4, 2010 at 2:15 pm

    MathGrrl: I hope others can be encouraged to venture outside of UD and discuss their positions where they can be challenged without those challenges being censored. gpuccio is showing much more confidence in his ideas than do the majority at UD.

    Agreed. An open debate, which is what the ID side claims it wants, is just not possible if one side is being held to different standards.

  31. 33 gpuccio December 4, 2010 at 2:52 pm

    MathGrrl:

    “You accept that the mechanisms identified by modern evolutionary theory can result in “small” changes. However you seem to think there is some limit to how many of these small changes can build up until they result in “large” changes. Do you have any evidence that such a limit exists? If so, do you have any proposed mechanism that enforces that limit?”

    There are two reasons to believe there are very strict limits to what random variation can do.

    The first is mathematical. RV obeys the laws of statistics. In a digital string of nucleotides or amino acids, the combinatorial space (the search space) quickly becomes so huge that no random search can realistically find functional targets in it. That’s the whole substance of the concept of dFSCI.

    Therefore, no complex function requiring substantial functional information (I have suggested 150 bits as a threshold, and that is very, very generous indeed) can be found by a mere random search, whatever the tools of variation may be.

    IOWs, if you need 35 specific amino acids in specific positions for a function to arise, that function will never be found by RV.

    Moreover, there is neither any practical evidence, nor any theoretical reason, to think that complex functions are the sum of incremental simple selectable variations. That is simply not true, either in informatics or in biology.

    The second reason is empirical: in no biological observational model has a complex function ever arisen. The reasoning of Behe in TEOE shows clearly that a coordinated two-amino-acid functional variation is already extremely difficult to achieve, even in models with very high reproductive rates, very high population numbers, and extreme environmental pressure (see chloroquine resistance).

    3 or 4 coordinated amino-acid changes are probably out of the game.

    My threshold is a coordinated variation of 35 amino acids. That is definitely beyond what RV can ever achieve.
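For what it is worth, the arithmetic behind gpuccio's figures can be checked directly. This only computes the numbers (35 positions over a 20-letter alphabet) and takes no position on the inference drawn from them:

```python
import math

positions = 35       # specified amino-acid positions
alphabet = 20        # standard amino acids

sequences = alphabet ** positions          # size of the combinatorial space
bits = positions * math.log2(alphabet)     # information to pin down one sequence

# 20**35 is about 3.4e45 sequences, i.e. about 151.3 bits -- presumably
# where the ~150-bit threshold quoted above comes from.
print(f"{sequences:.2e}", round(bits, 1))
```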

  32. 34 gpuccio December 4, 2010 at 3:03 pm

    Mathgrrl:

    “Further to your quoted paragraph, do you have any evidence to support your claim that “only a small munber of functions are naturally selectable”? How about for the immediately following claim that “only very simple ones can be offered to NS by RV mechanisms”?”

    It is rather obvious that “only a small number of functions are naturally selectable”.

    Just think of the definition of “naturally selectable”. A function must be such that it confers a definite reproductive advantage. Even then, as Zachriel correctly pointed out, it will not always be selected.

    Isolated functions (deriving, for instance, from a single new protein) which can confer a reproductive advantage are really rare. Especially when the replicator is already very complex, structured and optimized (as is the case even for the simplest replicators we know of, bacteria and archaea). I have often argued that the more complex a system is, the more difficult it is to improve it by simple additions. Usually, you need a complex set of changes to achieve a meaningful transformation.

    That’s why it is obvious that, while an almost infinite number of interesting functions can be defined, and most of them, if not all, can certainly be intelligently measured and selected, the subset of naturally selectable functions is certainly a minuscule subset of all possible useful functions.

    The concept that “only very simple ones can be offered to NS by RV mechanisms” derives directly from the considerations in my previous posts. And since the subset of naturally selectable functions is certainly small, the sub-subset of simple naturally selectable functions is really tiny. Indeed, even if simple variation occurs all the time, we have really few examples of naturally selectable simple variation.

    Remember the recent study of single-nucleotide mutations, which found no increase in fitness in any case, a minority of neutral mutations, and a vast majority of slightly detrimental variations. I am sure those results will be confirmed even in larger experimental studies.

  33. 35 gpuccio December 4, 2010 at 3:06 pm

    Petrushka:

    “You say cake recipes are designed. I ask: do you think there has ever been a prize winning recipe designed without an iterative process of variation and selection?”

    An iterative process of intelligently decided, guided, observed and interpreted variation and selection. QED.

    As you correctly state in your following post (about my positions):

    “The Designer is allowed to acquire information about functional sequences via experimentation (i.e., directed evolution).”

    Strange your use of the capital letter for the designer.

  34. 36 gpuccio December 4, 2010 at 3:10 pm

    Zachriel:

    “How does the scientist add information sufficient to account for the complex result? Please show the math.”

    By measuring the function he is interested in, even at trivial levels, and by amplifying it before a further mutation round, and so on.

    The “reward” implicit in amplification is the key point. What is recognized, through intelligent measurement, is amplified. Its probabilistic resources are amplified by many, many orders of magnitude. Therefore the following mutation round has many orders of magnitude more probability of finding some result.
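The amplification argument can be put in rough numbers. In this toy comparison (every figure is invented for illustration), two specific changes are demanded either jointly in one round, or one at a time with the intermediate hit amplified back to full library size between rounds:

```python
p = 1e-6         # per-copy chance of one specific change appearing in a round
library = 10**6  # copies screened per round

# No intermediate reward: some single copy must carry both changes at once.
p_joint = 1 - (1 - p * p) ** library

# With reward: the first hit is recognized, amplified to fill the library,
# and only then does the second change have to be found.
p_single_round = 1 - (1 - p) ** library
p_stepwise = p_single_round ** 2

print(p_joint, p_stepwise)
```

With these numbers the stepwise route succeeds with probability near (1 - 1/e)² ≈ 0.4, while the joint demand stays near 10⁻⁶; the amplification of recognized intermediates is doing all of the work.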

  35. 37 gpuccio December 4, 2010 at 3:12 pm

    Guys:

    I appreciate the discussion too. Especially when it touches significant points, and does not focus on mere antagonism between the two positions.

    After all, we are not at a political debate here (thank God!).

  36. 38 gpuccio December 4, 2010 at 3:14 pm

    Petrushka:

    “Of course, when I argued this at UD, I was placed in moderation.”

    I am sorry that you were put in moderation, and I am happy that you will not here.

    But excuse me if your historical metaphors don’t move me to tears…

  37. 39 Zachriel December 4, 2010 at 3:44 pm

    Mathgrrl: I appreciate that he is willing to discuss the issues in a neutral forum.

    Petrushka: Definitely.

    Heartily agreed.

  38. 40 Zachriel December 4, 2010 at 3:58 pm

    gpuccio: In a digital string of nucleotides or amino acids, the combinatorial space (the search space) becomes quickly so huge that no random search can realistically find functional targets in it.

    That’s rather easy to show is not correct. Evolutionary algorithms can search vast and complex spatially-ordered spaces much quicker than random search.

    gpuccio: IOWs, if you need 35 specific amino acids in specific positions for a function to arise, that function will never be found by RV.

    Not necessarily. It may be coopted from other, simpler functional sequences. In any case, we know that more than 1 in 10^11 random sequences form functional proteins, so at least some functional proteins are available.

    gpuccio: The second reason is empirical: in no biological observational model has a complex function ever arisen.

    Not sure what you mean by “observational model,” but we have transitional fossils for the evolution of the mammalian middle ear showing how incremental, selectable changes resulted in an irreducibly complex structure. This is supported by embryonic and genetic data.

    gpuccio: The reasoning of Behe in TEOE shows clearly that a coordinated two-amino-acid functional variation is already extremely difficult to achieve, even in models with very high reproductive rate, very high population numbers, and extreme environmental pressure (see chloroquine resistance).

    Yes, rare events are rare. That’s not news. But that’s not what evolutionary theory posits. Do you know how many mutations are actually involved in chloroquine resistance? Ten in the pfcrt gene of Plasmodium falciparum.

  39. 41 Zachriel December 4, 2010 at 4:08 pm

    gpuccio: How does the scientist add information sufficient to account for the complex result? Please show the math.

    gpuccio: By measuring the function he is interested in, even at trivial levels, and by amplifying it before a further mutation round, and so on.

    Measuring? They merely took a sampling of those that bound to the target molecule. It’s sort of like they were pressing clay into a mold so that it fit.

    Consider a little girl. She has a sea shell with a very pretty (complex) pattern. She lays it down and notices that it leaves an impression of the pattern on the wet sand. So she pushes it down onto the sand and makes a more perfect impression. Where is the information coming from to form the pattern in the sand? Certainly not the little girl.

    What if instead of the little girl, the shell is washed up on the beach and its weight makes the impression? Where does the information come from? From the shell, of course. From the ‘environment’.

    In Szostak’s case, the information comes from the complex shape of the target molecule, and essentially, they are molding a protein to fit the target by progressively pressing it into place. When they are done, it fits like a key, just like a shell fits its impression in the sand.

  40. 42 Petrushka December 4, 2010 at 4:47 pm

    It is nonsense to assert that evolutionary algorithms cannot deal with large search spaces.

    They can routinely deal with the travelling salesman problem where the number of stops is 10,000. I believe that makes the number of possible routes 10,000! (factorial).

    There are two important aspects of their solutions. First, they are never guaranteed to be the best, just incrementally better than the previous iteration.

    Second, they do not “search” the space; they try small variations of the current generation and preserve the best. This means that the history of generations will form a nested hierarchy.
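
Those two aspects (never guaranteed optimal, only no worse than the previous iteration; and trying only small variations of the current solution rather than sampling the whole space) can be shown with a minimal (1+1) evolutionary loop on a toy travelling-salesman instance. The coordinates are made up; this is a sketch of the scheme, not a serious solver.

```python
import random

random.seed(0)

# Toy instance: 30 random cities on the unit square (made-up data).
N = 30
pts = [(random.random(), random.random()) for _ in range(N)]

def tour_length(order):
    # Total length of the closed tour visiting the cities in this order.
    return sum(((pts[a][0] - pts[b][0]) ** 2 +
                (pts[a][1] - pts[b][1]) ** 2) ** 0.5
               for a, b in zip(order, order[1:] + order[:1]))

def small_variation(order):
    # Reverse one random segment (a 2-opt style move): a small change
    # to the current tour, not a fresh sample from the whole space.
    i, j = sorted(random.sample(range(N), 2))
    return order[:i] + order[i:j + 1][::-1] + order[j + 1:]

# (1+1) loop: try a small variation, keep it only if it is no worse.
tour = list(range(N))
best = tour_length(tour)
for _ in range(5000):
    candidate = small_variation(tour)
    length = tour_length(candidate)
    if length <= best:
        tour, best = candidate, length
```

Note that the loop never enumerates the ~10,000!-sized route space; it only ever walks from the current tour to a neighbouring one.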

    Biological evolution does not explore every possible viable construction, only those reachable through small, incremental changes, that is why fossil evidence is important.

    I find your belligerent disdain for the history of science rather sad. You cannot point to a single instance in the history of science where the assumption of invisible entities has produced any positive results.

    What kind of assumptions would you make if asked to figure out how the pyramids were constructed, or Stonehenge? What assumptions do you make about stage magic, in the absence of explanation?

    My point is that one set of assumptions leads to testable hypotheses, and one leads to nothing.

    You cannot calculate the probability of an event, such as the evolution of a protein domain, unless you know the actual history. You can assert that it could not arise incrementally, but unless you know the actual history, you are just imagining stuff.

    Scientists also imagine scenarios. The difference is that they imagine scenarios that require regular processes. Which limits their imagination, and requires them to demonstrate that regular processes are sufficient. This can be difficult, and in the case of astronomy, it can take centuries.

    It requires little knowledge or effort to do a literature search and tease out gaps in knowledge. Much more effort to add to that knowledge.

  41. 43 gpuccio December 4, 2010 at 6:38 pm

    Zachriel:

    “That’s rather easy to show is not correct. Evolutionary algorithms can search vast and complex spatially-ordered spaces much quicker than random search.”

    But, as I have tried to argue, evolutionary algorithms are no example of RV + NS. Therefore, what I said is correct.

    “Not necessarily. It may be coopted from other, simpler functional sequences.”

    But, as I have said in the immediately following paragraph:

    ‘Moreover, there is neither any practical evidence, nor any theoretical reason, to think that complex functions are the sum of incremental simple selectable variations. That is simply not true, neither in informatics nor in biology.’

    You go on:

    “In any case, we know that more than 1 in 10^11 random sequences form functional proteins, so at least some functional proteins are available.”

    Again you repeat your catechism. What a pity that the “functional proteins” you are referring to are in no way naturally selectable.

    “Not sure what you mean by “observational model,””

    Something which has been observed happening in nature or in the lab.

    “but we have transitional fossils for the evolution of the mammalian middle ear showing how incremental, selectable changes resulted in an irreducibly complex structure. This is supported by embryonic and genetic data.”

    And telling us nothing about the cause or explanation of changes.

    “Yes, rare events are rare. That’s not news. But that’s not what evolutionary theory posits. Do you know how many mutations are actually involved in chloroquine resistance? Ten in the pfcrt gene of Plasmodium falciparum.”

    Rare events are rare. And even rapidly replicating huge populations are rare. What about mammals?

    About the ten mutations, reference please?

    “Measuring? They merely took a sampling of those that bound to the target molecule. It’s sort of like they were pressing clay into a mold so that it fit.”

    No. They separated the molecules which stuck to fixed ATP. That’s a common way of separating molecules from a population through some property of the molecules. It is, in all respects, a way to measure a property at some (very low) threshold. With a binary result.

    “In Szostak’s case, the information comes from the complex shape of the target molecule, and essentially, they are molding a protein to fit the target by progressively pressing it into place. When they are done, it fits like a key, just like a shell fits its impression in the sand.”

    The ATP molecule is the information used by Szostak to select molecules which stick to ATP. The designer (Szostak) has chosen the ATP molecule because he wants to find molecules that bind ATP. That is a first important choice. ATP is not there naturally. The designer chooses a biologically active molecule (one which can be useful for a propaganda paper) and fixes it to a system so that he can separate molecules which present even a weak biochemical binding.

    Then he takes out, from the 10^11 random molecules, those few which bind loosely to ATP. Then he amplifies them to study them better (correct). Then he sees that they are really trivial molecules, with very loose binding to ATP: nobody would really be impressed by that finding.

    So he amplifies them and mutates them, for a few rounds, each time selecting those which stick better to ATP (again measuring the expected function, each time at a higher threshold), and again isolating and amplifying and mutating the best results.

    Finally he gets a more presentable molecule (not that presentable, after all, but he is contented just the same), and can write his paper to “demonstrate” that functional sequences are spontaneously present in random libraries, and get rid of all those silly ID people who say the contrary.

    So that people like you may repeat that single, dumb phrase as a credo to all non-believers.

  42. 44 gpuccio December 4, 2010 at 6:42 pm

    Petrushka:

    “It is nonsense to assert that evolutionary algorithms cannot deal with large search spaces.”

    Sometimes I wonder if your misrepresentations of what I say are in good faith or not.

    I have never asserted that. I have rather asserted, for a lot of posts here, and many times elsewhere, that “evolutionary algorithms” are not examples of RV + NS, and that’s the reason why, sometimes, they can really deal with large search spaces.

    Like many other designed algorithms.

  43. 45 gpuccio December 4, 2010 at 6:59 pm

    Petrushka:

    “I find your belligerent disdain for the history of science rather sad.”

    No disdain at all. My only disdain is for your bad philosophy of science, and for your gross exploitation of the history of science for your ideological purposes.

    “You cannot point to a single instance in the history of science where the assumption of invisible entities has produced any positive results.”

    There is always a first time. In science, many “first times” have built up our present knowledge.

    “What kind of assumptions would you make if asked to figure out how the pyramids were constructed, or Stonehenge?”

    That intelligent designer built them. And I would build models of how they were built, verifying them with the data.

    “What assumptions do you make about stage magic, in the absence of explanation?”

    That an intelligent designer (the magician) effects it. And I would build models to try to understand how.

    “My point is that one set of assumptions leads to testable hypotheses, and one leads to nothing.”

    Let me guess… I suppose your set is the good guy, isn’t it?

    “You cannot calculate the probability of an event, such as the evolution of a protein domain, unless you know the actual history.”

    I can calculate it in a model. Give an explicit model, supported by facts, and we can calculate the probability of the random parts of the model.

    Non-explicit models, models which rest on unproved possibilities (“after all, functional intermediaries could exist even if nobody has ever seen one”) and similar just-so stories are simply not scientific models.

    “Scientists also imagine scenarios. The difference is that they imagine scenarios that require regular processes. Which limits their imagination, and requires them to demonstrate that regular processes are sufficient. This can be difficult, and in the case of astronomy, it can take centuries.”

    I really hope we must not give the absurd darwinian scenario a few more centuries, before we understand it is wrong!

    “It requires little knowledge or effort to do a literature search and tease out gaps in knowledge. Much more effort to add to that knowledge.”

    What a pity that the problem is not in gaps, but in the inadequate nature of the explanatory model.

    Why don’t you read Abel’s paper about controls and constraints, and try to understand why non-designed processes can never, never produce that kind of prescribed information described by Abel, which is the same as my dFSCI.

    Darwinists must really be intellectually blind if they cannot understand such a simple concept as “functional information”: the amount of specific information necessary to achieve a function.

  44. 46 Petrushka December 4, 2010 at 7:39 pm

    I have never asserted that. I have rather asserted, for a lot of posts here, and many times elsewhere, that “evolutionary algorithms” are not examples of RV + NS, and that’s the reason why, sometimes, they can really deal with large search spaces.

    Pardon my laughter, but Dembski spent ten years misunderstanding and misrepresenting 10 lines of BASIC code. I’m not sure he understands it yet.

    Certainly no one in the ID movement understands what an evolutionary algorithm is or what its capabilities are.

    Here’s a simple test: write out, in pseudocode, an evolutionary algorithm that models evolution as biologists understand it. Point out the differences between your algorithm and the algorithms used by population geneticists.

  45. 47 Petrushka December 4, 2010 at 7:52 pm

    Ah yes, Abel and the Origin of Life Foundation:

  46. 48 Toronto December 4, 2010 at 8:18 pm

    Petrushka: “You cannot point to a single instance in the history of science where the assumption of invisible entities has produced any positive results.”

    gpuccio: There is always a first time. In science, many “first times” have built up our present knowledge.

    gpuccio,

    I think this statement verifies what I think the ID movement is actually trying to do and that is protect your Intelligent Designer, the Christian God.

    You have just waved your hand at science because it risks your religious investment.

    Show me I’m wrong by coming back with a scientific reason why science should pre-suppose invisible intervention.

  47. 49 Petrushka December 4, 2010 at 8:43 pm

    Actually we have thousands of years of history before the invention of science in which attempts were made to establish the existence of invisible entities.

    But maybe if we keep putting our hands on the hot stove, the next time it won’t burn.

    I’m still waiting for GP’s description of how he would investigate the construction of the pyramids or Stonehenge, or what assumptions he would make regarding stage magic.

    Is there some heuristic value in assuming, in the absence of any at-hand explanation, that space aliens were involved, or that stage magic is real magic?

    Suppose many people put forth conventional rational explanations over many years and they all prove to be inadequate or incomplete?

  48. 50 Toronto December 4, 2010 at 8:46 pm

    gpuccio,

    I thank you for a lively discussion, but when what is supposed to be a scientific conversation has one side allow for invisible entities, it’s no longer a scientific debate at all.

  49. 51 Zachriel December 4, 2010 at 9:41 pm

    gpuccio: But, as I have tried to argue, evolutionary algorithms are no example of RV + NS.

    This is your claim.

    gpuccio: In a digital string of nucleotides or amino acids, the combinatorial space (the search space) becomes quickly so huge that no random search can realistically find functional targets in it.

    Evolution is not a random search, nor does it resemble a random search in the way it performs. Evolutionary search is limited in some fundamental ways. For instance, much of the search space may be unavailable to an evolutionary search.

    Leaving that aside, your argument is that the space is large, so evolution won’t work. The actual answer depends on the quality of the search space, not the quantity. An evolutionary algorithm is a valid test of that claim.

    gpuccio: Moreover, there is neither any practical evidence, nor any theoretical reason, to think that complex functions are the sum of incremental simple selectable variations.

    Yes. You repeat the claim, but that doesn’t strengthen your position. There is ample reason to think that complex functions can incrementally and selectively evolve.

    gpuccio: And telling us nothing about the cause or explanation of changes {regarding the evolution of the mammalian middle ear}

    Small genetic changes bring about a change in the timing of developmental events. Each step is selectable and incremental. We have the fossil evidence to support the transition. Molecular, developmental, and fossil evidence provide ample support, and considering that the developmental data predicted the fossil data, generations before the fossils were discovered, it’s quite a phenomenal example.

    Keep in mind that in science, the data trumps. When scientists predict the contents of rocks from the examination of developing embryos, it lends immense credibility to the science.

    gpuccio: What about mammals?

    Rare events are rare in mammals, too. But the Theory of Evolution doesn’t depend on implausibly improbable events, but on selection from among natural variations.

    gpuccio: About the ten mutations, reference please?

    Lim et al., pfcrt Polymorphism and Chloroquine Resistance in Plasmodium falciparum Strains Isolated in Cambodia, Antimicrobial Agents and Chemotherapy 2003: The detection of an intermediate haplotype from a susceptible area with 76T/220A, suggests that acquisition of chloroquine resistance might be a stepwise process, during which accumulation of point mutations modulates the response to chloroquine.

    Chen et al., pfcrt Allelic Types with Two Novel Amino Acid Mutations in Chloroquine-Resistant Plasmodium falciparum Isolates from the Philippines, Antimicrobial Agents and Chemotherapy 2003: Mutations in the pfcrt and pfmdr1 genes have been associated with chloroquine resistance in Plasmodium falciparum. Ten and five mutations, respectively, have been identified in these genes from chloroquine-resistant parasites worldwide.

  50. 52 Zachriel December 4, 2010 at 9:46 pm

    gpuccio: The ATP molecule is the information used by Szostak to select molecules which stick to ATP. The designer (Szostak) has chosen the ATP molecule because he wants to find molecules that bind ATP. That is a first important choice. ATP is not there naturally.

    ATP is not only a natural biological molecule, but ATP binding is an important biological function.

    gpuccio: Then he sees that they are really trivial molecules, with very loose binding to ATP: nobody would really be impressed by that finding.

    Except other biologists, of course.

    gpuccio: Finally he gets a more presentable molecule (not so much presentable, after all, but he is contented just the same), and can write his paper to “demonstrate” that functional sequences are spontaneously present in random libraries, and get rid of all those silly ID people who say the contrary.

    You really don’t understand scientists very well. Szostak did it because it illuminates an important problem, artificial evolution allows the exploration of the fitness landscape, the hypothesis follows from abiogenetic theory, and the technique may even have therapeutic uses.

  51. 53 Petrushka December 4, 2010 at 9:47 pm

    Science deals with the invisible quite well. The problem is the lack of attributes. Unless you consider having the capabilities to do whatever it is that was done an attribute.

  52. 54 Petrushka December 5, 2010 at 12:12 am

    and get rid of all those silly ID people who say the contrary.
    ____________________________

    Are ID people prone to say things contrary to fact? You seem to have a low opinion of them.

  53. 55 gpuccio December 5, 2010 at 4:25 am

    Petrushka:

    “Here’s a simple test: write out in pseudo code, an evolutionary algorithm that models evolution as biologists understand it. Point out the differences between your algorithm and and the algorithms used by population geneticists.”

    I have repeatedly stated the necessary properties that would make an algorithm similar to RV + NS:

    a) Complete independence of the environment and the replicators (environment programmed in the blind).

    b) True replicators, which replicate using the environment resources.

    c) Development of true naturally selectable complex functions in the replicators: new complex functions which really enhance the replication rate of the replicators, without being measured and actively rewarded by the environment.
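
Criteria (b) and (c) can at least be gestured at in code: replicators that copy themselves only out of resources they manage to capture, with no fitness function ever evaluated and no reward ever handed out by the program. Everything here (the single heritable ‘efficiency’ trait, the costs and rates) is invented for illustration; it is a cartoon of the scheme, not a model of biology, and it does not attempt criterion (a).

```python
import random

random.seed(2)

RESOURCE_PER_TICK = 100.0   # fixed trickle of environmental resource
COPY_COST = 10.0            # resource a replicator must spend to copy itself
CAPACITY = 200              # the environment supports only so many

class Replicator:
    def __init__(self, efficiency):
        self.efficiency = efficiency   # heritable trait (hypothetical)
        self.store = 0.0               # resource gathered so far

    def child(self):
        # Inheritance with small random variation.
        return Replicator(max(0.01, self.efficiency + random.gauss(0, 0.05)))

pop = [Replicator(random.uniform(0.1, 1.0)) for _ in range(20)]

for tick in range(300):
    # Replicators split the resource in proportion to how well they gather
    # it; nothing "measures" them or rewards a function directly.
    total = sum(r.efficiency for r in pop)
    for r in pop:
        r.store += RESOURCE_PER_TICK * r.efficiency / total
    # Any replicator that has gathered enough pays the cost and copies itself.
    offspring = [r.child() for r in pop if r.store >= COPY_COST]
    for r in pop:
        if r.store >= COPY_COST:
            r.store -= COPY_COST
    pop = (pop + offspring)[-CAPACITY:]   # limited resources cap the population

mean_efficiency = sum(r.efficiency for r in pop) / len(pop)
```

Replication rate here is an emergent consequence of resource capture rather than an explicitly rewarded score, which is the distinction criterion (c) turns on.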

  54. 56 gpuccio December 5, 2010 at 4:32 am

    Petrushka:

    “Ah yes, Abel and the Origin of Life Foundation:”

    Thank you for the photos. I am not sure what they mean.

    But maybe I understand. Maybe a rich building would have impressed you more.

    Instead, I give you a link:

    http://www.bentham.org/open/tocsj/articles/V004/14TOCSJ.pdf

    But maybe you are more interested in Abel’s car than in his thought.

    I must still get accustomed to the vast range of darwinist arguments.

  55. 57 gpuccio December 5, 2010 at 4:37 am

    Toronto:

    “I thank you for a lively discussion, but when what is supposed to be a scientific conversation has one side allow for invisble entities, it’s no longer a scientific debate at all.”

    I had never known that science was only interested in visible objects. Was that bizarre concept revealed to you by God himself?

    And your definition of “visible”, please? What is this, a new advancement in the modern scientistic dogma?

    No more “methodological naturalism”: from now on, “methodological visibilism”?

    I must have missed something, in my philosophy of science.

  56. 58 gpuccio December 5, 2010 at 4:51 am

    Zachriel:

    “This is your claim.”

    Yes, it is. Discussed in detail here.

    “Evolution is not a random search, nor does it resemble a random search in the way it performs.”

    The neo-darwinian model (which is the only thing I am debating here) is RV + NS. RV is a random search.

    “Evolutionary search is limited in some fundamental ways. For instance, much of the search space may be unavailable to an evolutionary search.”

    That is a strange concept (and IMO completely wrong). Do you mean that not any mutation may happen? Would you elaborate, please?

    “…your argument is that the space is large, so evolution won’t work. The actual answer depends on the quality of the search space, not the quantity.”

    And what do you think of the “quality” of the proteome search space?

    “An evolutionary algorithm is a valid test of that claim.”

    Only if it satisfies the requirements I have detailed.

    “Yes. You repeat the claim, but that doesn’t strengthen your position. There is ample reason to think that complex functions can incrementally and selectively evolve.”

    You repeat the claim, but that doesn’t strengthen your position.

    I said that “there is neither any practical evidence, nor any theoretical reason, to think that complex functions are the sum of incremental simple selectable variations.”

    You smartly bypass “practical evidence”, and affirm “ample reason”, without ever detailing it. I suppose you are strengthening my position.

    “…it’s quite a phenomenal example.”

    Of what? Of molecular variation explained by the neo darwinian model? I can’t see how or why.

    “…it lends immense credibility to the science.”

    I have never doubted the credibility of science. I do doubt the credibility of the neo darwinian model.

    “Rare events are rare in mammals, too. But the Theory of Evolution doesn’t depend on implausibly improbable events, but on selection from among natural variations.”

    Is that an answer? I suppose my argument was: if an event is rare in protozoa, how much more rare will the same type of event be in mammals? Is that more clear now?

    I will come back tomorrow about the references.

  57. 59 gpuccio December 5, 2010 at 4:59 am

    Zachriel:

    “ATP is not only a natural biological molecule, but ATP binding is an important biological function.”

    That makes Szostak’s deliberate intelligent choice even more deliberate and intelligent.

    “Except other biologists, of course.”

    Maybe not even them.

    “You really don’t understand scientists very well. Szostak did it because it illuminates an important problem, artificial evolution allows the exploration of the fitness landscape, the hypothesis follows from abiogenetic theory, and the technique may even have therapeutic uses.”

    You really don’t understand scientists very well. If Szostak’s purposes had been those you state, he would have made a study about protein engineering.

    And you don’t understand my argument either. I am not saying that exploring artificial evolution is useless. Indeed, it is one of our main sources of information.

    Even what Szostak did is a useful source of information.

    My point is completely different:

    Szostak declared an explicit aim for his study (exploring functional molecules spontaneously occurring in random libraries), and then made a study using artificial evolution, which had nothing to do with his explicit aim, and then declared wrong conclusions about the original aim of the study. That is intellectual bias, at least.

  58. 60 gpuccio December 5, 2010 at 5:03 am

    Petrushka:

    “Science deals with the invisible quite well.”

    Please, explain that to Toronto.

    “The problem is the lack of attributes. Unless you consider having the capabilities to do whatever it is that was done an attribute.”

    Attributes are describable in “what was done”. dFSCI is an attribute.

    Moreover, being conscious is an attribute (from our empirical experience). The process of design has a lot of attributes in our experience: representing conscious cognitions and purposes, inputting those representations into matter through what Abel calls “configurable switches”, and so on.

  59. 61 gpuccio December 5, 2010 at 5:11 am

    Petrushka:

    “Are ID people prone to say things contrary to fact? You seem to have a low opinion of them.”

    ID people, in my phrase, are saying things contrary to the concept that “functional sequences are spontaneously present in random libraries”, obviously with specific definition and quantification of the concept of function.

    It’s exactly the manipulation of that concept that is at the basis of Szostak’s “trick”: he manipulates the spontaneous “function” to make it appear a little more “function” than it really is. That is simply wrong. That is simply unfair.

    If you want to limit your statements to the concept that simple, trivial and useless biochemical bindings can easily be found in a random library, please be my guest. I have no objection to that.

  60. 62 Petrushka December 5, 2010 at 5:38 am

    Moreover, being conscious is an attribute (from our empirical experience).

    +++++++++++++++++++++

    OK, so your invisible friend has the attribute of consciousness. And you know this how?

    Doesn’t it strike you as even a little bit odd that the entity you have invented has exactly the attributes — intelligence, knowledge, super powers — that are required to implement your imaginary program of intervention on a global scale? Maybe on a universal scale?

    How about an attribute that can be observed and tested? Even dark matter and dark energy have testable attributes, even though invisible.

  61. 63 Toronto December 5, 2010 at 1:24 pm

    gpuccio,

    I am talking about “your” invisible entity, the “conscious intelligent” one.

    Why would you think I am talking of anything else?

    Those things not visible to the naked eye, can be “seen” in other ways, by the use of microscopes, particle accelerators, etc.

    We are talking about your invisible intelligent designer who somehow manages to elude being tested by us.

    If he were “like us”, we would have been able to detect his presence with processes that can detect intelligent beings with our attributes.

    But he has attributes “unlike us”, that enable him to live for thousands of years and have the foresight to design for a future known to him but not us.

    If he instead used feedback from the environment, then he would use the same process evolution uses.

    How could you tell the difference then?

  62. 64 Zachriel December 5, 2010 at 2:42 pm

    gpuccio: a) Complete independence of the environment and the replicators (environment programmed in the blind).

    Virtually all evolutionary algorithms separate the environment from the replicators. That’s rather the point.

    Your real misunderstanding is “environment programmed in the blind”. Random, chaotic landscapes are not amenable to evolutionary exploration. But the world is not mathematically chaotic, but exhibits a great deal of order, e.g. spatially.

    gpuccio: b) True replicators, which replicate using the environment resources.

    Yes. We might put the food on hills, so that the higher on the hill the replicator is, the more resources it has access to. We might call this a “fitness landscape.” Or we might simulate a world with food resources scattered about, and a replicator with primitive, uncoordinated motility. There are endless possibilities.

    Of course, most evolutionary algorithms don’t have a high degree of verisimilitude. We can, however, apply evolutionary algorithms to various environments to determine their limitations, and we can explore particulars of evolution, such as showing how complexity can evolve.

  63. 65 Zachriel December 5, 2010 at 3:05 pm

    gpuccio: The neo-darwinian model (which is the only thing I am debating here) is RV + NS. RV is a random search.

    Not exactly. RV explores only within the ‘locality’ of its starting point. A random search works quite differently.

    Zachriel: Evolutionary search is limited in some fundamental ways. For instance, much of the search space may be unavailable to an evolutionary search.

    gpuccio: That is a strange concept (and IMO completely wrong). Do you mean that not any mutation may happen? Would you elaborate, please?

    This is fundamental. A mutation is a change to an existing, working replicator. Evolution won’t explore a complete rearrangement of a sequence, but only those that are ‘nearby’ in sequence space.

    A couple of examples (assuming only point-mutations): A replicator that reaches a fitness peak and stops. It never explores all the other possibilities, even if there is a giant peak right next door.

    Or, consider simple word evolution: Starting from a single letter word; o, or, ore, ere, were, ware, war, wan, man, mean, bean, bear, beer, bee. But no matter how long this evolution continues, it will never find zzzzz, even if that sequence is the bestest possible sequence in the entire universe. Do you see why?

    Indeed, even with recombination, evolution will never explore the vast majority of most landscapes. Evolution is severely constrained.
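
Zachriel’s word game can be run directly. With a toy dictionary (his chain of words plus the isolated ‘zzzzz’; a real experiment would use a full word list), a breadth-first walk that only ever steps to a viable neighbouring word reaches the whole chain but never ‘zzzzz’, even though ‘zzzzz’ is in the dictionary.

```python
from collections import deque

# Tiny stand-in "dictionary": Zachriel's chain plus the isolated "zzzzz".
WORDS = {"o", "or", "ore", "ere", "were", "ware", "war", "wan", "man",
         "mean", "bean", "bear", "beer", "bee", "zzzzz"}

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def neighbours(word):
    # All dictionary words reachable by one substitution, deletion,
    # or insertion: the 'nearby' points in sequence space.
    out = set()
    for i in range(len(word)):
        for c in ALPHABET:
            out.add(word[:i] + c + word[i + 1:])   # substitution
        out.add(word[:i] + word[i + 1:])           # deletion
    for i in range(len(word) + 1):
        for c in ALPHABET:
            out.add(word[:i] + c + word[i:])       # insertion
    return out & WORDS

def reachable(start):
    # Breadth-first walk: only ever step to a viable neighbouring word.
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in neighbours(queue.popleft()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

found = reachable("o")
# "zzzzz" is a valid word in this toy dictionary, but no chain of viable
# single-step changes leads to it, so the walk never finds it.
```

This is the constraint in miniature: the walk visits every word connected to the starting point by viable single steps, and nothing else, no matter how good the disconnected sequence might be.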

    gpuccio: And what do you think of the “quality” of the proteome search space?

    We know quite a bit. On the small scale, we know that a small percentage of random sequences replacing an essential domain provides a selectable benefit, which is quickly improved by simple point-mutation and selection (without even considering homologous recombination). And on a historical scale, we have entire families of proteins that have apparently evolved from common ancestors.

    Hayashi et al., Experimental Rugged Fitness Landscape in Protein Sequence Space, PLoS One 2006.

    gpuccio: Only if it satisfies the requirements I have detailed.

    That is incorrect. Your claim is that evolution generally is not capable of exploring large spaces. It’s possible (though still unsupported) that protein space is not amenable to evolutionary exploration, but not simply because it is large or vast. We can show that vast spaces are amenable to evolutionary exploration by testing the model against various vast spaces. We can then classify the various spaces, and reach some general conclusions.

    gpuccio: I said that “there is neither any practical evidence, nor any theoretical reason, to think that complex functions are the sum of incremental simple selectable variations.”

    Yes, that’s what you said. And you haven’t provided any support.

    gpuccio: Of what? Of molecular variation explained by the neo darwinian model? I can’t see how or why.

    The observations include developmental, fossil, and yes, molecular.

    gpuccio: I suppose my argument was: if an event is rare in protozoa, how much more rare will the same type of event be in mammals? Is that more clear now?

    It comes down to your statement as to whether evolution can proceed incrementally. There is ample evidence of this, developmental, fossil, and yes, molecular. Hayashi et al. is one such study.

  64. 66 Zachriel December 5, 2010 at 3:19 pm

    gpuccio: The ATP molecule is the information used by Szostak to select molecules which stick to ATP. The designer (Szostak) has chosen the ATP molecule because he wants to find molecules that bind ATP. That is a first important choice. ATP is not there naturally.

    Zachriel: ATP is not only a natural biological molecule, but ATP binding is an important biological function.

    gpuccio: That makes Szostak’s deliberate intelligent choice even more deliberate and intelligent.

    Heh. Of course it’s deliberate. They’re testing whether random sequences can bind to natural molecules. That’s the whole point of the experiment.

    gpuccio: Then he sees that they are really trivial molecules, with very loose binding to ATP: nobody would really be impressed by that finding.

    Zachriel: Except other biologists, of course.

    gpuccio: Maybe not even them.

    It was published by the journal Nature. It’s been cited hundreds of times by other scientific papers. Szostak won the Nobel Prize in Medicine. Of course it has impressed other biologists.

    gpuccio: Szostak declared an explicit aim for his study (exploring functional molecules spontaneously occurring in random libraries),

    Yes. It is a straightforward hypothesis from abiogenetics; random sequences with weak function that then evolve greater specificity.

    gpuccio: and then made a study using artificial evolution, which had nothing to do with his explicit aim,

    He selected for function, again, exactly what he should do to test the hypothesis.

  65. 67 gpuccio December 5, 2010 at 5:56 pm

    Zachriel:

    I have read the two papers. The first is more about polymorphisms in different regions in Cambodia, and their statistical association with chloroquine resistance: interesting, but not very informative for our discussion.

    The second is more pertinent. It seems, from its results, that the minimum amino acid change associated with chloroquine resistance is four. From the paper:

    “The minimum number of mutations previously reported in pfcrt of chloroquine-resistant parasites was four: C72S, K76T, N326D, and I356L (21).”

    and:

    “In the current study, we identified two novel allelic types of the pfcrt gene in chloroquine-resistant P. falciparum isolates collected from Morong, Philippines. These pfcrt allelic types include two novel mutations (A144T and L160Y) in combination with two or three mutated codons (K76T/N326D or C72S/K76T/N326D) from within the previously reported repertoire of 10 codons.”

    While this is not evidence that all four (or more) mutations are functionally necessary to the resistance (there can be other reasons for some associations, such as local haplotypes not necessarily connected to the function), I think we could for the moment accept the hypothesis of a 4 AA variation as the minimum mutation conferring chloroquine resistance. That’s about 17 bits.
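    The 17-bit estimate can be reproduced directly: if each of the four positions must take one specific residue out of the 20 standard amino acids, the functional information is log2(20^4) ≈ 17.3 bits. This is a rough sketch of the arithmetic only; it ignores codon structure and any tolerance at the mutated positions.

```python
import math

# Functional information of a 4-amino-acid transition, assuming each of the
# four positions must carry one specific residue out of 20 possibilities.
positions = 4
bits = positions * math.log2(20)  # log2(20^4)
print(round(bits, 1))  # 17.3
```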

    Indeed, as there is polymorphism, it would be more correct to apply Durston’s method to the family of existing variants, which would give a lower Fits value. But I will accept the 17 bits value, for the moment.

    Further studies, maybe relating the exact relationship of each mutation to the resistance, could contribute to fix better the measurement. But I doubt that, in the end, it will be significantly different from that.

    So, we can conclude that a 17 bits functional defense transition, in populations with huge numbers and very high reproductive rate, and under an extremely strong selective pressure, is a rare event.

    Can you agree on that?

  66. 68 gpuccio December 5, 2010 at 6:08 pm

    Petrushka:

    “OK, so your invisible friend has the attribute of consciousness. And you know this how?”

    I don’t “know” it. I “infer” it. I am a scientist, not a prophet. And I infer a designer for biological information from the abundant presence in it of dFSCI, and we know empirically that dFSCI is always the product of conscious intelligent beings. Therefore I infer a conscious intelligent designer.

    “Doesn’t it strike you as even a little bit odd that the entity you have invented has exactly the attributes — intelligence, knowledge, super powers — that are required to implement your imaginary program of intervention on a global scale. Maybe on a universal scale?”

    Not odd at all. As explained, it’s the attribute of dFSCI which leads to the inference of a designer who is able to design. Why should we infer a designer who cannot design?

    “How about an attribute that can be observed and tested? Even dark matter and dark energy have testable attributes, even though invisible.”

    I am not aware of any known “attribute” of dark energy. Dark energy (of whose nature we have no idea) is at present an inference of something not yet explained, an inference made necessary, even in that vague form, by specific observable attributes in its consequences (the acceleration in shift at a specific time). The situation is pretty much the same as for biological information, only we do know much more about the process of design than astrophysicists know about “dark energy”.

  67. 69 gpuccio December 5, 2010 at 6:15 pm

    Toronto:

    “If he were “like us”, we would have been able to detect his presence with processes that can detect intelligent beings with our attributes.”

    Design detection is such a process. A process you deny for mere ideology.

    “But he has attributes “unlike us”, that enable him to live for thousands of years and have the foresight to design for a future known to him but not us.”

    That’s just a reasonable hypothesis made from what we observe in his designed things. It’s the scenario which best explains them.

    Why do you find it so unacceptable that some conscious being may have better knowledge than we have? Racial pride?

    “If he instead used feedback from the environment, then he would use the same process evolution uses.

    How could you tell the difference then?”

    Darwinian evolution does not “use” anything. The feedback is algorithmic (based on strict necessity), and can happen only for naturally selectable function.

    In design, the feedback is cognized and interpreted according to purposes, and the designer can effect any type of artificial intelligent selection for any type of useful function.

    That is a big difference.

  68. 70 Toronto December 5, 2010 at 6:22 pm

    gpuccio: The situation is pretty much the same as for biological information, only we do know much more about the process of design than astrophysicists know about “dark energy”.

    We know about how “we” design. We don’t know how an “intelligent designer” of life would design, since ID proponents say it is not necessary to investigate the designer.

    His process, if it used feedback instead of foresight, would look a lot like an evolutionary process, basically, trial and error.

    How do you tell?

  69. 71 gpuccio December 5, 2010 at 6:24 pm

    Zachriel:

    “Your real misunderstanding is “environment programmed in the blind”. Random, chaotic landscapes are not amenable to evolutionary exploration. But the world is not mathematically chaotic, but exhibits a great deal of order, e.g. spatially.”

    Still more equivocation. “Environment programmed in the blind” does not mean “random, chaotic landscapes”. It just means that the person who programs the environment, with all its order of any possible kind, does not know that it will be used to test replicators. That’s the meaning of “blind”. That prevents the cognitive bias of “programming the environment to achieve the desired results”, either consciously or subconsciously. IOWs, it prevents the addition of hidden information, in any form. You cannot add information you don’t have.

    The person who programs the replicators, instead, is perfectly aware of the environment and can input as much information in the replicators as he likes. That’s not a problem, because the results will be judged from the additional information generated by the process of RV + NS. The original information in the replicators, therefore, can be as big as desired: only the functional variation will be considered.

    “Of course, most evolutionary algorithms don’t have a high degree of verisimilitude.”

    On that, at least, we agree.

    “We can, however, apply evolutionary algorithms to various environments to determine their limitations, and we can explore particulars of evolution, such as showing how complexity can evolve.”

    Evolve it may, but not through NS: unless you satisfy the requirements I have outlined.

  70. 72 gpuccio December 5, 2010 at 6:32 pm

    Zachriel:

    “Not exactly. RV explores only within the ‘locality’ of its starting point. A random search works quite differently.”

    As I have debated many times, if the starting point is unrelated to the final point (as in basic protein superfamilies), that is a random search.

    “This is fundamental. A mutation is a change to an existing, working replicator. Evolution won’t explore a complete rearrangement of a sequence, but only those that are ‘nearby’ in sequence space.”

    So how did protein superfamilies arise? Thousands of them?

    “A couple of examples (assuming only point-mutations): A replicator that reaches a fitness peak and stops. It never explores all the other possibilities, even if there is a giant peak right next door.”

    That’s exactly my point about rugged landscapes, where local peaks (or better holes) are an obstacle to fitness evolution, and not a helping step. One more motive not to believe the darwinian model.

    “Or, consider simple word evolution: Starting from a single letter word; o, or, ore, ere, were, ware, war, wan, man, mean, bean, bear, beer, bee. But no matter how long this evolution continues, it will never find zzzzz, even if that sequence is the bestest possible sequence in the entire universe. Do you see why?”

    Yes, because you have an English dictionary as an oracle. Speaking of added information…

    “Indeed, even with recombination, evolution will never explore the vast majority of most landscapes. Evolution is severely constrained.”

    That neo darwinian evolution is severely constrained is another point we agree upon. It is so constrained, that it certainly cannot generate new complex functions.

  71. 73 Toronto December 5, 2010 at 6:37 pm

    gpuccio: Design detection is such a process. A process you deny for mere ideology.

    I don’t deny design at all, you just haven’t made a case that it was used for biological creation.

    You might be able to make a case, if you investigated the designer, but your side refuses to.

    For instance, let’s say I wanted to know what guitar player “designed” song X.

    I’ve never heard the song before, but it’s bluesy, like Eric Clapton, and yet there is an Eastern influence in the choice of notes. George Harrison is a good guess, as he is a friend of Clapton’s and Ravi Shankar.

    I’ve come up with a plausible explanation because I did NOT leave the designer out of the equation, and it proved to be useful information.

    If you start investigating the designer, you will come to one of two conclusions, either he designs “like us” or he doesn’t.

    If he doesn’t, all the “like us” design implications you have made are not useful.

    Why are you reluctant to find out whether you are right?

  72. 74 Petrushka December 5, 2010 at 6:37 pm

    Dark matter and dark energy have quantifiable gravitational effects, and an absence of other kind of interactions with ordinary matter. They also have characteristic distributions in space.

    Scientists make inferences all the time, but in order to be science and not religion, they have to be bounded by constraints. Your inference is no different from animism. You assign ghosts and spirits as causes of unexplained phenomena.

    That in itself is not unscientific. What makes it outside science is the fact that your spirits have no measurable attributes. You assert intelligence, but assign no capabilities or limitations.

    You infer knowledge but assign no bounds.

    You have made up an ad hoc explanatory entity out of nothing. No observations, no evidence, no way of testing.

    The biggest problem is that mainstream science iteratively nibbles away at the gaps that inspire your inference, just as fossil evidence nibbles away at the claim of no intermediate fossils.

    Now we know that RNA bases can self assemble under realistic natural conditions.

    http://journalofcosmology.com/Abiogenesis106.html

  73. 75 gpuccio December 5, 2010 at 6:46 pm

    Zachriel:

    “And on a historical scale, we have entire families of proteins that have apparently evolved from common ancestors.”

    Thank you for the “apparently”.

    Indeed, if you are talking of basic superfamilies, it’s not apparent at all; indeed, quite the contrary.

    If you are referring to function differentiation in a specific family, which we have already discussed, I do accept the common descent inference, while I do believe we need more data to decide about explanatory causal models.

    And I have already pointed to some peculiarities of the rugged landscape model (existing function preserved at low level in a super structure, partially retrieved after a random modification of part of it) which make it not the best model for what you are trying to assert.

    “That is incorrect. Your claim is that evolution generally is not capable of exploring large spaces.”

    My claim is that evolution is not capable of exploring large spaces by RV and NS.

    “It’s possible (though still unsupported) that protein space is not amenable to evolutionary exploration, but not simply because it is large or vast.”

    Well, also because you (and darwinian evolution) cannot use a “functional proteins dictionary” as an oracle.

    “We can show that vast spaces are amenable to evolutionary exploration by testing the model against various vast spaces. We can then classify the various spaces, and reach some general conclusions.”

    Yes, but if you want to test that by RV + NS only, you have to satisfy the requirements I have outlined.

    “gpuccio: I said that “there is neither any practical evidence, nor any theoretical reason, to think that complex functions are the sum of incremental simple selectable variations.”

    Yes, that’s what you said. And you haven’t provided any support.”

    The best support is that you have not been able to provide any practical evidence, nor any theoretical reason. The burden of proof for your model is on you.

    “The observations include developmental, fossil, and yes, molecular.”

    Molecular what?

    “There is ample evidence of this, developmental, fossil, and yes, molecular. Hayashi et al. is one such study.”

    Fossils: already commented. Hayashi: already commented. Nothing there about steps leading to complex functions. Quite the opposite.

  74. 76 Petrushka December 5, 2010 at 6:56 pm

    I’m not sure what the problem is with an oracle. Are you denying that chemistry provides an objective and unvarying set of rules by which the characteristics of any molecule can be judged?

    How is that conceptually different from a dictionary?

  75. 77 gpuccio December 5, 2010 at 6:57 pm

    Zachriel:

    “Heh. Of course it’s deliberate. They’re testing whether random sequences can bind to natural molecules. That’s the whole point of the experiment.”

    So why not test that in a natural replicator with all possible natural molecules in a natural environment? Or just in a lab simulation with many natural molecules in a pseudo natural environment?

    Why test that with one specific molecule fixed in a lab tool, so that binding molecules can be isolated for further (incorrect) processing?

    That is artificial selection, followed by artificial amplification and variation, and then again by artificial selection. Whatever you can say.

    “It was published by the journal Nature. It’s been cited hundreds of times by other scientific papers. Szostak won the Nobel Prize in Medicine. Of course it has impressed other biologists.”

    The artificial final protein has impressed other biologists. The sense of my phrase was that the poor original proteins would have done that much less.

    And I do hope he won the Nobel prize for other merits.

    “Yes. It is a straightforward hypothesis from abiogenetics; random sequences with weak function that then evolve greater specificity.”

    Everybody knows that a weak functional sequence can evolve greater specificity by bottom up protein engineering. That is not even mentioned as an aim of the study. So my arguments are absolutely valid.

    “He selected for function, again, exactly what he should do to test the hypothesis.”

    No. Simply not true. The hypothesis was the occurrence of functional proteins in random libraries. Not their evolution through protein engineering. You are only losing any credibility, if you deny what is evident.

  76. 78 gpuccio December 5, 2010 at 7:07 pm

    Toronto:

    “I’ve come up with a plausible explanation because I did NOT leave the designer out of the equation, and it proved to be useful information.”

    I have never left the designer out of the equation. I have debated many times how research can help us understand many things about the designer. I have suggested many different possible scenarios, even here.

    My “like us” refers to two different types of things:

    1) Things where the designer must be “like us”, otherwise he would not be a designer: that includes conscious representations, cognition, purpose.

    2) Methods of implementation, cognitive styles, degree of knowledge, and other things: these are aspects where the designer “could” be similar to us in part, and different in other aspects. I have suggested that we start our inquiry from models built up from our human experience, and test those models against facts, both already known and future. Your example about the musical piece is a very good example of how I think we should proceed.

    “Why are you reluctant to find out whether you are right?”

    I am not reluctant at all. I have no greater desire. And I try to be aware of all possible advancements which can help us understand, and which can put to test my ideas. And I try every day to reason about my models against facts, and discussing with you is part of that.

    Why do you say I am reluctant? Only because I don’t agree with your points, when I don’t find them convincing? What is that, methodological “Torontism”?

  77. 79 gpuccio December 5, 2010 at 7:18 pm

    “Dark matter and dark energy have quantifiable gravitational effects, and an absence of other kind of interactions with ordinary matter. They also have characteristic distributions in space.”

    I don’t believe that is true of dark energy.

    Of course, anyway, dark energy has quantifiable effects (the shift acceleration). But that’s practically all we know. Indeed, the concept of dark energy has been created because scientists do think that an observable effect must have a cause (at least physicists, thank God, still believe that).

    Design has quantifiable effect too: dFSCI. And we do know something about the process of design.

    “scientists make inferences all the time, but in order to be science and not religion, they have to be bounded by constraints. Your inference is no different from animism. You assign ghost and spirits as causes of unexplained phenomena.”

    So, what kind of methodological “ism” is this, now?

    “That in itself is not unscientific. what makes it outside science is the fact that your spirits have no measurable attributes. You assert intelligence, but assign no capabilities or limitations.”

    I have no reason to “assign” anything a priori (except the fundamental properties of a designer, see previous post). I try to infer capabilities and limitations from facts (the observable designed things).

    “You have made up an ad hoc explanatory entity out of nothing. No observations, no evidence, no way of testing.”

    dFSCI is observed evidence. As for testing, I have argued many times that the patterns we are going to discover about natural history (and, I hope, in a short time) will be a severe test for both ID and the neo-darwinian model. Or any other model which may come.

    “The biggest problem is that mainstream science iteratively nibbles away at the gaps that inspire your inference, just as fossil evidence nibbles away at the claim of no intermediate fossils.”

    But my inference is not about gaps, but about explanatory powers. Not about lack of detail, but about lack of credibility of the whole system.

  78. 80 Toronto December 5, 2010 at 7:30 pm

    gpuccio: What is that, methodological “Torontism”?

    I like that, my very own ideology!

    Yes, Torontism is “To search for an answer wherever the evidence may lead, even if it proves our most deeply held prior assumptions wrong.”

    Toronto: “Why are you reluctant to find out whether you are right?”
    gpuccio: I am not reluctant at all. I have no greater desire.

    Why would you not want to investigate your designer since he is central to your theory?

    Without him, without the powers you are granting him, your whole theory disappears.

    It would be like investigating a death to determine if it was possibly murder, but not caring about interviewing any suspects.

    You have a possible suspect for biological function, but you refuse to investigate him to see if you are right.

    Show me one other branch of science that refuses to investigate a part of their own theory.

  79. 81 gpuccio December 5, 2010 at 7:30 pm

    Petrushka:

    “I’m not sure what the problem is with an oracle. Are you denying that chemistry provides an objective and unvarying set of rules by which the characteristics of any molecule can be judged?

    How is that conceptually different from a dictionary?”

    Completely different. Obviously.

    The “objective and unvarying set of rules” knows nothing, and can provide no information about which sequence will provide a function. That knowledge is not in the simple laws of chemistry.

    It can certainly be “computed” from the laws of chemistry through intelligent algorithmic computation, but that computation is so complex that, at present, top-down protein engineering is still very far from such results.

    A “dictionary” of protein sequences can however be easily built, from what we observe in living beings. We can take known functional sequences and arrange them in a dictionary (protein databases are more or less that).

    Then you can build an “evolutionary simulation” where you vary random sequences by random variation, and expand and fix any sequence which becomes more similar to one existing functional protein in your “dictionary” (which is now your oracle). You will surely find a lot of perfect functional sequences in a short time. Oh, the power of evolutionary algorithms!

    You know, it is rather easy to find a complex target by random variation and intelligent selection, if you already know the target. Easy and useless.

    That is a dictionary. That is the most obvious type of oracle (but not the only one).

    If you cannot understand this simple difference, I suggest you join Dawkins and his “weasel” club.
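    The “dictionary oracle” simulation described above can be sketched in a few lines (the target word, alphabet, and mutation parameters are illustrative assumptions). The oracle rewards any candidate that is closer to an already-known target, so the search converges quickly; which is precisely the objection being raised: the target was known in advance.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

TARGET = "methinks"  # stand-in for one "known functional sequence" in the dictionary
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def similarity(seq):
    """The oracle: compares a candidate against the already-known target."""
    return sum(a == b for a, b in zip(seq, TARGET))

def evolve(rate=0.05, pop=50):
    """Random variation plus selection by the dictionary oracle."""
    seq = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while seq != TARGET:
        offspring = ["".join(random.choice(ALPHABET) if random.random() < rate else c
                             for c in seq) for _ in range(pop)]
        # keep the parent as well, so fitness never decreases
        seq = max(offspring + [seq], key=similarity)
        generations += 1
    return generations

print("generations:", evolve())  # a random 8-letter start hits the target quickly
```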

  80. 82 gpuccio December 5, 2010 at 7:36 pm

    Toronto:

    “Why would you not want to investigate your designer since he is central to your theory?”

    Whoever said I don’t want? Must I still defend myself from things I have never said?

    “Without him, without the powers you are granting him, your whole theory disappears.”

    And so? My theory is “about” a designer. Of course it disappears if there is no designer. What verbal tricks are these?

    “It would be like investigating a death to determine if it was possibly murder, but not caring about interviewing any suspects.”

    I am ready to interview anybody. I am a fan of mysteries.

    “You have a possible suspect for biological fuction, but you refuse to investigate him to see if you are right.”

    In your fantasy, maybe. I have never refused such a thing.

    “Show me one other branch of science that refuses to investigate a part of their own theory.”

    Well, it would be easy to say that the neo darwinian model has always refused (and still refuses) to seriously investigate the probabilistic credibility of the random part of the model itself. It would be easy…

    And yes, I do say it.

  81. 83 Toronto December 5, 2010 at 7:39 pm

    gpuccio: But my inference is not about gaps, but about explanatory powers.

    You are drawing your inference from improbability, not explanatory powers.

    Show me the “specific powers” of the designer and demonstrate how you came to that conclusion.

    If you can’t, it’s not science.

    That’s your claim about evolution, that its powers have not been demonstrated to your satisfaction.

    I’m making the same demand. Show me your designer does have the powers to do what you are suggesting.

  82. 84 Petrushka December 5, 2010 at 7:59 pm

    http://amesteam.arc.nasa.gov/Research/proteins.html

    All your concerns are shared by mainstream science. The difference is that if you look for regular processes you find them.

    What is the heuristic value of assuming an invisible designer?

  83. 85 Petrushka December 5, 2010 at 8:04 pm

    The invisible Designer is not “like us.”

    We know, for example, that when humans engineer living things they violate the nested hierarchy, taking genes from one kingdom and inserting them in others.

    Harnessing microbes to make insulin, for example. Or copying a natural pesticide gene from one plant family to another.

    These are the kinds of fingerprints that would allow you to infer intervention in an otherwise incremental process.

  84. 86 Petrushka December 5, 2010 at 8:27 pm

    gpuccio: And yes, I do say it.

    And you would either be incredibly ignorant, or lying.

  85. 87 Zachriel December 5, 2010 at 9:10 pm

    gpuccio: So, we can conclude that a 17 bits functional defense transition, in populations with huge numbers and very high reproductive rate, and under an extremely strong selective pressure, is a rare event.

    Calculating the “functional complexity” of a sequence usually includes the entire length of the sequence.

    According to Behe, the odds of even two requisite mutations are something on the order of 10^-20, so four mutations would be 10^-40, or virtually impossible. But it’s a nonsense calculation. It may only take two mutations, but the other two or more may simply follow as easily selectable, single-step mutations. When we look at the natural population, we may not see the intermediates because it occurs rapidly, but only the final result.
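    The difference between the two readings can be made explicit in a short sketch (the per-mutation rate below is an illustrative assumption, chosen so that two requisite mutations give Behe’s ~10^-20): if all four mutations must appear simultaneously, the probabilities multiply; if each step is individually selectable and fixes before the next is needed, the expected waiting times merely add.

```python
# Rates here are illustrative assumptions for the arithmetic, not measured values.
per_mutation = 1e-10  # assumed chance of one specific point mutation per replication

# All four mutations in a single replication: probabilities multiply.
simultaneous = per_mutation ** 4
print(simultaneous)  # ~1e-40, effectively never

# Each step individually selectable: each waits for only one mutation,
# so the expected numbers of replications add instead of multiplying.
sequential_replications = 4 * (1 / per_mutation)
print(sequential_replications)  # ~4e10 replications, trivial for microbial populations
```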

  86. 88 Zachriel December 5, 2010 at 9:19 pm

    gpuccio: It just means that the person who programs the environment, with all its order of any possible kind, does not know that it will be used to test replicators.

    You do realize that’s rather silly. You are going to instruct someone to create an ‘environment’ without giving them any idea what that means. And if they come back with a chaotic function, that wouldn’t be a meaningful test for our purposes. A more useful test would be to study many different types of landscapes in order to categorize those which are amenable to evolutionary exploration and which are not.

    gpuccio: The person who programs the replicators, instead, is perfectly aware of the environment and can input as much information in the replicators as he likes.

    Not sure why that would be so. The replicators of all things should be blind to the specifications of the environment.

    gpuccio: Evolve it may, but not through NS: unless you satisfy the requirements I have outlined.

    But your conditions don’t make sense. Most landscapes are not amenable to evolutionary search. Most landscapes are chaotic.

  87. 89 Zachriel December 5, 2010 at 9:27 pm

    gpuccio: As I have debated many times, if the starting point is unrelated to the final point (as in basic protein superfamilies), that is a random search.

    Absolutely not. And because you don’t understand this fundamental fact about evolutionary algorithms, it should call into question your understanding of this entire discussion.

    A random search, given sufficient trials, will always find a target on a finite landscape, even if that landscape is chaotic. This is not true of an evolutionary algorithm. You need to grasp this point.

    Zachriel: Do you see why {word evolution will never find “zzzzz”}?

    gpuccio: Yes, because you have an english dictionary as an oracle. Speaking of added information…

    That doesn’t answer the question. A random search, given sufficient trials, will *always* find the target. But incremental word evolution will *never* find it. That’s because there are no valid precursors to “zzzzz”. There are vast regions of sequence space that will never be explored by evolution.

    Try to grasp this essential point.

  88. 90 Petrushka December 5, 2010 at 9:57 pm

    The whole concept of calculating the probability of the current configuration is flawed.

    You can’t say a configuration could not have evolved incrementally unless you have the full history and can point to a “hopeful monster” somewhere in the lineage.

    Attempts to assert such monsters include the flagellum, but continuing research reveals that there are many different flagella having many different subsets of the One True Flagellum. There are also bits and pieces of flagellar code distributed among critters that are not motile. There’s even a non-functional plasmid having the code for a flagellum.

  89. 91 gpuccio December 6, 2010 at 8:16 pm

    Well, just to save time, I think I will skip the ideological observations about invisible beings, which anyway are quite repetitive and don’t seem to be productive.

    I will try instead to concentrate on the technical issues.

  90. 92 gpuccio December 6, 2010 at 8:27 pm

    Petrushka:

    “We know, for example, that when humans engineer living things they violate the nested hierarchy”

    It seems that the designer did the same, when he conceived HGT.

    “Harnessing microbes to make insulin, for example. Or copying a natural pesticide gene from one plant family to another.”

    Different purposes. The main purpose of the designer was evidently to implement life and ever new complex functions, rather than producing commercial drugs.

    “All your concerns are shared by mainstream science.”

    I am happy about that. I believe in science, be it mainstream or not. Especially when it shares my concerns :).

    “And you would either be incredibly ignorant, or lying.”

    Why not both?

    “You can’t say a configuration could not have evolved incrementally unless you have the full history and can point to a “hopeful monster” somewhere in the lineage.”

    It’s exactly the opposite. You can’t say a configuration could have evolved incrementally unless you have the history, or at least a valid detailed model. Otherwise, it’s just myth and not science.

    “These are the kinds of fingerprints that would allow you to infer intervention in an otherwise incremental process.”

    If the incremental process were real.

  91. 93 gpuccio December 6, 2010 at 8:37 pm

    Zachriel:

    “Calculating the ‘functional complexity’ of a sequence usually includes the entire length of the sequence.”

    Not in a transition, where the rest of the sequence remains the same for functional constraints.

    “According to Behe, the odds of even two requisite mutations are something on the order of 10^-20, so four mutations would be 10^-40, or virtually impossible.”

    Behe’s model in TEOE is empirical; the calculations are given only as possible interpretations. The empirical fact is that chloroquine resistance is rare, while single-mutation resistance is common.

    Behe does hypothesize that chloroquine resistance is due to a two-AA mutation. Maybe more data are known now, and we can certainly review the evidence.

    According to the paper you linked, known mutations present a minimum of 4 AA variations and a maximum of 9 mutations.

    The situation, however, is complex, and requires further data. Some of the mutations could obviously be unrelated to the resistance. There are different polymorphisms in different regions, which could have nothing to do with the resistance.

    As I have said, I can accept for the moment a tentative value of 17 bits for the minimum functional transition, but we will see.

    Anyway, we are always dealing with rather simple transitions.

  92. 94 gpuccio December 6, 2010 at 8:47 pm

    Zachriel:

    “You are going to instruct someone to create an ‘environment’ without giving them any idea what that means.”

    No. The environment must obviously have its reasons and functionality. That’s why I had suggested existing operating systems. The only important point is that the environment must not be designed for the purpose of investigating the evolution of functions in replicators. That would obviously open the gates to cognitive bias.

    And it is certainly possible to investigate different environments, but certainly not selecting them a priori to have more chances to get a result. That, again, would be cognitive bias.

    “Not sure why that would be so. The replicators of all things should be blind to the specifications of the environment.”

    No. The information in the replicators is the equivalent of the information in natural replicators before they evolve. After all, we are not investigating OOL here, but only neo darwinian evolution (lucky you!).

    In neo darwinian evolution, complex functional replicators which can use the resources of the environment already exist. They just have to evolve new, better complex functions.

    “But your conditions don’t make sense. Most landscapes are not amenable to evolutionary search. Most landscapes are chaotic.”

    Again, I was in no way suggesting chaotic landscapes. I was suggesting functional, ordered landscapes, not random ones. They must be random only in respect to the evolution of replicators, but for the rest they can and must obey specific laws: for instance, in an operating system there must obviously be rules to copy information, write it in different locations, use the RAM, start programs, and so on. Nothing chaotic there. But the order has not been planned to make replicators “evolve”, but for independent purposes. That is my concept of a “blind” landscape.

  93. 95 gpuccio December 6, 2010 at 9:00 pm

    Zachriel:

    “And because you don’t understand this fundamental fact about evolutionary algorithms, it should call into question your understanding of this entire discussion.”

    Maybe I understand it better than you think.

    “A random search, given sufficient trials, will always find a target on a finite landscape, even if that landscape is chaotic.”

    I suppose you are including in those “sufficient trials” numbers like 10^10000 or more?

    “This is not true of an evolutionary algorithm. You need to grasp this point.”

    I grasp it very well. I have made that point myself, writing about the rugged landscape paper.

    Indeed, this point is a severe limit of evolution. You are just introducing the effect of Negative Selection.

    Just to be clear, evolution “can” reach any target, given sufficient trials (with numbers like above). The effect of Negative Selection will only be to eliminate the targets which are incompatible with life, or reproduction.

    So yes, many random walks become absolutely unlikely (but not impossible: random mechanisms like big insertions, deletions, inversions, frameshift mutations, and so on, can certainly reach distant points of the search space in one event: randomly!).

    That only means that most “incremental” walks towards different functional structures are almost impossible.

    Moreover, if some functional selectable peak is reached (even if very limited in its functionality), Negative Selection will act “against” further transitions, in a rugged landscape, making the finding of highly functional peaks even more difficult than by mere chance.

    And anyway, RV always acts independently from NS. IOWs, of all the mutations which happen, the vast majority can be negative. They will be eliminated, but they happen. They are events. They are part of the probability resources. They are random walks, even if many of them are stopped at the beginning (but certainly not all: there are always neutral mutations, or simply negative mutations which are not so serious that they will be eliminated by NS. Human genetic diseases are an example of that).
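    gpuccio’s claim that negative selection traps a population on an already-found peak can be illustrated on a toy one-dimensional landscape (the fitness values are arbitrary, chosen only to create one local and one global peak; this is a sketch of the rugged-landscape idea, not a biological model):

```python
# Toy 1-D landscape: index = genotype, value = fitness (arbitrary numbers).
landscape = [1, 3, 5, 4, 2, 6, 9, 7]

def hill_climb(start):
    """Accept only fitness-increasing single steps: negative selection
    eliminates every downhill move."""
    pos = start
    while True:
        options = [p for p in (pos - 1, pos + 1)
                   if 0 <= p < len(landscape) and landscape[p] > landscape[pos]]
        if not options:
            return pos              # stuck on a (possibly local) peak
        pos = max(options, key=lambda p: landscape[p])

# Starting at index 0, selection climbs to the local peak at index 2
# (fitness 5) and stops; it never crosses the valley to the global
# peak at index 6 (fitness 9).
print(hill_climb(0))   # 2
```

    A random walk unconstrained by selection would eventually cross the valley; that is exactly the trade-off both sides are arguing about.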

  94. 96 gpuccio December 6, 2010 at 9:04 pm

    Zachriel:

    “But incremental word evolution will *never* find it.”

    Without a dictionary, incremental word evolution will find nothing important. Are you just exploiting the difference between “never” and “practically never”? In empirical science, you know, that is no difference at all.

    “There are vast regions of sequence space that will never be explored by evolution.”

    Yes. And in them, most of the functional spaces.

  95. 97 Zachriel December 6, 2010 at 9:09 pm

    gpuccio: Not in a transition, where the rest of the sequence remains the same for functional constraints.

    Perhaps you could point to a resource on that.

    In any case, if ‘fits’ can increase 17 bits, then maybe they can increase another 17 bits.

    gpuccio: The environment must obviously have its reasons and functionality.

    Reasons and functionality? It’s hard to even know what that means. In biology, the environment includes the sun, ground, water, sky, predators, prey, etc.

    gpuccio: That’s why I had suggested existing operating systems.

    Operating systems are designed to avoid errant operation, including evolving replicators. Your suggestion makes no sense.

    gpuccio: And it is certainly possible to investigate different environments, but certainly not selecting them a priori to have more chances to get a result.

    The point was to categorize environments to find attributes that make them amenable to evolutionary exploration. Not too surprisingly, it turns out that local structure is an important quality.

    Better yet, you could design a simulacrum with food resources and replicators with primitive, uncoordinated motility.

    gpuccio: The information in the replicators is the equivalent of the information in natural replicators before they evolve.

    Yes, the primitive replicators have to be able to acquire the resources necessary for replication.

    gpuccio: Again, I was in no way suggesting chaotic landscapes. I was suggesting functional, ordered landscapes, not random ones.

    What you were suggesting, that people design an ‘environment’ without knowing why or what it is supposed to represent, is nonsensical. They may very well propose a chaotic environment. It wouldn’t tell us anything, that is, unless we try various ‘environments’ in order to categorize them.

  96. 98 Toronto December 6, 2010 at 9:31 pm

    gpuccio: Well, just to save time, I think I will skip the ideological observations about invisible beings, which anyway are quite repetitive and don’t seem to be productive.

    I will try instead to concentrate on the technical issues.

    This invisible being is a key part of your process, ID. That is what the ‘I’ refers to.

    Are you now saying that intelligence is NOT a “technical issue” for your side?

  97. 99 Zachriel December 6, 2010 at 9:42 pm

    gpuccio: I suppose you are including in those “sufficient trials” numbers like 10^10000 or more?

    It depends on the size of the search space. There are about 10^7 five letter sequences.

    gpuccio: I grasp it very well.

    If you did, then you wouldn’t have said “if the starting point is unrelated to the final point (as in basic protein superfamilies), that is a random search.” Evolutionary algorithms simply do not perform like random searches. You repeat the error.

    gpuccio: Just to be clear, evolution “can” reach any target, given sufficient trials (with numbers like above).

    Stepwise evolution will not explore the vast majority of the vast majority of landscapes, even if we include recombination and other such mechanisms.

    gpuccio: So yes, many random walks become absolutely unlikely (but not impossible: random mechanisms like big insertions, deletions, inversions, frameshift mutations, and so on, can certainly reach distant points of the search space in one event: randomly!).

    Unless you mean to include complete randomization of the sequence, which is not what we mean by evolution, then no. Though the mechanisms you mention can move the population off of local peaks, they still will not explore the vast majority of the vast majority of landscapes.

    Evolutionary algorithms behave very differently from random searches. This is trivial to show. Try to think through a couple of examples; a population crowded around a local fitness peak, or one bouncing around the entire landscape randomly.

    gpuccio: Without a dictionary, incremental word evolution will find nothing important.

    Word evolution is *defined* by only allowing valid words to enter a population; o, or, ore, ere, mere, mire, mired. A random search will find words much more slowly than incremental word evolution.

    gpuccio: Are you just exploiting the difference between “never” and “practically never”? In empirical science, you know, that is no difference of sort.

    In empirical terms, a random search and an evolutionary search will respond differently and reach a target at different rates. The difference is determined by the quality of the landscape. And the vast majority of the vast majority of landscapes remain forever outside the reach of incremental evolution.
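    The chain Zachriel cites (o, or, ore, ere, mere, mire, mired) can be checked mechanically; a minimal sketch with a hand-rolled Levenshtein distance confirms that every step is a single edit between valid words:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

CHAIN = ["o", "or", "ore", "ere", "mere", "mire", "mired"]

# Each adjacent pair differs by exactly one edit -- the stepwise path
# that defines "incremental word evolution" in this discussion.
steps = [edit_distance(a, b) for a, b in zip(CHAIN, CHAIN[1:])]
print(steps)   # [1, 1, 1, 1, 1, 1]
```

    Whether protein space offers comparably dense chains of functional intermediates is, of course, the empirical question the two sides dispute.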

  98. 100 Petrushka December 6, 2010 at 10:28 pm

    “There are vast regions of sequence space that will never be explored by evolution.”

    Yes. And in them, most of the functional spaces.

    possibly. But definitely not all possible spaces.

  99. 101 Petrushka December 6, 2010 at 11:02 pm

    Evolution only tests the space in the immediate vicinity of what’s working now. You claim there are gaps in the historical record that could not be bridged incrementally.

    But you suffer from the same lack of historical detail that mainstream biologists encounter.

    So mainstream biologists extrapolate known and observable processes, and you infer an invisible buddy.

    It worked with the fossil record, so why not try it again?

    dFSCI – élan vital. The magic stuff that makes living things different from mere chemistry.

  100. 102 gpuccio December 8, 2010 at 8:27 am

    Zachriel:

    “Perhaps you could point to a resource on that.”

    Durston, again.

    “In any case, if ‘fits’ can increase 17 bits, then maybe they can increase another 17 bits.”

    You, of all people, will certainly understand the difference between two separate increases of 17 bits (for two unrelated functional changes) and a single increase of 34 bits (for one functional change).

    “Reasons and functionality? It’s hard to even know what that means. In biology, the environment includes the sun, ground, water, sky, predators, prey, etc.”

    Are you saying that the natural environment is chaotic?

    You must make up your mind. You cannot have it both ways. Either you complain that a computer environment could be chaotic, or you complain that a computer environment has reasons and functionality.

    You can certainly have a balanced middle way. Just take an operating system, and have a programmer introduce some chaotic elements in it, always working blind.

    “Operating systems are designed to avoid errant operation, including evolving replicators. Your suggestion makes no sense.”

    No. Operating systems may be designed that way, but replicators which exploit them are programmed all the time: they are called viruses, spyware, and so on.

    That they may evolve or not is exactly the issue. Operating systems are not programmed, as far as I know, to prevent the “evolution” of a replicator (at most, its replication). So, if the programmer of the replicators just introduces in their code the variation mechanism, there is no reason why an already successful replicator in an operating system should not “evolve”.

    I do believe my suggestion makes much sense. So much so, that all of you, perfectly aware that such a fair computer model would easily prove your model of evolution of complex functions by RV and NS false, are fighting really hard to find reasons to state that it cannot work.

    Well, I agree with you. It cannot work. But not in the sense you imply. It cannot work because your model is wrong, and complex functions can never evolve by RV + NS. Not in computers, not in any other situation. Least of all in the natural environment.

    ” Not too surprisingly, it turns out that local structure is an important quality.”

    If you try really hard, you can certainly select some specific environment with enough added information in its structure that it can bring about, in reasonable time, some specific expected function by RV. Maybe even partially complex, if the selection is not really true NS. That is called intelligent design.

    Or do you suppose that the big bang conditions selected an environment especially “amenable to evolutionary exploration”? Are you a fan of TE? Or of the multiverse / anthropic principle game?

    “What you were suggesting, that people design an ‘environment’ without knowing why or what it is supposed to represent, is nonsensical. They may very well propose a chaotic environment. ”

    That’s not a problem. You can specify that you don’t want a chaotic environment. You can even give generic guidelines on the environment to be programmed. Guidelines which can be reviewed by ID experts (shall we call it “un-peer review”?) to be sure that you have not passed any relevant information about the purpose of the test.

  101. 103 gpuccio December 8, 2010 at 8:29 am

    Toronto:

    “Are you now saying that intelligence is NOT a “technical issue” for your side?”

    No. I am just saying that your objections about this issue are not technical, and that they are ideological and repetitive. That’s why I have lost interest.

    No offense intended.

  102. 104 gpuccio December 8, 2010 at 8:53 am

    Zachriel:

    “It depends on the size of the search space. There are about 10^7 five letter sequences.”

    And I suppose you can easily explain how you can go from 5-AA peptides to a 150-AA protein by RV and NS, can’t you?

    “You repeat the error.”

    No error. The RV part in the neo darwinian model “is” a random search, provided it starts from an unrelated state. The NS part, as I have always clearly stated, is a necessity mechanism which interferes with the random walk. I have always kept the two concepts well separated (unlike you, who continuously try to conflate them). The RV part must be evaluated in relation to its probabilistic resources, while the necessity part must be evaluated in relation to credible, explicit algorithms.

    “Stepwise evolution will not explore the vast majority of the vast majority of landscapes, even if we include recombination and other such mechanisms.”

    That is only a limit imposed on the search by the necessity algorithm (negative NS). It certainly does not increase its probabilistic resources. Each variation event is anyway random, and it can move in any direction. And, as you well know, there are random events which can change a whole sequence in one event (such as your cherished frameshift mutations). Therefore, no part of the search space is really “out of reach” for the neo darwinian model. As usual, it’s just a question of probabilities…

    And you are not considering an important “pet toy” of the neo darwinian model: RV is supposed to happen often in duplicated, inactive genes. That rules out negative NS, and opens a completely free random walk to RV. I am amazed that you did not consider that, it being one of the milestones of darwinian thought. And there are neutral mutations too, and genetic drift, which can certainly help allow a free random walk.

    Can you grasp that?

    “Unless you mean to include complete randomization of the sequence, which is not what we mean by evolution, then no. Though the mechanisms you mention can move the population off of local peaks, they still will not explore the vast majority of the vast majority of landscapes.”

    If well trained by evolutionists…

    “Evolutionary algorithms behave very differently from random searches. This is trivial to show. Try to think through a couple of examples; a population crowded around a local fitness peak, or one bouncing around the entire landscape randomly.”

    You still linger on the effects of negative selection. I have never denied them. Their only influence is to keep most searches inside an already found functional island. Nothing else. That is an important part of my model, as I have tried to detail in my posts about the “big bang theory” of protein evolution.

    That does not mean that darwinian evolution is not a random walk. It is a random walk limited (to a point) by negative NS.

    Only positive NS, however, can add to the probabilistic resources, by the expansion phase. But positive NS is exactly dependent on already established naturally selectable functions.

    So, we are again at the same points.

    “Word evolution is *defined* by only allowing valid words to enter a population; o, or, ore, ere, mere, mire, mired.”

    IOWs, it needs a dictionary. Are you saying that a dictionary is not “added information”?

    “A random search will find words much more slowly than incremental word evolution.”

    Or never, if they are complex enough. QED.

    “The difference is determined by the quality of the landscape. And the vast majority of the vast majority of landscapes remain forever outside the reach of incremental evolution.”

    IOWs, only an intelligently designed landscape, with lots of added information, can be within the “reach of incremental evolution” (if it exists, which I doubt). QED.

    Have you ever read Dembski and Marks’ points about the “search for a search” and its cost?

  103. 105 gpuccio December 8, 2010 at 9:02 am

    Petrushka:

    “possibly. But definitely not all possible spaces.”

    All functional spaces unrelated to an existing function, at least according to Zachriel.

    “Evolution only test the space in the immediate vicinity of what’s working now. You claim there are gaps in the historical record that could not be bridged incrementally.”

    I also claim that there is no logical reason, and no empirical example, of complex functions being attainable by simple naturally selectable functional steps. Not only in biology, but also in informatics, or in any other field. That has nothing to do with history.

    “dFSCI – élan vital. The magic stuff that makes living things different from mere chemistry.”

    You win another prize for the most out-of-place metaphor of the year.

    dFSCI is an informational property. It has nothing to do with life. It is found in many non-living things (e.g., a computer program). It is just empirically found to be the product of conscious intelligent beings.

    The “élan vital”, whether existing or not, is a completely different concept. As you should know, if you have any historical, scientific and philosophical culture.

  104. 106 Toronto December 8, 2010 at 3:23 pm

    gpuccio: No. I am just saying that your objections about this issue are not technical, and that they are ideological and repetitive. That’s why I have lost interest.

    No offense intended.

    1) I and everyone you are debating want to see your process.

    2) Burn all evolutionary data.

    3) Now, present your technical process.

    4) After you can do that, I’ll present my objections.

    5) If you can’t, you have nothing worth presenting to students.

  105. 107 Petrushka December 8, 2010 at 3:31 pm

    IOWs, it needs a dictionary. Are you saying that a dictionary is not “added information”?

    ______________________________________

    Are you saying the environment in which an organism exists is not a dictionary?
    _______________________________

    “It cannot work because your model is wrong, and complex functions can never evolve by RV + NS. Not in computers, not in any other situation. Least of all in the natural environment.”

    ________________________________

    Assuming your conclusion.

    The guru of “The Edge” cited a few knockout experiments and concluded that any change to flagellar genes would render it completely non-functional. Hence it could not have evolved incrementally.

    But nature itself seems to have done a more comprehensive set of experiments, and indeed there are many functional variations and subsets of the flagellum, and many non-motile bacteria using subsets for other purposes.

    This kind of ignorance was once applied to the blood clotting system. And to CQ resistance. But of course it has taken malaria parasites just a few decades to develop multi-drug resistance. Not only that, but some have developed compensating genes to overcome any adverse effects of the initial resistance mutations. Resistant parasites are now stable in the absence of CQ use.

    The whole approach is flawed. Obviously there are areas of the landscape amenable to traversal by incremental change.

  106. 108 Petrushka December 8, 2010 at 4:33 pm

    “It cannot work because your model is wrong, and complex functions can never evolve by RV + NS. Not in computers, not in any other situation. Least of all in the natural environment.”
    ________________________________

    Chemistry itself is a dictionary that defines every compound that can exist, and the relative stability of every compound.

    The properties of chemistry define what compounds can self replicate and under what conditions.

    What you seem to be saying is that molecules have more information than atoms, and that self-replicating molecules have more information than non-replicators.

    Which is why I say that information has subsumed the role formerly played by élan vital. A placeholder for the mysterious invisible stuff that makes life different from non-life.

  107. 109 Petrushka December 8, 2010 at 5:23 pm

    1) I and everyone you are debating want to see your process.
    __________________________________________________

    I think it’s time to realize that gp can claim the existence of any entity or process required to build his world, without having to have supporting evidence.

    He is allowed to assert that living things can’t evolve incrementally, without citing reasons.

    His foundation is an experiment by Douglas Axe, the biological equivalent of building a house that falls down and claiming that wood doesn’t have sufficient strength to support houses.

    It is no wonder that he hates and fears arguments from historical precedent. He hates any notice that the goalposts keep shifting.

  108. 110 Petrushka December 8, 2010 at 5:31 pm

    When I was in high school I had an English class taught by a very bright man. He was born in rural Mississippi and somehow lifted himself out of that environment. He graduated from Columbia University at age 17. He went on to become the headmaster of the school (a private school).

    One day a discussion started about why he hadn’t studied science. He said science was a doomed field: within ten years, everything that could be learned by empiricism would be known.

    Dembski has made similar claims. We are well within one of his ten-year periods. GP hasn’t set a timeline, but it’s obvious he thinks the future will see biology hitting a brick wall, and all the remaining unknowns will be attributable to demiurges.

    My discussion with my English teacher was in 1960.

  109. 111 Petrushka December 8, 2010 at 5:32 pm

    Need preview or edit button.

  110. 112 Zachriel December 8, 2010 at 5:42 pm

    gpuccio: Durston, again.

    Durston et al., Measuring the functional sequence complexity of proteins, Theoretical Biology and Medical Modelling 2007.

    Durston’s measure depends on choice of function. An enzyme may have many functions, and its sequence constraints may vary, depending on the function involved. More particularly, when evolving a new function, the new enzyme may have different sequence constraints than the old function, but you assume it will have the same functional constraints. This is an empirical question. But let’s accept 17 ‘fits’ for now.

    gpuccio: You, of all people, will certainly understand the difference between increasing twice of 17 bits (for two unrelated functional changes), and increasing of 34 bits (for one functional change).

    Yes, if they are selectable, then they are additive, not multiplicative. If it takes so many generations for the first adaptation to fix in the population, and a second comparable adaptation then becomes available, it will fix in a comparable number of generations. Additive.
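    Zachriel’s additive-versus-multiplicative distinction can be put in numbers (all rates below are illustrative assumptions introduced for this sketch, not measured values from the discussion):

```python
# Illustrative assumption: a 17-bit adaptation arises with probability
# 2**-17 per replication event; N replication events occur per generation.
p17 = 2.0 ** -17
N = 1000.0                       # hypothetical events per generation

# Individually selectable: expected waiting times ADD.
wait_each = 1 / (p17 * N)        # generations to find one adaptation
sequential = 2 * wait_each       # two adaptations, fixed one after the other

# Only jointly selectable: one 34-bit event, probabilities MULTIPLY.
p34 = 2.0 ** -34
joint = 1 / (p34 * N)

# The joint route takes 2**34 / 2**18 = 2**16 = 65536 times longer.
print(round(joint / sequential))   # 65536
```

    The ratio, not the particular rates, is the point: requiring both changes before either is selectable squares the improbability instead of doubling the wait.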

    Zachriel: “Reasons and functionality?” It’s hard to even know what that means. In biology, the environment includes the sun, ground, water, sky, predators, prey, etc.

    gpuccio: Are you saying that the natural environment is chaotic?

    The opposite of functional is not chaotic.

    gpuccio: Operating systems may be designed that way, bit replicators which exploit them are programmed all the time: they are called viruses, spyware, and so on.

    You are seriously confused on what constitutes a valid model. A model runs independently of the substrate. As we are interested in the general principle, you should propose an algorithm that is independent of the substrate. But you haven’t been able to do that.

    gpuccio: Or do you suppose that the big bang conditions selected an environment especially “amenable to evolutionary exploration”?

    Locality and directionality are among the most important characteristics of the natural world, and important to the success of biological replicators competing for local resources.

    Zachriel: What you were suggesting, that people design an ‘environment’ without knowing why or what it is supposed to represent, is nonsensical. They may very well propose a chaotic environment.

    gpuccio: That’s not a problem. You can specify that you don’t want a chaotic environment. You can even give generic guidelines on the environment to be programmed. Guidelines which can be reviewed by ID experts (shall we call it “un peer review?) to be sure that you have not passed any relevant information about the purpose of the test.

    You would want an environment that exhibits locality and directionality, resources required for replication, and replicators competing for those resources in order to reproduce. That’s pretty much what you need.

  111. 113 Zachriel December 8, 2010 at 5:55 pm

    gpuccio: And I suppose you can easily explain how you can go from 5 AAs peptides to a 150 AAs protein by RV and NS, isn’t it?

    Hayashi et al. showed how a random sequence with minimal function can evolve much greater specificity. Evolutionary algorithms provide additional support.

    gpuccio: The RV part in the neo darwinian model “is” a random search, provided it starts from an unrelated state.

    No, it’s not. It only varies an existing sequence, so the choices are limited to nearby points on the landscape (nearby being multidimensional due to the various mechanisms of variation involved).

    gpuccio: It certainly does not increase its probabilistic resources.

    No, it severely restricts it!

  112. 114 Zachriel December 8, 2010 at 5:59 pm

    Zachriel: Word evolution is *defined* by only allowing valid words to enter a population; o, or, ore, ere, mere, mire, mired.

    gpuccio: IOWs, it needs a dictionary. Are you saying that a dictionary is not “added information”?

    Yes, it’s information! It’s the environment to be explored.

    Zachriel: A random search will find words much more slowly than incremental word evolution.

    gpuccio: Or never, if they are complex enough. QED.

    Random sampling will always find words eventually. Word evolution will only find words that are available to stepwise evolution. But if you want to find any twelve-letter word, then word evolution will proceed much faster.

    gpuccio: Have you ever read Dembski and Marks’ points about the “search for a search” and its cost?

    It’s trivial. No search algorithm will work better than any other over the universe of search landscapes. A search for a search is just a complex search algorithm, so it adds nothing to the description.

  113. 115 Petrushka December 8, 2010 at 6:02 pm

    I’m a bit confused about the ID take on modeling. When chemists experiment with molecules using computer models, does the construction of the computer have to change in order for the model to be valid?

    How about the operating system?

    How about models of weather and storms, or of airflow in the design of cars and airplanes?

  114. 116 Toronto December 8, 2010 at 10:01 pm

    gpuccio: No. I am just saying that your objections about this issue are not technical, and that they are ideological and repetitive.

    No one on the evolution side has any “technical objections” to ID, since ID has no “technical process” for us to make an objection to.

  115. 117 gpuccio December 9, 2010 at 10:33 am

    Toronto:

    “1) I and everyone you are debating want to see your process.

    2) Burn all evolutionary data.

    3) Now, present your technical process.

    4) After you can do that, I’ll present my objections.

    5) If you can’t, you have nothing worth presenting to students.”

    I really don’t understand what you mean. Please, be more clear.

    I have answered your objections, IMO. If you consider my answers not worthwhile, I cannot do anything about that.

  116. 118 gpuccio December 9, 2010 at 10:41 am

    Petrushka:

    “Are you saying the environment in which an organism exists is not a dictionary?”

    Sure. It is not.

    A dictionary is some place where information is written, so that other external information may be read, compared to the information written in the dictionary, and judged for further processing.

    The environment is not a dictionary. It has no information about protein sequences. None at all. In NS, replicators self-select themselves according to their ability to reproduce, given the environment’s characteristics. That has nothing to do with a dictionary, or an oracle in a search.

    “The whole approach is flawed.”

    Your whole approach regarding Behe’s ideas is not only flawed, but also boring.

  117. 119 gpuccio December 9, 2010 at 10:45 am

    Petrushka:

    “Chemistry itself is a dictionary that defines every compound that can exist, and the relative stability of every compound.”

    You are really confused. Chemistry is, at most, a set of laws, not certainly a dictionary.

    “What you seem to be saying is that molecules have more information than atoms and that self replicating molecules have more information than non replicators.”

    That’s true. Replicators have the information in their sequences and structure that allows them to replicate. Atoms don’t have that information.

    “Which is why I say that information has subsumed the role formerly played by elan vital. A placeholder for the mysterious invisible stuff that makes life different from non-life.”

    The only role information has assumed in your posts is the role of something you don’t understand, and refuse to acknowledge.

  118. 120 gpuccio December 9, 2010 at 10:48 am

    Petrushka:

    “I think it’s time to realize that gp can claim the existence of any entity or process required to build his world, without having to have supporting evidence.

    He is allowed to assert that living things can’t evolve incrementally, without citing reasons.

    His foundation is an experiment by Douglas Axe, the biological equivalent of building a house that falls down and claiming that wood doesn’t have sufficient strength to support houses.

    It is no wonder that he hates and fears arguments from historical precedent. He hates any notice that the goalposts keep shifting.”

    I certainly “hate” your style and behaviour. Which, by the way, is rapidly deteriorating while your “arguments” vanish into thin air.

  119. 121 gpuccio December 9, 2010 at 10:50 am

    Petrushka:

    “My discussion with my English teacher was in 1960.”

    He was obviously wrong. Empiricism will always rule. My views are completely empirical.

    You just don’t understand what “empirical” means.

  120. 122 gpuccio December 9, 2010 at 10:59 am

    Zachriel:

    “An enzyme may have many functions”

    What do you mean exactly?

    “Yes, if they are selectable, then they are additive, not multiplicative.”

    You know I agree on that. I obviously meant a 34-bit functional mutation, without any intermediate functional step.

    Two steps are additive only if the first 17 bits mutation is necessary for the final mutation of 34 bits, and provides the first 17 bits necessary for the final function. That’s exactly what does not happen, and what you cannot show.

    “The opposite of functional is not chaotic.”

    You used the word “chaotic” for possible computer environments. So, you chose the word which defines the natural environment, and we can have computer environments in line with that word.

    “You are seriously confused on what constitutes a valid model. A model runs independently of the substrate. As we are interested in the general principle, you should propose an algorithm that is independent of the substrate. But you haven’t been able to do that”

    You are confused. In my model, the computer environment is part of the model. It is not a substrate.

    “Locality and directionality are among the most important characteristics of the natural world, and important to the success of biological replicators competing for local resources.”

    Then we can have a local and directional computer environment (whatever that means). Always in blind.

    “You would want an environment that exhibits locality and directionality, resources required for replication, and replicators competing for those resources in order to reproduce. That’s pretty much what you need.”

    Yes. With the environment set up in blind.

  121. 123 gpuccio December 9, 2010 at 11:09 am

    “Hayashi et al. showed how a random sequence with minimal function can evolve much greater specificity. Evolutionary algorithms provide additional support.”

    Wrong. They showed how a random sequence without any function, incorporated in a functional complex, reduced the function of the complex to a minimum, without completely eliminating it, and how through a process of mutation and NS the complex could retrieve partial functionality, but not its original functionality. And there is no documentation of how complex the “partial retrieval” was. And the partial retrieval was unrelated to the original functional sequence, and not a step towards it (indeed, rather a hindrance).

    I will not repeat myself on evolutionary algorithms, unless new arguments are provided by you.

    “No, it’s not. It only varies an existing sequence, so the choices are limited to nearby points on the landscape (nearby being multidimensional due to the various mechanisms of variation involved).”

    That’s exactly what a random search is. Maybe we can call it more precisely a random walk, because obviously it starts from an existing state. So, conceded: a random walk. You know that the calculation of probabilities does not vary significantly.

    “gpuccio: It certainly does not increase its probabilistic resources.

    No, it severely restricts it!”

    Well, it seems we do agree on that. Although I cannot understand why you seem so happy about that.

  122. 124 gpuccio December 9, 2010 at 11:15 am

    Zachriel:

    “gpuccio: IOWs, it needs a dictionary. Are you saying that a dictionary is not “added information”?

    Yes, it’s information! It’s the environment to be explored.”

    It’s added information. Not the environment. The environment is the environment of words, both meaningful and non-meaningful. A dictionary is an ordered catalogue of meaningful words. To be used to compare results with pre-existing information.

    I am amazed that you can confuse the two concepts.

    “Random sampling will always find words eventually. Word evolution will only find words that are available to stepwise evolution. But if you want to find any twelve-letter word, then word evolution will proceed much faster.”

    Thanks to the added information of the dictionary used in the search.

    “It’s trivial. No search algorithm will work better than any other over the universe of search landscapes. A search for a search is just a complex search algorithm, so it adds nothing to the description.”

    Indeed, truth is sometimes trivial. But even then people can refuse to see it. So, repeating trivial things is sometimes worthwhile.

  123. 125 gpuccio December 9, 2010 at 11:36 am

    Petrushka:

    “I’m a bit confused about the ID take on modeling. When chemists experiment with molecules using computer models, does the construction of the computer have to change in order for the model to be valid?

    How about the operating system?

    How about models of weather and storms, or of airflow in the design of cars and airplanes?”

    The models you refer to are models of necessity algorithms, more or less coupled to RV. The problem with modeling neo-darwinism is that you have to “model” NS. Which is a very special necessity algorithm, vastly misunderstood.

    As I have said many times, NS is indeed a mechanism implicit in replicators: if a replicator can replicate better than others in an environment, it will replicate better. Speaking of trivial concepts, NS is really trivial. But it has been made complex and ambiguous and misleading by a very bad use of metaphors. Of bad metaphors.

    So, let’s avoid the bad metaphors. What is the only way we can model NS? It’s simple: we must observe replicators which can replicate better. Of themselves. Anything else is not NS, nor a model of it.

    The “model” I have suggested is more a lab test than a “simulation”. A computer environment is the “environment”. Replicators are the replicators. And we wait for RV and NS to produce new complex functions. If it happens, it happens.

    You cannot “simulate” NS. Any simulation will be intelligent selection of some kind. You can, however, test NS. You can test it in a biology lab, or in a computer lab. The two things are different, but the logical principles at stake are the same.

  124. 126 gpuccio December 9, 2010 at 11:40 am

    Toronto:

    “No one on the evolution side has any “technical objections” to ID, since ID has no “technical process” for us to make an objection to.”

    Design detection is a technical process. Please, object to it, if you can.

    Inference of methods of implementation of design from facts is a technical process. Do you agree that we can distinguish between directed variation and intelligent selection of random variation, for example?

  125. 127 Zachriel December 9, 2010 at 12:47 pm

    gpuccio: A dictionary is some place where information is written, so that other external information may be read, compared to the information written in the dictionary, and judged for further processing.

    A fitness landscape is an abstraction, of course. However, we can represent the sum of all the interactions involved in evolution as a fitness landscape. Indeed, you cited such a landscape in support of your own position.

    Hayashi et al., Experimental Rugged Fitness Landscape in Protein Sequence Space, PLoS One 2006.

  126. 128 Zachriel December 9, 2010 at 12:59 pm

    gpuccio: The environment is not a dictionary. It has not information about protein sequences. None at all.

    Not necessarily true, as per the seashell analogy above. But generally not.

    gpuccio: In NS, replicators self select themselves according to their ability to reproduct, given the environment characteristics. That has nothing to do with a dictionary, or an oracle in a search.

    They don’t self-select. Rather, some are more successful at competing for local, limited resources required for reproduction. We can avoid your conundrum by providing a simulacrum of an environment set in a dimensional world.

  127. 129 Zachriel December 9, 2010 at 1:23 pm

    gpuccio: I obviously meant a 34 bit functional mutations, without any intermediate functional step.

    Two steps are additive only if the first 17 bits mutation is necessary for the final mutation of 34 bits, and provides the first 17 bits necessary for the final function. That’s exactly what does not happen, and what you cannot show.

    Yes, we agree that implausible events are implausible. We don’t have to argue about ‘fits’ or dFSCI or whatever the latest ID Incarnation happens to be.

    So it comes down to incremental evolution, which it always did (Darwin 1859). You seem to think you can show there is some sort of barrier to incremental evolution. Nearly every working biologist disagrees.

    gpuccio: You used the word “chaotic” for possible computer environments.

    You are very confused. The environment may be chaotic, not the computer.

    A computer is just a tool for implementing a model, but the computer is not the model. A model is an abstraction. With regards to modeling a biological environment, we might have a space with a renewable food resource that ‘animals’ compete to find. There are a number of ways to construct such a model, but the model is independent of the system on which we represent it. We could use paper and pencil, for that matter.
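    As an editorial aside, a minimal sketch of such a model (a space with a renewable food resource that replicating ‘animals’ compete to find) is given below. Everything in it — the ring-shaped space, the heritable “step” gene, all the parameters — is an illustrative assumption, not anyone’s actual implementation:

```python
import random

random.seed(3)

SIZE = 30  # cells on a ring; illustrative assumption


def run(ticks=200, capacity=50):
    # Agents carry one heritable gene, "step", controlling how far they roam.
    agents = [{"pos": random.randrange(SIZE), "step": 1} for _ in range(10)]
    food = {random.randrange(SIZE) for _ in range(5)}
    for _ in range(ticks):
        food.add(random.randrange(SIZE))  # renewable resource
        offspring = []
        for a in agents:
            a["pos"] = (a["pos"] + random.randint(-a["step"], a["step"])) % SIZE
            if a["pos"] in food:  # agents that find food reproduce
                food.discard(a["pos"])
                child = {"pos": a["pos"], "step": a["step"]}
                if random.random() < 0.1:  # heritable variation
                    child["step"] = max(1, child["step"] + random.choice((-1, 1)))
                offspring.append(child)
        # Limited carrying capacity: survivors drawn from parents and offspring.
        pool = agents + offspring
        agents = random.sample(pool, min(capacity, len(pool)))
    return agents


agents = run()
print(len(agents), "agents; mean step gene:",
      sum(a["step"] for a in agents) / len(agents))
```

    The point of the sketch is only that the model is an abstraction: nothing in it depends on the machine running it, and it could indeed be worked with paper and pencil.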

    gpuccio: In my model, the computer environment is part of the model.

    Yes, that’s why it’s useless for modeling a biological environment. It’s like putting your computer in the rain to model weather, then when it doesn’t work, saying that computers can’t model the weather.

    gpuccio: Then we can have a local and directional computer environment (whatever that means). Always in blind.

    The model has nothing to do with the computer. It has to be a simulacrum of the biological environment.

    gpuccio: With the environment set up in blind.

    It can’t be a useful model if it’s ‘blind.’

  128. 130 Zachriel December 9, 2010 at 1:25 pm

    Zachriel: Hayashi et al. showed how a random sequence with minimal function can evolve much greater specificity.

    gpuccio: Wrong.

    Of course they did.

    gpuccio: They showed how a random sequence without any function, incorporated in a functional complex, reduced the function of the complex to a minimum, without completely eliminating it, and how thorough a process of mutation and NS the complex could retrieve partial functionality, but not its original functionality.

    Yes, a random sequence evolved to greater specificity, and improved function.

  129. 131 Zachriel December 9, 2010 at 1:36 pm

    gpuccio: That’s exactly what a random search is. Maybe we can call it more precisely a random walk, because obviously it starts from an existing state.

    But evolution is not a random walk. Though it may take short random walks, that is not the primary mechanism of adaptation.

    gpuccio: It certainly does not increase its probabilistic resources.

    Zachriel: No, it severely restricts it!

    gpuccio: Well, it seems we do agree on that. Although I cannot understand why you seem so happy of that.

    Because if evolution had to try every possible sequence, it would, indeed, be no better than a random sampling. But evolution doesn’t try every possible sequence.

    When you said that evolution was the same as a random search, it showed you really didn’t understand anything about the mathematics of evolutionary search. That you refuse to consider evolutionary algorithms, not as representations of biology, but as abstractions of the evolutionary process, makes it difficult for you to understand these questions. Not only are you confused about the stringent limitations of evolution, but you don’t comprehend how much recombination changes how an evolutionary algorithm works compared to point-mutation alone.

    The success of evolutionary algorithms depends on the landscape. Evolution won’t work with chaotic or perverse landscapes. But evolution can navigate many landscapes of interest, in particular, those that show locality and directionality. This includes protein evolution, which largely concerns spatial interrelationships subject to incremental optimization.
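    As an editorial aside, the claim that selection navigates a landscape with locality far faster than blind sampling can be illustrated with a minimal sketch. The bit-string landscape and all parameters below are illustrative assumptions, not anyone’s actual model:

```python
import random

random.seed(0)
N = 40  # bit-string length; the search space holds 2**40 sequences


def fitness(bits):
    # A landscape with locality: neighbouring strings have similar fitness.
    return sum(bits)


def hill_climb():
    """Random mutation plus selection: keep a flip only if fitness improves."""
    bits = [0] * N
    evals = 0
    while fitness(bits) < N:
        i = random.randrange(N)
        trial = bits[:]
        trial[i] ^= 1  # point mutation at a random locus
        evals += 1
        if fitness(trial) > fitness(bits):  # selection step
            bits = trial
    return evals


evals = hill_climb()
blind = 2 ** N  # expected evaluations for blind sampling to hit the one optimum
print(evals, "selected evaluations vs about", blind, "for blind sampling")
```

    Under these assumptions the climb typically finishes in a few hundred evaluations, while blind sampling of the same space would need on the order of 2^40. On a chaotic landscape, by contrast, the selection step would confer no such advantage.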

  130. 132 Zachriel December 9, 2010 at 1:41 pm

    gpuccio: The environment is the environment of words, both meaningful and non meaningful. A dictionary is an ordered catalogue of meaningful words. To be used to compare results with pre existing information.

    This is the model. We have a pond of words. At first, it is filled with just a single short word. We mutate and recombine words in the pond. If the new sequence forms a valid word (hence the dictionary), it enters the pond, otherwise, it’s still-born. We continue this process, adding new words as they are discovered. We might limit the number of word species in the pond. If this limit is exceeded, we might select word length, or some other characteristic. This is the basics of word evolution. The question is how long it takes to discover words of a particular length or other selectable characteristic, especially when compared to random sampling.
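    An editorial sketch of the pond model just described, with a toy word list standing in for a real dictionary. The word list, alphabet, and generation count are illustrative assumptions:

```python
import random

random.seed(1)

# Toy stand-in for a real dictionary (illustrative only).
DICTIONARY = {"cat", "cast", "cats", "casts", "cost", "costs",
              "coat", "coats", "coast", "coasts", "oat", "oats", "at"}
LETTERS = "acost"  # small alphabet for the toy example


def mutate(word):
    """Point-substitute, insert, or delete one letter."""
    op = random.choice(("sub", "ins", "del"))
    if op == "ins":
        i = random.randrange(len(word) + 1)
        return word[:i] + random.choice(LETTERS) + word[i:]
    i = random.randrange(len(word))
    if op == "sub":
        return word[:i] + random.choice(LETTERS) + word[i + 1:]
    return word[:i] + word[i + 1:] if len(word) > 1 else word


def evolve(generations=20000):
    pond = {"cat"}  # start with a single short word
    for _ in range(generations):
        child = mutate(random.choice(sorted(pond)))
        if child in DICTIONARY:  # valid words enter the pond; others are still-born
            pond.add(child)
    return pond


pond = evolve()
print(sorted(pond, key=len))
```

    The measurement of interest, as described above, is how many generations it takes to discover words of a given length, compared with blind random sampling of letter strings.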

  131. 133 Toronto December 9, 2010 at 3:33 pm

    gpuccio: Design detection is a technical process. Please, object to it, if you can.

    We say “We detect evolution”.

    You say “We detect design”.

    We reply with, “Here’s a technical process we think explains evolution”.

    You reply with, “We detect design”.

    Every technical argument you have made is an objection to our model.

    Where is your technical process so that we can make a technical objection?

    Everybody, including us, school boards, parents, students and government agencies, is waiting to see what technical model students will be taught if the subject of evolution is replaced.

    Show us what it is, since you will have to anyway when your side has to teach it in front of a class.

  132. 134 Petrushka December 9, 2010 at 7:57 pm

    You just don’t understand what “empirical” means.
    __________________

    Apparently “empirical” means that a scientist can postulate the existence of demiurges as causes.

  133. 135 Petrushka December 9, 2010 at 8:01 pm

    This is the model. We have a pond of words. At first, it is filled with just a single short word. We mutate and recombine words in the pond. If the new sequence forms a valid word (hence the dictionary)
    ________________________________

    The dictionary is an abstraction, a model of the environment that causes one variation to be more successful than others.

  134. 136 Petrushka December 9, 2010 at 8:48 pm

    It’s added information. Not the environment.

    _______________________

    It’s a stand-in for the environment. Sort of like the way numbers can be a stand-in for planets when predicting their positions.

    If you have a problem with the algorithms used in population genetics, by all means feel free to code your own, or at least describe it.

  135. 137 gpuccio December 10, 2010 at 8:39 am

    Zachriel:

    “A fitness landscape is an abstraction, of course. However, we can represent the sum of all the interactions involved in evolution as a fitness landscape. Indeed, you cited such a landscape in support of your own position.”

    Hayashi’s fitness landscape is certainly an abstraction, but his experiment is real. That is the correct procedure: facts first, and good, credible abstract models to explain them.

    On the contrary, fitness landscapes in evolutionary algorithms are intelligently designed abstractions whose purpose, explicit or implicit, is to generate pseudo-facts.

  136. 138 gpuccio December 10, 2010 at 8:44 am

    Zachriel:

    “They don’t self-select. Rather some are more successful at competing for local, limited resources required for reproduction.”

    What’s the difference? However, the concept is correct.

    I would mention here that competition for resources, however important, is not the only way a replicator can become more efficient than another one. Big transitions, like prokaryote -> eukaryote, or single celled -> multicellular, or basic body plans, have probably deeper reasons than mere exploitation of resources.

    “We can avoid your conundrum by providing a simulacrum of an environment set in a dimensional world.”

    Not sure what you mean, but sounds promising. Do you care to elucidate?

  137. 139 gpuccio December 10, 2010 at 9:00 am

    Zachriel:

    “Yes, we agree that implausible events are implausible. We don’t have to argue about ‘fits’ or dFSCI or whatever the latest ID Incarnation happens to be.”

    Thank God for that. You seem to be one of the few darwinists who at least understand that simple point.

    “So it comes down to incremental evolution, which it always did (Darwin 1859).”

    Sure. Do you see that we can agree on something, if we stick to the true points?

    “You seem to think you can show there is some sort of barrier to incremental evolution. Nearly every working biologist disagrees.”

    And no working biologists can show any molecular evidence of their belief.

    It is typical to recur to arguments from authority, when true arguments are lacking.

    “Of course they did.”

    No, they didn’t. Their random sequence had no minimal function. It was the complex which retained a minimal function, even with a completely random sequence substituted for part of it.

    To state that the random sequence had minimal function, you should demonstrate that other random sequences exist which hamper the function of the complex completely. While the authors did not test that, the most natural explanation of the starting condition is that the complex can work even with a completely random sequence substituted. I would call that “resilience of the complex protein”, rather than “minimal function of the random sequence”. But I understand you have your preferences with words, when you need to suggest concepts which are not objectively real.

    “Yes, a random sequence evolved to greater specificity, and improved function.”

    Again, it was the complex protein which retrieved part of an already existing, and still partially surviving, function. You can change words, but that is exactly what happened.

    That is completely different from a random sequence that acquires some function of its own.

  138. 140 gpuccio December 10, 2010 at 9:14 am

    Zachriel:

    “But evolution is not a random walk.”

    The RV part is a random walk.

    “Because if evolution had to try every possible sequence, it would, indeed, be no better than a random sampling. But evolution doesn’t try every possible sequence.”

    I think you are confused here. The RV part of “evolution” can potentially try anything. And it uses its probabilistic resources for that.

    Each new variation event can go in any possible direction, according to the variation potential of the event (one or many simultaneous mutations).

    The fact that some walks are interrupted by negative NS does not restrict the search space: if it were the case, then mutations which are potentially useful would become more probable. But that is not what happens.

    Let’s make an example. If protein A becomes protein B through, say, 5 mutations, each of those mutations in itself can be neutral, or negative, or positive, just like any other possible mutation. Therefore, the 5 AAs change is a random walk, exactly like all other 5 AAs changes from the starting state.

    Only random walks including strongly negatively selectable mutations will be stopped, and the random walk which can generate the final useful change can include negative mutations exactly like all others. (I am obviously assuming that the individual mutations are not positively selectable. The chance of negative mutations could be lower in small transitions, which don’t change the 3D structure, but it obviously becomes random for big transitions, which change the domain structure significantly; that’s why microevolution is sometimes possible, and macroevolution never).

    “you don’t comprehend how much recombination changes how an evolutionary algorithm works compared to point-mutation alone.”

    You have said that many times, but never argued for that. Recombination is random. If you mean recombination of existing functional alleles, I can partially agree. But that cannot explain basic protein structures.

    “The success of evolutionary algorithms depends on the landscape. Evolution won’t work with chaotic or perverse landscapes. But evolution can navigate many landscapes of interest, in particular, those that show locality and directionality. This includes protein evolution, which largely concerns spatial interrelationships subject to incremental optimization.”

    I think I have already answered all those points. I will not repeat myself.

  139. 141 gpuccio December 10, 2010 at 9:26 am

    Zachriel:

    “You are very confused. The environment may be chaotic, not the computer.”

    You are confused, then. In my model, the computer, with all its informational content (including the operating system and any other software) is the environment.

    “There are a number of ways to construct such a model, but the model is independent of the system on which we represent it.”

    You are confused again. With “computer” I don’t mean the hardware, but the informational system, including the operating system. Being a software, that is independent of the specific hardware (except for implementation details). In theory, you could well implement that situation on an abacus, or by paper and pen.

    My point, however, is that the whole informational system is the environment.

    “Yes, that’s why it’s useless for modeling a biological environment. It’s like putting your computer in the rain to model weather, then when it doesn’t work, saying that computers can’t model the weather.”

    Wrong. What we are trying to model here is the concept that NS can generate new complex functions. We are not modeling the biological environment (which would anyway be impossible). We are modeling the logical assumption of neo-darwinism.

    “The model has nothing to do with the computer. It has to be a simulacrum of the biological environment.”

    No, that would be useless. You cannot create a simulacrum of a biological environment. Better to test things in a true biological environment, then.

    But you can certainly test the assumption that RV and NS can generate new complex functions, given a blind environment, with any characteristics you like, and replicators which can vary randomly, the way you like.

    That you can test. And you will find it false.

    “It can’t be a useful model if it’s ‘blind.’”

    It’s not the model that is blind. The model requires an environment blind to the replicators, to prevent cognitive bias in the programming of the environment. That is a very simple scientific concept, why can’t you grasp it?

  140. 142 gpuccio December 10, 2010 at 9:35 am

    Zachriel:

    “This is the model. We have a pond of words. At first, it is filled with just a single short word. We mutate and recombine words in the pond. If the new sequence forms a valid word (hence the dictionary), it enters the pond, otherwise, it’s still-born. We continue this process, adding new words as they are discovered. We might limit the number of word species in the pond. If this limit is exceeded, we might select word length, or some other characteristic. This is the basics of word evolution. The question is how long it takes to discover words of a particular length or other selectable characteristic, especially when compared to random sampling.”

    For example, for this model to mean anything about RV and NS, valid words should be “valid” only if they increase the survival (or reproduction) of the word itself in the pond, without any dictionary required.

    Being “valid” because the word is recognized by a dictionary which already has all the information about which words are valid, and which are not, is only a very trivial demonstration of intelligent selection of what is already known.

    And however, a very long word, unrelated to other words, would be extremely difficult to find (or empirically impossible, if long enough), even in that model, unless you select individual letters, and not only whole words. For instance, a very long English word, of non-Latin origin, would be extremely difficult to find, if you use an Italian dictionary, even if that single word were included in the dictionary. That is exactly the case with basic protein domains. Each of them is an unrelated, very long word.

    If you select individual letters by a dictionary, you can obviously get anything in a short time. Bravo!

  141. 143 gpuccio December 10, 2010 at 9:52 am

    Toronto:

    “Every technical argument you have made is an objection to our model.”

    Design detection is not an objection to “your” model. It is a positive theory, based on empirical observations and good reasoning.

    And frankly I cannot understand your obsession with teaching. I have never thought that ID should be taught in classes instead of darwinism. Darwinism is the mainstream theory at present, and it is perfectly right that it be taught in classes. At the same time, it should be known, at least in general terms, that competing theories, like ID, do exist.

    Anyway, my interest is in science, not in school programs.

  142. 144 gpuccio December 10, 2010 at 9:54 am

    Petrushka:

    “Apparently “emoirical” means that a scientist can postulate the existence of demiurges as causes.”

    Apparently for you.

  143. 145 gpuccio December 10, 2010 at 9:55 am

    Petrushka:

    “The dictionary is an abstraction, a model of the environment that causes one variation to be more successful than others.”

    It is certainly an abstraction. A very useful, ad hoc, information rich “abstraction”.

  144. 146 gpuccio December 10, 2010 at 9:59 am

    Petrushka:

    “It’s a stand-in for the environment.”

    With completely different formal properties. My compliments.

    ” or at least describe it”

    I suppose that’s exactly what I have been doing here.

  145. 147 Zachriel December 10, 2010 at 1:38 pm

    gpuccio: Hayashi’s fitness landscape is certainly an abstraction, but his experiment is real.

    Ah! So we can model evolution using fitness landscapes.

    gpuccio: That is the correct procedure: facts first, and good, credible abstract models to explain them.

    Not necessarily. We might explore various non-Euclidean geometries, learn all sorts of neat stuff about them, then discover how to apply them to science at a later point. We can try to understand evolutionary algorithms and various classes of fitness landscapes to draw general principles.

    gpuccio: On the contrary, fitness landscapes in evolutionary algorithms are intelligently designed abstractions whose purpose, explicit or implicit, is to generate pseudo-facts.

    Mathematicians study all sorts of landscapes. In terms of this discussion, it means we can understand general principles of evolutionary algorithms. So, when you make sweeping generalizations of your own, we can refer to this body of mathematics to show that you are wrong. In reply, you simply wave your hands. The proper argument would be to show how the natural environment is not amenable to evolutionary search, not that we can’t make any generalizations.

  146. 148 Zachriel December 10, 2010 at 1:39 pm

    Zachriel: They don’t self-select. Rather some are more successful at competing for local, limited resources required for reproduction.

    gpuccio: What’s the difference? However, the concept is correct.

    Because a bacterium doesn’t say “I choose myself.” Evolution works at various levels, including on the population level, not individual teleological selection.

    gpuccio: I would mention here that competition for resources, however important, is not the only way a replicator can become more efficient than another one. Big transitions, like prokaryote -> eukaryote, or single celled -> multicellular, or basic body plans, have probably deeper reasons than mere exploitation of resources.

    They are all adaptations to compete for limited, local resources. Cooperation is an adaptation.

    Zachriel: We can avoid your conundrum by providing a simulacrum of an environment set in a dimensional world.

    gpuccio: Not sure what you mean, but sounds promising. Do you care to elucidate?

    We’ve mentioned this several times already. A simulation isn’t created blind. An arbitrary fitness landscape would be chaotic simply because the vast majority of landscapes are chaotic. We try to recreate important aspects of the natural environment, a space with food perhaps. We can’t recreate all the complexity of the actual world, but we can create some small aspects of it.

  147. 149 Zachriel December 10, 2010 at 1:39 pm

    gpuccio: The RV part is a random walk.

    Yes, but random walk is not the primary mechanism of evolutionary adaptation.

    gpuccio: The RV part of “evolution” can potentially try anything. And it uses its probabilistic resources for that.

    No it can’t, because it is constrained by selection.

    gpuccio: The fact that some walks are interrupted by {} NS does not restrict the search space:

    Of course it does!

    gpuccio: If protein A becomes protein B through, say, 5 mutations, each of those mutations in itself can be neutral, or negative, or positive, just like any other possible mutation. Therefore, the 5 AAs change is a random walk, exactly like all other 5 AAs changes form the starting state.

    If there are selectable pathways, then selection will find that pathway much faster than a random walk.

    gpuccio: (I am obviously assuming that the individual mutations are not positively selectable …

    You listed positive mutations, then say you will ignore them.

    Zachriel: you don’t comprehend how much recombination changes how an evolutionary algorithm works compared to point-mutation alone.

    gpuccio: You have said that many times, but never argued for that. Recombination is random.

    Random recombination does not result in a random search. That’s simply not the case. We can show this with evolutionary algorithms. That you pretend not to notice evolutionary algorithms doesn’t make the evidence go away.
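
    The difference recombination makes can be illustrated with a minimal sketch (hypothetical ten-bit genomes and a fixed crossover point, written for this thread, not anyone’s actual GA): a single crossover event can combine building blocks discovered by two separate lineages, a jump that point mutation alone would need several coordinated steps to make.

```python
def one_point_crossover(a, b, point):
    """Recombine two parent genomes at the given crossover point."""
    return a[:point] + b[point:]

# Two parents, each carrying a different functional "block".
parent1 = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
parent2 = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

child = one_point_crossover(parent1, parent2, point=5)
print(child)  # [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```

    Reaching the same genotype from either parent by point mutation alone would require five specific flips; recombination gets there in one step once both blocks exist in the population.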

  148. 150 Zachriel December 10, 2010 at 1:40 pm

    gpuccio: In my model, the computer, with all its informational content (including the operating system and any other software) is the environment.

    The experiment is silly. But even if you showed that viral evolution won’t work in a computer OS, all you’ve shown is that the single environment isn’t suitable. In fact, the vast majority of fitness landscapes are not amenable to evolutionary exploration. But biology isn’t in an arbitrary landscape, but one that exhibits many types of order that are suitable for replicators.

    gpuccio: In my model, the computer, with all its informational content (including the operating system and any other software) is the environment.

    The experiment is silly. But even if you showed that viral evolution won’t work in a computer OS, all you’ve shown is that the single environment isn’t suitable. In fact, the vast majority of fitness landscapes are not amenable to evolutionary exploration. But biology isn’t in an arbitrary landscape, but one that exhibits many types of order that are suitable for replicators.

    gpuccio: It’s not the model that is blind. The model requires an environment blind to the replicators, to prevent cognitive bias in the programming of the environment. That is a very simple scientific concept, why can’t you grasp it?

    The biological environment is not arbitrary. You don’t model weather by using an arbitrary fitness landscape with varying laws of physics. You model the actual environment.

    The valid experiment for evolution is to model various aspects of the natural environment, or to categorize various fitness landscapes in order to draw some general conclusions that might be applied to the particular.

  149. 151 Zachriel December 10, 2010 at 1:40 pm

    gpuccio: Being “valid” because the word is recognized by a dictionary which already has all the information about which words are valid, and which are not, is only a very trivial demonstration of intelligent selection of what is already known.

    It’s the equivalent of having a library of all functional proteins.

    gpuccio: If you select individual letters by a dictionary, you can obviously get anything in a short time. Bravo!

    Selection is for whole word only. Try again.

  150. 152 Zachriel December 10, 2010 at 1:54 pm

    gpuccio: {Wordscape} is certainly an abstraction. A very useful, ad hoc, information rich “abstraction”.

    Of course. It has to be rich to provide an interesting case. Evolutionary algorithms incorporate information from the environment into the population of genomes, not directly, but through selection.

    gpuccio: With completely different formal properties.

    Ah! So we can talk about the “formal properties” of various fitness landscapes.

  151. 153 Petrushka December 10, 2010 at 2:10 pm

    Apparently for you.

    ______________________

    It’s sad to see you engage in outright dishonesty,

    You are the one invoking a magical mythical entity that has superhuman knowledge and powers.

    Try limiting your explanation to observable processes.

  152. 154 Toronto December 10, 2010 at 3:05 pm

    gpuccio: Darwinism is the mainstream theory at present, and it is perfectly right that it be taught in classes.

    You are in disagreement with the core concept and processes of evolution.

    How can something this wrong be right enough to be taught?

  153. 155 Petrushka December 10, 2010 at 3:15 pm

    GP: Let’s be clear about what you have claimed.

    1. You have claimed knowledge of the functional space far beyond the knowledge claimed by all biologists together. You have personal knowledge of the ruggedness and other characteristics of the landscape — something no researcher claims.

    2. You seem to base this on the work of Douglas Axe, who performed one small experiment as a graduate student, and who has never engaged his peers at conferences or submitted further work for review. Have you ever even considered that he hasn’t sampled enough of the landscape to draw such sweeping conclusions?

    3. Based on your private and comprehensive knowledge of this landscape you have determined that traversing it incrementally is impossible. This despite abundant evidence that living things actually do, from day to day, exhibit variations, some of which are significant for their survival and success.

    4. Based on your private and comprehensive knowledge of biochemistry you have determined that fossil data are irrelevant, despite the fact that they show incremental change leading to new complex functionality.

    5. You have pronounced population genetics to be a bogus field of study. Despite never having written so much as a simple weasel program, you declare yourself competent to judge the efforts of thousands of people working with GAs. You cite the work of Dembski, who for ten years misrepresented a ten-line BASIC program, and who has yet even to acknowledge the error or correct his approach to the problem.

    6. You assert that a computer model of a physical process is not valid unless the computer operating system is subject to random modification by the process being modeled. You have failed to cite any precedent for this model of a model.

    7. You decry excessive abstraction in models, and yet you cite information theory as setting limits to chemistry.

    8. You cite large numbers as an obstacle to evolution, but you have posited an entity for which large numbers are irrelevant. You have provided no positive evidence for the existence of such an entity. Your evidence for the existence of such an entity is necessity, which takes us back to your private knowledge of the very landscape which you assert is too vast to be known or explored.

    9. You simply blow off the work of thousands of researchers in the field of protein evolution, claiming superior knowledge and powers of reason.

    Have you ever considered what would happen to your stance in a courtroom under cross examination? Would you care to bet a case of single malt whiskey?

  154. 156 Toronto December 10, 2010 at 3:27 pm

    gpuccio: Design detection is not an objection to “your” model. It is a positive theory, based on empirical observations and good reasoning.

    “Design detection” is no more a theory than “evolution detection” is.

    They are peers of each other; neither is a proposed process.

    Our side has gone further and also put together proposed mechanisms of how evolution would work, e.g., RV, NS, etc., and supplied algorithms that model these processes and that can then be evaluated, refined and re-tested.

    Where are your processes, algorithms and models that we can all inspect, refine and learn from?

    I know you well enough by now gpuccio to realize that you actually understand the questions I ask.

    What I don’t understand is why you pretend that you don’t.

    If it is uncomfortable for you to answer, say so or simply tell me you don’t want to answer at all.

    These are serious discussions, as a loss in the courts that allows ID in the door means that evolution ..will.. be replaced by some form of ID.

    Regardless of what your personal opinions are, you are part of the “movement” that wants the “ID/creation of life” taught and evolutionary theories removed from science classes.

  155. 157 Petrushka December 10, 2010 at 4:21 pm

    The problem with Design Detection is that it requires the elimination of all possible natural causes.

    That’s all it is: the residue after all natural explanations have failed.

    The conceptual problem is that there will never be a time when all possible explanations have been exhausted.

    ID proponents mock this and assert that evolution is unfalsifiable.

    What they neglect and ignore is history. The search for natural causes makes progress. Where there were once no transitional fossils there are hundreds of thousands.

    Where once there were no transitional fossils leading to whales, there are now many.

    Where there were no subsets of flagellar proteins there are now dozens.

    Where it was asserted that the mutations conferring resistance to CQ were detrimental to malaria parasites, we see compensating mutations that make resistance survive in the absence of CQ.

    I notice no response to these obvious shortcomings.

    I see no discussion of the dismantling of irreducible complexity.

  156. 158 gpuccio December 11, 2010 at 11:23 am

    Zachriel:

    “Ah! So we can model evolution using fitness landscapes”

    In the experiment, NS was really acting in the lab system. The fitness landscape is a theoretical model to try to explain data really obtained in the lab. That is perfectly correct, as I have already said.

    “We might explore various non-Euclidean geometries, learn all sorts of neat stuff about them, then discover how to apply them to science at a later point. We can try to understand evolutionary algorithms and various classes of fitness landscapes to draw general principles.”

    You can explore NS in two ways: either you model it theoretically in a correct way (that means modeling what kind of function can be naturally selected, which is difficult and non general, depending on the specific mechanisms of the replicators), or you can just set an experiment where NS acts, either in a biological setting, with biological replicators, or in an informational setting, with informational replicators. In both cases, the experiment will be correct only if NS is really acting. Then you can model resulting data as you like.

    Hayashi’s experiment is an example of a biological experiment where NS is acting.

    Evolutionary algorithms are examples of informational experiments where no NS is acting.

    An informational setting such as the one I have proposed is an example of an informational experiment where NS would be working.

    That is all the difference.

    “In terms of this discussion, it means we can understand general principles of evolutionary algorithms. So, when you make sweeping generalizations of your own, we can refer to this body of mathematics to show that you are wrong.”

    You are wrong. You don’t understand, or you don’t want to understand. My point is not, and has never been, that we cannot understand “general principles of evolutionary algorithms”. My point is simply that no evolutionary algorithm has anything to do with NS. It is simple, and you will not get rid of the point just by misrepresenting my argument.

  158. 160 gpuccio December 11, 2010 at 11:49 am

    Zachriel:

    “Because a bacterium doesn’t say I choose myself. Evolution works at various levels, including on the population level, not individual teleological selection.”

    Wrong manipulation of words, again. In your model, the bacterium does not choose, nor does the environment say “I choose this”. Only conscious beings choose.

    So, we might as well renounce the word “selection”, which is only an ID metaphor.

    But words aside, there is no doubt that it is the replicator which actively “self-selects” itself, not in the sense that it says: “I choose myself”, but in the obvious sense that it is the replicator’s replicating efficiency that determines the final result. The role of the environment in the selection is passive, while the role of the replicator is active. That’s all I meant.

    “We can’t recreate all the complexity of the actual world, but we can create some small aspects of it.”

    You can create all that you want, but then it is NS which must come into action. By designing the environment to favour some result you expect, and giving it an active role, direct or indirect, in measuring some specific function and rewarding it, you are cutting NS out of the equation. And anyway, to test the logical assumption that complex functions will evolve by NS and RV, you don’t need to emulate the biological environment, which is however impossible.

    If you really wanted to simulate the biological environment, the first thing you should do would be to simulate the complexity of the replicating functions of the biological replicators.

    Indeed, as I have said, what is “naturally selectable” depends on the properties of the replicators, even more than on the properties of the environment. In complex replicators, only a tiny subset of functions will allow a positive differential replication.

  159. 161 gpuccio December 11, 2010 at 12:09 pm

    Zachriel:

    “Yes, but random walk is not the primary mechanism of evolutionary adaptation.”

    The only other “mechanism” is NS, which is absent in your “evolutionary algorithms”.

    “Of course it does!”

    No. The search space remains the same. Random events can reach any part of the search space. Single-mutation random events can enter any random walk, but some walks will be interrupted by negative NS. That is not a reduction of the search space of the random events; it is a reduction of the space of the results of the whole algorithm (RV + NS). But the search space of the RV part, outside the interventions of NS, remains the same. And that is the only search space which appears in my computations and in my arguments.

    “If there are selectable pathways, then selection will find that pathway much faster than a random walk.”

    If there are selectable pathways, please show them, instead of repeating what I agree upon.

    “You listed positive mutations, then say you will ignore them.”

    I listed them because they can exist in any random walk, even if they will be extremely rare in any random walk. My point is that it is, anyway, a random walk, unless you show a pathway where each mutation is selectable. With the same probability for positive mutations as in any other random walk.

    “The experiment is silly. But even if you showed that viral evolution won’t work in a computer OS, all you’ve shown is that the single environment isn’t suitable. In fact, the vast majority of fitness landscapes are not amenable to evolutionary exploration. But biology isn’t in an arbitrary landscape, but one that exhibits many types of order that are suitable for replicators.”

    Now you are really a TE! No. Any kind of order can be reproduced in a computer environment. The natural landscape has order like any other landscape can have. It is not specially suitable for replicators. And there is no reason why it should be particularly “amenable to evolutionary exploration”. Now you are resting all your poor arguments on some special “magic” of the natural environment. That’s false. There is no magic in the environment which can explain the emergence of dFSCI.

    “The experiment is silly. ”

    I am not taking offence at that, but why say it twice? 🙂

    “The valid experiment for evolution is to model various aspects of the natural environment, or to categorize various fitness landscapes in order to draw some general conclusions that might be applied to the particular.”

    Well, it’s not a valid experiment for NS. If you don’t like my experiment, we can just conclude that no simulation is good to model NS, and stick to lab data.

  160. 162 gpuccio December 11, 2010 at 12:15 pm

    Zachriel:

    “It’s the equivalent of having a library of all functional proteins.”

    With their ordered sequences. Which the environment definitely does not have.

    “Try again.”

    I don’t need to. I just considered both possible scenarios. I paste here what I wrote:

    “Being “valid” because the word is recognized by a dictionary which already has all the information about which words are valid, and which are not, is only a very trivial demonstration of intelligent selection of what is already known.

    And however, a very long word, unrelated to other words, would be extremely difficult to find (or empirically impossible, if long enough), even in that model, unless you select individual letters, and not only whole words. For instance, a very long English word, of non-Latin origin, would be extremely difficult to find, if you use an Italian dictionary, even if that single word were included in the dictionary. That is exactly the case with basic protein domains. Each of them is an unrelated, very long word.

    If you select individual letters by a dictionary, you can obviously get anything in a short time. Bravo!”

    You commented on the first paragraph, and on the last, and smartly (and quite unfairly) “avoided” the middle part.

    Next time, try to read better, or just to be more fair.

  161. 163 gpuccio December 11, 2010 at 12:22 pm

    Zachriel:

    “Of course. It has to be rich to provide an interesting case. ”

    The subject was “the dictionary”. And the point is that the environment has no dictionary. Try again.

    “Ah! So we can talk about the “formal properties” of various fitness landscapes.”

    Why not? We can talk of everything. And we do. So, show me a fitness landscape which has the same formal properties as NS (being passive in allowing the best replicators to better replicate). That would be more interesting than repeating wrong and trivial manipulations of words.

  162. 164 gpuccio December 11, 2010 at 12:24 pm

    Petrushka:

    “It’s sad to see you engage in outright dishonesty,”

    That is really funny.

  163. 165 gpuccio December 11, 2010 at 12:27 pm

    Toronto:

    “How can something this wrong be right enough to be taught?”

    What is the problem? I believe in democracy. The main opinions should be taught. I am not a dictator, nor an integralist.

    In private, I have the right to challenge the opinions I believe wrong, even if they are shared by most people. But I would never impose my convictions on others.

    Sometimes, you really surprise me with your comments.

  164. 166 gpuccio December 11, 2010 at 12:31 pm

    Petrushka:

    I have no words. You are unique.

    To be frank, I believe I have spent too much time answering you. But please go on posting, reading what you write is an extreme and surprising experience.

  165. 167 gpuccio December 11, 2010 at 12:43 pm

    Toronto:

    ““Design detection” is no more a theory than “evolution detection” is.”

    I suppose no theory is more a theory than any other theory. Theories are theories.

    “They are peers of each other. neither are proposed processes.”

    I don’t know about “evolution detection”, but design detection is accomplished through specific processes. Doesn’t the word “detection” suggest anything to you?

    “Where are your processes, algorithms and models that we can all inspect, refine and learn from?”

    Design can be detected by dFSCI. I suppose you have amply inspected that at your ease. Maybe not learnt from it, however.

    “I know you well enough by now gpuccio to realize that you actually understand the questions I ask.”

    It’s beautiful to be known well. I think I understand. And I think I have answered.

    “What I don’t understand is why you pretend that you don’t.”

    So maybe you don’t understand me well enough. I certainly don’t understand you, however. Sorry not to be able to reciprocate.

    “If it is uncomfortable for you to answer, say so or simply tell me you don’t want to answer at all.”

    It is not uncomfortable at all. I just don’t understand what you want.

    If you just want me to agree with your wrong points, I am sorry, I usually don’t do that, not even for friends, not even for those who know me well enough.

    “These are serious discussions, as a loss in the courts that allows ID in the door means that evolution ..will.. be replaced by some form of ID.”

    I have never believed science should be made in courts. I am not interested in courts.

    “Regardless of what your personal opinions are, you are part of the “movement” that wants the “ID/creation of life” taught and evolutionary theories removed from science classes.”

    No, I am not.

  166. 168 Petrushka December 11, 2010 at 1:37 pm

    The role of the environment in the selection is passive, while the role of the replicator is active. That’s all I meant.

    __________________

    You are assigning intentionality to chemistry. The words passive and active are meaningless in this context. Some variants have more reproductive success. It’s an observation that does not imply intentionality.

  167. 169 Petrushka December 11, 2010 at 1:45 pm

    If you really wanted to simulate the biological environment, the first thing you should do would be to simulate the complexity of the replicating functions of the biological replicators.
    ____________________________

    Science abstracts. Gravity is an abstraction based on events observed on earth. It was a leap of intellect to apply this abstraction to planetary motions.

    Some models and simulations will be more useful than others. The test of the usefulness of a simulation is whether it predicts observable phenomena. For example, will a simulation of population evolution predict the evolution of bacteria in a controlled experiment?

    Your notion that controlled experiments are without value is interesting. It is consistent with your disregard for the history of science.

  168. 170 Petrushka December 11, 2010 at 1:46 pm

    I listed them because they can exist in any random walk, even if they will be extremely rare in any random walk.

    The natural landscape has order like any other landscape can have. It is not specially suitable for replicators
    ________________

    Assuming your conclusion.

  169. 171 Zachriel December 11, 2010 at 2:55 pm

    My Goodness. This is why ID arguments never get anywhere, and why ID is completely ignored by working biologists. Let’s see if we can concentrate on a few points.

    Zachriel: Ah! So we can model evolution using fitness landscapes

    gpuccio: In the experiment, NS was really acting in the lab system. The fitness landscape is a theoretical model to try to explain data really obtained in the lab.

    Ah! So we can model evolution using fitness landscapes!

  170. 172 Zachriel December 11, 2010 at 2:58 pm

    gpuccio: In both cases, the experiment will be correct only if NS is really acting.

    We can use mathematical or algorithmic means to model natural selection. For instance, if a haploid organism has a heritable trait that leaves 1% more offspring on average than its competitors, then there is a 2% chance of fixation of that trait. We can use an evolutionary algorithm and get the same result. And this result can be compared to biological organisms, such as the rate of fixation of antibiotic resistance in varying environments.

    And no, we don’t swing our computer on a string to simulate planetary motions.
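
    Zachriel’s 2% figure is the classical 2s approximation for the fixation probability of a new beneficial mutation. A minimal Wright-Fisher sketch (hypothetical parameters: a haploid population of 200 and a 1% reproductive advantage; not a model of any particular organism) reproduces it:

```python
import random

def fixes(N, s, rng):
    """One Wright-Fisher replicate: a single copy of a beneficial
    allele (advantage s) either fixes (True) or is lost (False)."""
    i = 1  # copies of the beneficial allele
    while 0 < i < N:
        # selection-weighted probability that an offspring carries the allele
        p = i * (1 + s) / (i * (1 + s) + (N - i))
        i = sum(1 for _ in range(N) if rng.random() < p)
    return i == N

rng = random.Random(42)
runs = 3000
rate = sum(fixes(N=200, s=0.01, rng=rng) for _ in range(runs)) / runs
print(rate)  # typically close to 2s = 0.02
```

    Most new beneficial mutations are simply lost to drift; only roughly 2s of the replicates fix, which is why the simulated rate hovers near 2% rather than 100%.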

    Zachriel: but random walk is not the primary mechanism of evolutionary adaptation.

    gpuccio: The only other “mechanism” is NS, which is absent in your “evolutionary algorithms”.

    Evolutionary algorithms do not act like random walks, so you are obviously missing something.

    gpuccio: The fact that some walks are interrupted by {} NS does not restrict the search space:

    Zachriel: Of course it does!

    gpuccio: No. The search space remains the same. Random events can reach any part of the search space. Single mutation random events can enter any random walk, but some walks will be interrupted by negative NS.

    Or positive selection. In any case, we can easily determine whether evolutionary algorithms work like random walks. They don’t. You really need to come to grips with this.

    gpuccio: The natural landscape has order like any other landscape can have. It is not specially suitable for replicators.

    Actually, it is. The vast majority of possible landscapes are chaotic. (Do you know what this means?) But the natural fitness landscape has at least two important features making it suitable for evolutionary exploration: locality and directionality. (Do you know what this means?)
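
    The two features named here can be illustrated with a toy (1+1) hill climber over bitstrings (hypothetical landscapes invented for this thread, not models of real proteins): on an ordered landscape with locality and directionality, selection walks straight to the optimum; on an uncorrelated (chaotic) landscape sharing the same global optimum, the same algorithm never finds it.

```python
import random

LENGTH = 40

def hill_climb(fitness, budget=5000, seed=0):
    """(1+1) hill climber: flip one random bit per step and keep the
    mutant only if its fitness is at least the current best."""
    rng = random.Random(seed)
    genome = [rng.randint(0, 1) for _ in range(LENGTH)]
    best = fitness(genome)
    for _ in range(budget):
        i = rng.randrange(LENGTH)
        genome[i] ^= 1
        f = fitness(genome)
        if f >= best:
            best = f
        else:
            genome[i] ^= 1  # revert the deleterious mutation
    return best

def ordered(genome):
    """Locality and directionality: similar genotypes have similar
    fitness, and fitness rises steadily toward the all-ones optimum."""
    return sum(genome)

def chaotic(genome):
    """Same global optimum, but uncorrelated everywhere else: each
    genotype gets an independent pseudo-random fitness, so neighbours
    carry no information about where the peak lies."""
    if all(genome):
        return LENGTH
    return random.Random("".join(map(str, genome))).random() * (LENGTH - 1)

print(hill_climb(ordered))  # 40: selection climbs straight to the peak
print(hill_climb(chaotic))  # below 40: the climber strands on a local peak
```

    The selection rule is identical in both runs; only the structure of the landscape differs, which is the point about why most conceivable landscapes are not amenable to evolutionary exploration while ordered ones are.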

    gpuccio: If you don’t like my experiment, we can just conclude that no simulation is good to model NS, and stick to lab data.

    Even if you showed that your viral evolution won’t work in a computer OS, all you’ve shown is that the single environment isn’t suitable. In fact, the vast majority of fitness landscapes are not amenable to evolutionary exploration. We already know that!

  171. 173 Zachriel December 11, 2010 at 3:04 pm

    Zachriel: It’s the equivalent of having a library of all functional proteins.

    gpuccio: With their ordered sequences. Which the environment definitely does not have.

    That’s right. The feedback is only on whether it is functional, whether it is one of the sequences that work. That’s how word evolution works.

    gpuccio: And however, a very long word, unrelated to other words, would be extremely difficult to find (or empirically impossible, if long enough), even in that model, unless you select individual letters, and not only whole words.

    That actually raises an interesting question. (Let’s assume you understand the word evolution algorithm, e.g. it doesn’t select by locking letters.)

    If we formed a dictionary of 70,000 random sequences of varying length, would an evolutionary algorithm be adept at finding longer sequences? If we have a dictionary of 70,000 English words, some with German roots, some with Latin roots, some from other languages, would an evolutionary algorithm be adept at finding long sequences?
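
    The whole-word selection rule under discussion can be sketched as follows (hypothetical five-word mini-dictionary; not Zachriel’s actual program): variation proposes single-letter insertions, deletions, and substitutions, and a variant survives only if the entire resulting string is a dictionary word; no individual letters are ever locked.

```python
from collections import deque
import string

# Hypothetical mini-dictionary for the sketch.
DICTIONARY = {"cat", "cart", "card", "chart", "charts"}

def neighbours(word):
    """All single-letter substitutions, insertions, and deletions."""
    out = set()
    for i in range(len(word)):
        out.add(word[:i] + word[i + 1:])              # deletion
        for c in string.ascii_lowercase:
            out.add(word[:i] + c + word[i + 1:])      # substitution
    for i in range(len(word) + 1):
        for c in string.ascii_lowercase:
            out.add(word[:i] + c + word[i:])          # insertion
    return out

def reachable(start, dictionary):
    """Breadth-first search over viable (whole-word) mutants only."""
    seen, queue = {start}, deque([start])
    while queue:
        w = queue.popleft()
        for n in neighbours(w) & dictionary:
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return seen

print(sorted(reachable("cat", DICTIONARY), key=len))
```

    Every intermediate on the path from “cat” to “charts” is itself a viable word, which is the sense in which longer words remain reachable even though selection acts only on whole words.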

  172. 174 Zachriel December 11, 2010 at 3:17 pm

    gpuccio: We can talk of everything. And we do. So, show me a fitness landscape which has the same formal properties of NS (being passive in allowing the best replicators to better replicate).

    Sigh. A fitness landscape is *defined* as the relationship between genotype and reproductive success. Whether a particular fitness landscape is representative of some facet of biology is another question.

    gpuccio: You commented on the first paragraph, and on the last, and smartly (and quite unfairly) “avoided” the middle part.

    gpuccio: For instance, a very long English word, of non-Latin origin, would be extremely difficult to find, if you use an Italian dictionary, even if that single word were included in the dictionary.

    Our last comment addresses your question in a much more general fashion. But you indicated some confusion when you suggested the algorithm selected individual letters. It does not. Another problem is that you suggested that evolution was searching for a specific word, rather than simply complex words in general. Word evolution, like most evolutionary algorithms, will have different results in different runs. Evolution is a contingent process. As for related words, well, the English dictionary includes words from a great variety of languages, including Latin and Germanic languages.

  173. 175 Petrushka December 11, 2010 at 10:05 pm

    which has the same formal properties as NS (being passive in allowing the best replicators to better replicate)

    ____________________________

    This seems like a rather off definition of passive. Ecosystems do not allow unhindered replication. Nor is there any ecosystem that can be defined by a single parameter or single dimension.

  174. 176 Petrushka December 11, 2010 at 10:46 pm

    But you indicated some confusion when you suggested the algorithm selected individual letters. It does not. Another problem is that you suggested that evolution was searching for a specific word, rather than simply complex words in general.

    ____________________________

    This is the mistake Dembski made for ten years, despite numerous efforts to explain it to him.

  175. 177 Petrushka December 12, 2010 at 2:04 am

    I know you are not impressed by GAs, but it is possible to make a GA that neither searches for anything in particular nor uses a dictionary to select complete words.

    It is possible to score strings by their similarity to a language rather than by whether they match a word.

    Such an algorithm does build complete words. Not only that, it builds strings that look and sound like words, but aren’t in a dictionary.

    http://itatsi.com

  176. 178 Petrushka December 12, 2010 at 2:20 am

    It’s quite simple to make a fitness oracle that knows nothing about words, but knows about the relative frequency of letters at various positions.

    Such an algorithm can not only make words, but can make strings of letters that look and sound like words.

    Using frequency tables from different languages you can demonstrate the effects of relative sparseness.

    http://itatsi.com/

    What this demonstrates is that the source or cause of variation is irrelevant.

    What’s left of the ID argument is the assertion that the natural world does not support incremental change.
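
    Petrushka’s frequency-table oracle can be sketched roughly like this (hypothetical toy corpus and bigram scoring; not the itatsi.com implementation): the fitness function knows nothing about whole words, only how common letter pairs are, yet hill climbing under it steadily makes a random string more word-like.

```python
import random
from collections import Counter

CORPUS = "the quick brown fox jumps over the lazy dog and then runs away home"

# Bigram frequency table learned from the corpus: the oracle's only knowledge.
BIGRAMS = Counter(w[i:i + 2] for w in CORPUS.split() for i in range(len(w) - 1))

def wordlikeness(s):
    """Score a string by how common its letter pairs are in the corpus."""
    return sum(BIGRAMS[s[i:i + 2]] for i in range(len(s) - 1))

rng = random.Random(7)
s = "".join(rng.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(6))
start_score = wordlikeness(s)
for _ in range(2000):
    i = rng.randrange(len(s))
    candidate = s[:i] + rng.choice("abcdefghijklmnopqrstuvwxyz") + s[i + 1:]
    if wordlikeness(candidate) >= wordlikeness(s):  # keep non-worse mutants
        s = candidate

print(s, wordlikeness(s))  # a more word-like string than the random start
```

    The oracle never consults a dictionary, which is the point: the selection signal is statistical structure, not a lookup of known words, and the source of the variation is irrelevant.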

  177. 179 gpuccio December 12, 2010 at 9:45 am

    Zachriel:

    You go on misrepresenting, misinterpreting, and creating confusion. I really would not expect that from an intelligent person, which you obviously are.

    So, instead of answering every single misleading comment, I will try to reaffirm some very simple facts:

    1) I have never said that EAs behave as random walks. I have never said that they cannot find some complex results. I have never criticized the potentialities of EAs. Please, stop discussing these false points, because I have never made them. I think one at least deserves to be answered on what one said, not on what one never said.

    2) I have, definitely, stated that EAs have nothing to do with the neo-darwinian model, because they definitely are not based on, nor can they simulate, NS. All EAs are based on some form of intelligent selection, not on NS.

    3) I have also detailed the difference between IS and NS: NS can only act on functions which directly increase the reproductive rate in some environment. IS can act potentially on any defined function.

    4) Any formal property of an environment, including locality and directionality, can be included in a programmed computer environment, blindly. So, you can test, by my model, any kind of formal computer environment you like, provided it is not conceived specifically to help some specific kind of replicators. The blindness of the programming guarantees that, but does not prevent specific generic properties, like locality and directionality, from being included in the system.

    5) Regarding the word model, I understand very well the difference between selecting for whole words or for single letters, thank you. I understand that so well, that I distinguished the two scenarios in my discussion. Regarding your questions, I have already answered them, so I can only paste again what I said, hoping that you don’t ask again the same things:

    “And however, a very long word, unrelated to other words, would be extremely difficult to find (or empirically impossible, if long enough), even in that model, unless you select individual letters, and not only whole words.”

    Is that clear enough?

    6) Regarding random walks, what I did state (and that had nothing to do with EA), is that in the neo-darwinian model the RV part is a random walk. Clear?

    I will write it again, hoping that this time I may be understood correctly: in the neo-darwinian model the RV part is a random walk.

    To be even more clear, let’s consider some examples. I will consider two different scenarios, both compatible with a standard neo-darwinian model:

    a) A protein A “evolves” to protein B through random variation. Protein B is the first new positively selectable step. I will not define here how complex the transition is in this example.
    Protein “A” is a functional protein, and is actively transcribed and translated.
    RV can act on protein A in two ways:

    a1:

    single step mutations (non positively selectable by definition, given the assumptions of the example). So, each single mutation starts a random walk. Negative selection can still happen, so RWs which cause a significant loss of function will not survive, and other random walks will continue to take place. The final result is that RV can only change protein A in its island of functionality, so protein B will have more or less the same function as A. This is the “big bang theory of protein evolution” in brief. I really can’t see how that can help the neo-darwinian model. Obviously, many mutations will be neutral or only slightly detrimental, and in those cases the random walk can go on. It is also possible that the loss of primary function is not so relevant as to give a negatively selectable trait. In that case, also, the random walk can go on. But anyway, whatever happens in the RV part “is” a random walk.

    a2:

    other kinds of variation, involving many AAs in a single event (frameshift mutations, inversions, deletions, and so on). This is a random walk that, by definition, can “jump” to any place in the search space, even with one single event.

    b) The RV operates on a duplicate, non-functional gene. Here RV can work in complete freedom, unconditioned by NS (at least until some functionally, naturally selectable result is reached, and in some lucky way it is transcribed, translated, immediately integrated in the existing system, and therefore naturally selected). Therefore, protein A can vary in any possible way. It is a random walk, an unconditioned random walk.

    QED.

  178. 180 Petrushka December 12, 2010 at 11:20 am

    Something overlooked in this analysis is that a population (even a population of mammals) can test billions of simultaneous variations in each generation. In the case of microbes, a generation may require an hour or less.

    A possible analogy might be the spreading of seeds, most of which will fall on sterile ground. This analogy is particularly apt in sexually reproducing organisms, in which billions of sperm cells may be produced for each successful fertilization.

  179. 181 gpuccio December 12, 2010 at 12:42 pm

    Petrushka:

    The number of mutations in a population has never been “overlooked” in ID. It is an integral part of the computation of probabilistic resources, and of the determination of appropriate thresholds for dFSCI.

    I am amazed that you “overlook” this obvious fact. Why do you believe that IDists have proposed such high thresholds for design detection? (be it Dembski’s UPB of about 500 bits, or my biological probability bound of 150 bits). That demonstrates how little you understand ID theory.

  180. 182 gpuccio December 12, 2010 at 12:45 pm

    Petrushka:

    By the way, just to be precise, the “billions of sperm cells produced for each successful fertilization” don’t count as probabilistic resources. Only the actual fertilizations count.

    That’s why bacteria have huge probabilistic resources compared to mammals, both in terms of population numbers and of reproduction rate. That was one of my points with Zachriel, which he has brilliantly “overlooked”.

  181. 183 Zachriel December 12, 2010 at 4:40 pm

    gpuccio: So, instead of answering every single misleading comment, I will try to reaffirm some very simple facts:

    That’s often a good way to clarify a position and reduce confusion.

    gpuccio: 1) I have never said that EA behave as random walks.

    You have said things like this:

    If protein A becomes protein B through, say, 5 mutations, each of those mutations in itself can be neutral, or negative, or positive, just like any other possible mutation. Therefore, the 5 AAs change is a random walk, exactly like all other 5 AAs changes from the starting state.

    If the protein is subject to selection, it may never explore that pathway, or may explore it more quickly. Evolution does not act like a random walk.

    gpuccio: The neo-darwinian model (which is the only thing I am debating here) is RV + NS. RV is a random search.

    Random variation does not resemble a random walk when under selection. For instance, it may never move far from a fitness peak.

    gpuccio: As I have debated many times, if the starting point is unrelated to the final point (as in basic protein superfamilies), that is a random search.

    This again conflates evolution with a random search. A system under selection will not resemble a random walk. That’s because, unlike a random walk, there are many areas of the fitness landscape which will never be explored by evolution.

    gpuccio: The RV part in the neo darwinian model “is” a random search, provided it starts from an unrelated state.

    An evolving system which includes selection will not act like a random walk. You want to separate random variation from selection, but evolution certainly includes selection, especially in the context of adaptation.
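    Zachriel's point here is easy to illustrate with a toy sequence model (my own sketch, assuming a simple one-peak fitness function, not anyone's published code): starting at a fitness peak, unconstrained variation wanders away, while even crude purifying selection pins the sequence to the peak, so large parts of the space are never visited.

```python
import random

random.seed(0)
L = 20                    # a binary "sequence" of 20 sites
PEAK = [1] * L            # toy fitness = number of sites matching the peak

def fitness(seq):
    return sum(seq)

def mutate(seq):
    s = seq[:]
    s[random.randrange(L)] ^= 1   # flip one random site
    return s

def run(selected, steps=2001):
    """Accept every mutation (pure random walk), or accept only
    non-deleterious mutations (crude purifying selection)."""
    seq = PEAK[:]
    for _ in range(steps):
        cand = mutate(seq)
        if not selected or fitness(cand) >= fitness(seq):
            seq = cand
    return fitness(seq)

drift = run(selected=False)   # wanders toward fitness ~ L/2
kept = run(selected=True)     # never leaves the peak
```

    The two trajectories diverge immediately, which is the sense in which a system under selection does not resemble a random walk, even though the variation itself is random.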

  182. 184 Zachriel December 12, 2010 at 4:44 pm

    gpuccio: 2) I have, definitely, stated that EA have nothing to do with the neo-darwinian model, because they definitely are not based, nor can simulate, NS. All EA are base on some form of intelligent selection, not on NS.

    If we have a variant in a population that on average produces 10% more fertile offspring, then there is a 20% chance of fixation. We can model this with an evolutionary algorithm, and test it against wild species. Consequently, we have a valid model of natural selection. You seem to be confusing natural selection with the source of variation. Natural selection is relatively easy to model, and there are a variety of mathematical models, as well.

    gpuccio: 3) I have also detailed the difference between IS and NS: NS can only act on functions which directly increase the reproductive rate in some environment. IS can act potentially on any defined function.

    Yes, natural selection increases the rate of reproductive success in a given environment. A fitness landscape is a mathematically equivalent model of this relationship. But if that doesn’t suit you, you can model some aspects of the physical environment with replicators competing for access to limited, local resources.

    Of course, we can directly observe evolution by natural selection in the wild, so we can test any model against those observations.

  183. 185 Zachriel December 12, 2010 at 4:49 pm

    gpuccio: And however, a very long word, unrelated to other words, would be extremely difficult to find (or empirically impossible, if long enough), even in that model, unless you select individual letters, and not only whole words.

    “Unrelated” is too vague. You suggested seeding a single English word of non-Latin origin in an Italian dictionary. We provided you a substantive reply.

    A) Evolution doesn’t search for any specific target. Different runs will likely result in different results.
    B) English includes words from a great variety of languages already. So are you saying that starting from a Latinate word, you will never find a Germanic word?
    C) If we formed a dictionary of 70,000 random sequences of varying length, would an evolutionary algorithm be adept at finding longer sequences? If we have a dictionary of 70,000 English words, some with German roots, some with Latin roots, some from other languages, would an evolutionary algorithm be adept at finding long sequences?

    The last will perhaps help you understand how evolutionary algorithms can inform this discussion, and reveal your confusion.
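    Question (C) above can be made concrete with a toy sketch (a five-word dictionary of my own choosing, not Zachriel's actual program): longer words are reachable from shorter ones only through chains of intermediates that are themselves words, and a search restricted to viable words exploits exactly that structure.

```python
# Toy dictionary (a stand-in for the 70,000-word case discussed above).
WORDS = {"cat", "cot", "coat", "goat", "groat"}
ALPHA = "abcdefghijklmnopqrstuvwxyz"

def mutants(w):
    """All single-letter substitutions and insertions of w."""
    subs = {w[:i] + c + w[i + 1:] for i in range(len(w)) for c in ALPHA}
    ins = {w[:i] + c + w[i:] for i in range(len(w) + 1) for c in ALPHA}
    return (subs | ins) - {w}

def reachable(start):
    """Breadth-first search restricted to viable (dictionary) words."""
    seen, frontier = {start}, [start]
    while frontier:
        w = frontier.pop()
        for cand in mutants(w) & WORDS:
            if cand not in seen:
                seen.add(cand)
                frontier.append(cand)
    return seen
```

    Here `reachable("cat")` returns all five words (cat → coat → goat → groat is one viable path of single-letter steps). With a dictionary of random unrelated strings of the same sizes, the same search would typically stall at the start word, which is the contrast question (C) asks about.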

  184. 186 Petrushka December 12, 2010 at 5:11 pm

    I am amazed that you “overlook” this obvious fact.
    __________________

    Were you amazed that Behe was caught at the Dover trial for not having done this bit of arithmetic? Are you amazed that the small population of bacteria in the Lenski experiment were able to test the entire search space?

    In any given organism, the search space is limited to what can be reached by small changes. If you can point me to someone in the ID movement who does probability calculations this way, I’d like to see it.

    I suppose Behe comes the closest, but his notion of cumulative probabilities seems to include the gambler’s fallacy — the notion that the probability of a third or fourth selection event is influenced by history.

    Sort of like assuming that rolling three sixes in a row influences the probability of rolling a six on the next throw.

    The competition to fertilize an egg is most certainly a form of selection. Why do you suppose the competition exists?

  185. 187 Zachriel December 12, 2010 at 5:17 pm

    Starting at the end.

    gpuccio: QED

    Quod erat demonstrandum suggests a definitive proof. In this case, it would require having exhausted all possible scenarios, something you haven’t done. For instance, you ignored the role of recombination, something that is believed to be important for the origin of protein domains. You also ignored the fact that, even if domains are separated in sequence space, many folds can be found in random sequences, so their frequency is high enough to be found by entirely stochastic means.

    gpuccio: a1: single step mutations (non positively selectable by definition, given the assumptions of the example). So, each single mutation is starting a random walk. Negative selection can still happen, so RWs which cause a significant loss of function will not survive, and other random walks will continue to take place. The final result is that RV can only change protein A in its island of functionality, so protein B will have more or less the same function as A. This is the “big bang theory of protein evolution” in brief.

    The big bang theory of protein evolution suggests that there are only a limited number of ways of packing folded proteins. Hence, exploring all of them is simple in evolutionary terms.

  186. 188 Zachriel December 12, 2010 at 5:24 pm

    gpuccio: That’s why bacteria have huge probabilistic resources compared to mammals, both in terms of population numbers and of reproduction rate. That was one of my points with Zachriel, which he has brilliantly “overlooked”.

    We responded to that point, but it doesn’t seem to matter when we do.


    gpuccio: What about mammals?

    Zachriel: Rare events are rare in mammals, too. But the Theory of Evolution doesn’t depend on implausibly improbable events, but on selection from among natural variations.

    There are actually two conversations going on. One concerning orthodox evolution, and one concerning the origin of protein domains. It is well-established, from a variety of evidence, that complex and irreducible adaptations can evolve. We have evidence that proteins have an evolutionary history, though perhaps not universal common ancestry. Because this sort of evolution is particulate, it does involve its own unique problems, but it is impossible to explore these questions when you reject what is already known about evolution.

  187. 189 Petrushka December 12, 2010 at 6:53 pm

    There are actually two conversations going on. One concerning orthodox evolution, and one concerning the origin of protein domains.
    _____________________________

    Two conversations, but only one argument, that from irreducible complexity, gaps, and probabilities.

    I think GP is clever enough to realize that all this hinges on the ruggedness of the landscape.

    Good luck with that one.

  188. 190 gpuccio December 12, 2010 at 8:11 pm

    Zachriel:

    “If protein A becomes protein B through, say, 5 mutations, each of those mutations in itself can be neutral, or negative, or positive, just like any other possible mutation. Therefore, the 5 AAs change is a random walk, exactly like all other 5 AAs changes form the starting state.”

    I was speaking of the neo darwinian model, not of EAs.

    “Consequently, we have a valid model of natural selection. ”

    No, of selection. In NS, the function must increase the spontaneous replicating ability of the replicators, without being actively recognized and rewarded. It’s a simple point, isn’t it? And, for the model to say something, new complex functions must arise through that “source of variation”. As I have already said, the concept of NS is tied to the concept of “type” of variation: the RV must provide functions directly related to the replicating power.

    ” A fitness landscape is a mathematically equivalent model of this relationship.”

    No, it isn’t. That is the point. You wish it were, but it isn’t. It is IS.

    “But if that doesn’t suit you, you can model some aspects of the physical environment with replicators competing for access to limited, local resources.

    Of course, we can directly observe evolution by natural selection in the wild, so we can test any model against those observations.”

    Darwinists should do that. The model is theirs, not mine. I know it is wrong. The burden of proof is on them.

    And we observe only simple variation in the wild, never new complex functions. As already discussed.

    ““Unrelated””

    Less than 10% homology in a 150 letter word. Just to stay similar to protein domains. There are 6000 of them with that property.

    Please, answer this.

  189. 191 Petrushka December 12, 2010 at 9:12 pm

    In NS, the function must increase the spontaneous replicating ability of the replicators. Without being actively recognized and rewarded. It’s a simple point, isn’t it?
    ___________________________

    It would seem like a simple concept, but it makes no sense. Natural selection is simply a description of the fact that some variants have greater reproductive success.

    What do you mean by spontaneous replicating ability?

    The most successful organisms on earth are and always have been single celled. There is no imperative toward increased complexity.

  190. 192 Zachriel December 12, 2010 at 9:56 pm

    Petrushka: It would seem like a simple concept, but it makes no sense.

    We’re having the same troubles parsing gpuccio’s claims.

    gpuccio: In NS, the function must increase the spontaneous replicating ability of the replicators. Without being actively recognized and rewarded.

    Consider a simple example, antibiotic resistance in bacteria. A mutation creates resistance. Then, in the presence of antibiotics, the trait spreads in the population consistent with models of natural selection and population genetics. We can create a fitness landscape representing the relationship between the trait and reproductive fitness, and then model the entire process with an evolutionary algorithm. Why? Well, we might want to understand what happens when the antibiotic concentration varies in order to predict whether resistance becomes fixed.

    So, we have a model of random mutation. We have a fitness landscape. We have a model of natural selection. We have an evolutionary algorithm. We have a predictive model.

  191. 193 MathGrrl December 12, 2010 at 10:58 pm

    gpuccio,

    I’m glad this thread is still active because I’ve finally gotten some time to refresh my memory on Tierra and ev. I’ll post more on that later tonight or tomorrow, but Petrushka has raised an essential point:

    Natural selection is simply a description of the fact that some variants have greater reproductive success.

    Natural selection is a result, not a process in and of itself. When you have imperfect replicators, inheritance, and competition for resources in an appropriate environment (such as the real world we observe), you’ll see differential reproductive success. That’s another phrase for natural selection.

    GAs and other evolutionary algorithms attempt to model what we observe. They don’t build in natural selection, they observe differential reproductive success and compare what they observe with observations of the real world in an attempt to better understand reality.

    ev is an excellent example of this. More later.

  192. 194 MathGrrl December 13, 2010 at 11:20 am

    gpuccio,

    Getting back to Tierra, when I asked what metric you were using to measure complexity, you responded

    dFSCI, obviously.

    In Tierra, organisms are modeled as programs written in one of the Tierra instruction sets. The initial replicator that Ray seeded his environment with was 80 bytes long. The system soon produced a parasite that was 45 bytes long. Tierra also evolved a full replicator of only 22 bytes. Numerous other working programs of various lengths appear in any run.

    Kevin Kelley has a book chapter online that describes the rich output of Tierra (it’s only two pages or so). I’m very curious to hear your thoughts on what he documented.

    My question for you is how, exactly, would we calculate dFSCI for these evolved programs?

  193. 195 MathGrrl December 13, 2010 at 11:37 am

    gpuccio,

    My point is simply that no evolutionary algorithm has anything to do with NS.

    That’s a strange claim when I’ve pointed out Tierra and ev in this very thread. Both model aspects of the real world and of observed evolutionary mechanisms and both result in differential reproductive success, just as we observe in the real world.

    Schneider’s ev is particularly interesting with respect to this discussion. Again, I urge you to read both his PhD thesis and the ev paper. ev is a model of the evolution of protein binding sites that Schneider was studying for his doctorate. He found that the Shannon information of the binding site evolved to be equal to the information required to identify the site in the genome, both in the real world and in ev.

    Schneider even directly addresses Dembski’s CSI and shows that ev does generate information solely through simple evolutionary mechanisms.

    I look forward to your comments on ev.

  194. 196 Petrushka December 13, 2010 at 2:36 pm

    Here’s a nice discussion of how selection works mathematically.

    http://www-lmmb.ncifcrf.gov/~toms/paper/ev/AND-multiplication-error.html

    ___________________

    To model what happens in natural biological systems, consider flipping all 10 coins at once. Initially there will be about 5 heads and 5 tails. We paste these to an index card. We then make 100 copies of the card, including the states of the 10 coins. While we make the copies of the coin states, we sometimes make an error, changing a head to a tail or a tail to a head. We then find the card that has the most coins with heads up and we throw away all the other cards. So if even one card has an extra head, it will be found. We reproduce that card 100 times (with errors) and repeat the selection. Suppose that we make an error in copying a coin state about 1 time in 100. Then almost every other generation we will get another head. Starting from about 50% heads, it will only take 10 generations to get a card with all heads. That is what happens in nature. Notice that we have wasted a lot of cards, coins and glue to get the all-head card – about 1000 sets! – but the result comes quickly.

    _______________________

    If you are going to attack evolution via mathematical abstractions — and “information” is the ultimate abstraction — you need to model the relevant events. I think Schneider has elegantly shown what is wrong with Dembski’s model. He ignores amplification.
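    Schneider's coin-and-card illustration quoted above is easy to run directly. This is a minimal sketch of the quoted procedure with the same parameters (10 coins, 100 copies per generation, roughly 1-in-100 copying error), not Schneider's own code:

```python
import random

random.seed(42)

N_COINS, N_COPIES, ERR = 10, 100, 0.01

def copy_with_errors(card):
    # Each coin state is miscopied (flipped) with probability ERR.
    return [c ^ (random.random() < ERR) for c in card]

card = [1, 0] * (N_COINS // 2)   # start at 5 heads (1), 5 tails (0)
generations = 0
while sum(card) < N_COINS:
    copies = [copy_with_errors(card) for _ in range(N_COPIES)]
    card = max(copies, key=sum)  # keep only the best card, discard the rest
    generations += 1
```

    Runs reach all heads within a few dozen generations, in line with the quoted estimate; the driving step is the amplification (copying only the best card), not the error rate.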

  195. 197 gpuccio December 14, 2010 at 11:35 am

    Petrushka:

    “Natural selection is simply a description of the fact that some variants have greater reproductive success.”

    So, if we want to test NS, we have to observe variants that have greater reproductive success. For real. Even non biological replicators, in a computer environment, have to be able to replicate better, if we want to affirm that NS can act on them.

    “What do you mean by spontaneous replicating ability?”

    That the replicator is not actively rewarded by some programmed function in the environment, but achieves greater replicative success through its own properties. What is so difficult about this concept, that all of you intelligent people cannot grasp it?

    “The most successful organisms on earth are and always have been single celled. There is no imperative toward increased complexity.”

    That’s exactly my point. Because the real mechanism that generates functional complexity is not RV + NS (which would have left everything at prokaryotes, the most successful replicators of all times), but design, and the designer’s need to express higher functions.

  196. 198 gpuccio December 14, 2010 at 11:40 am

    Mathgrrl:

    I am going to look at your references, and I hope I can answer soon.

  197. 199 gpuccio December 14, 2010 at 11:49 am

    Petrushka:

    Your example is obviously an example of intelligent selection. The “heads” are recognized and amplified by the system.

    I will not speak for Dembski (I never do), but I can certainly say that I have never ignored amplification.

    My point, in this discussion, has always been clear. If amplification can be achieved at each step, and if each step is not complex, and if the selectable events are steps to a complex function, then the darwinian mechanism can work.

    Is that clear? I have stated that many times, both here and at UD. Why cannot you apparently read or remember what I say?

    My point is that, if the three “ifs” are not true, then the darwinian mechanism cannot work at all.

    And my point is that the three “ifs” are not true.

    It is not true that naturally selectable functions arise all the time: they are extremely rare, and usually require a strongly selective environment to be amplified. Moreover, they are always simple, and they are not steps to more complex functions.

    That’s all we can say from the facts. There is no logical reason to believe differently. And there is no empirical evidence to believe differently.

    Therefore, for all we can know at present, the darwinian model cannot work at all.

  198. 200 Petrushka December 14, 2010 at 12:09 pm

    and they are not steps to more complex functions.

    ++++++++++++++++++++++++++++++++

    You have some detailed knowledge of every step ever taken in the course of the history of the earth?

    Just asking. You assert this with such certainty.

    I’d like to see you list the steps for even one function.

  199. 201 gpuccio December 14, 2010 at 12:24 pm

    Mathgrrl:

    I am reading the chapter about Tierra. I would throw here a couple of thoughts just to start the discussion, although I believe that to really understand many points we really need more detailed information on the code of the system, the replicators and the evolved variants.

    My first point is that Tierra, although in principle more similar to my suggested model than other evolutionary algorithms, is still a “virtual computer”, and the intervention of the programmer can weigh heavily on the results. For instance, in the “chapter” it is mentioned that the original system attributed a “prize” for shorter size, and that was changed in the following version to test a more “size neutral” environment. Whatever the possible consequences of these implementations on the results, that means that the system is not really testing natural selection, but an artificial fitness function. That means, too, that only a very careful, and objective, analysis of the system itself can reveal possible biases in the choices of the programmer.

    A second point is that the “evolution” described seems to be, in most cases, an optimization for shorter sizes. Moreover, in the few cases where the variation was described in the article, it was of one bit, if I am correct.

    To quantify dFSCI in the variants, we must know the code, how the code works, and the true bit variation which enables a new “function”.

  200. 202 Zachriel December 14, 2010 at 1:01 pm

    gpuccio: It is not true that naturally selectable functions arise all the time: they are extremely rare, and usually require a strongly selective environment to be amplified.

    This is demonstrably false. We can show this in vitro, in vivo, and in silico.

    If 1/(4Ne) << |s| << 1, then the probability of fixation is approximately 2s, where s is the selection coefficient and Ne is the effective population size. In other words, if a variant produces 1% more fertile offspring on average, then there is about a 2% probability of fixation. This is simple to test by varying the amount of antibiotics in a population of resistant and non-resistant strains of bacteria.

    As for “rare”, well, beneficial mutations are common enough to constitute a regular experiment in college biology courses.
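    The 2s rule cited above can be checked numerically. This is a minimal Wright-Fisher sketch of my own (not a model of any particular experiment); for s = 0.05 the diffusion prediction is roughly 2s = 0.1, and the more exact value 1 − e^(−2s) is about 0.095:

```python
import numpy as np

rng = np.random.default_rng(0)
N, s, reps = 1000, 0.05, 2000   # population size, selection coefficient, trials

fixed = 0
for _ in range(reps):
    i = 1  # start with a single copy of the beneficial variant
    while 0 < i < N:
        # Selection biases the sampling probability toward the variant.
        p = i * (1 + s) / (i * (1 + s) + (N - i))
        i = rng.binomial(N, p)
    fixed += (i == N)

p_fix = fixed / reps  # lands near 2s = 0.1; most trials lose the variant
```

    Note that 1/(4N) = 0.00025 << s = 0.05 << 1, so the regime of the quoted approximation holds; most runs end in loss, which is why the fixation probability is small even for a strongly beneficial variant.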

  201. 203 MathGrrl December 14, 2010 at 1:02 pm

    gpuccio,

    To quantify dFSCI in the variants, we must know the code, how the code works, and the true bit variation which enables a new “function”.

    The code is basically a string of 5 bit numbers, each representing a microcode instruction. The original set of 32 instructions is documented on the site.

    I’m not sure what you mean by “the true bit variation”. Bits are occasionally flipped to model mutation and the mutation rate is configurable.

    Given that, how do we start to calculate dFSCI? What’s the first step?

  202. 204 Zachriel December 14, 2010 at 1:05 pm

    By the way, given such variants in a population, it is easy to model the course of those variants with an evolutionary algorithm. A given variant may fix, or go extinct, reach a balance, or fluctuate. We certainly can model evolution, and then compare our models to biological observations.

  203. 205 gpuccio December 14, 2010 at 1:17 pm

    MathGrrl:

    You have to know the code of the original replicator, the code of the new functional variant, and analyze how many bits had to change to confer the new functional state.

  204. 206 gpuccio December 14, 2010 at 1:20 pm

    Zachriel:

    What is false? That they are rare? They are.

    Obviously, one-AA mutations are not too rare, comparatively, and in a strongly selective environment, like a bacterial culture under antibiotic pressure, we can observe them easily. I suppose we agree on that.

    More complex mutations, of 2 or more AAs, are exponentially more rare.

  205. 207 gpuccio December 14, 2010 at 1:22 pm

    Petrushka:

    “I’d like to see you list the steps for even one function.”

    Again. I’d like you to do that.

    The burden of proof is on those who propose the model. As I have said, there is no reason, neither logical nor empirical, to believe that complex functions arise as the sum of naturally selectable simple steps. You can believe that by faith, but I don’t.

  206. 208 Petrushka December 14, 2010 at 1:47 pm

    The burden of proof is on those who propose the model. As I have said, there is no reason, neither logical nor empirical, to believe that complex functions arise as the sum of naturally selectable simple steps. You can believe that by faith, but I don’t.
    ++++++++++++++++++++++++++++++++++++

    We don’t know where Pluto was a hundred years ago, but we extrapolate from regular, observable processes. ID has invented an unobservable actor, much as Newton invented sky fairies to account for the stability of orbits.

    After a while, science decides it is more productive to fill gaps with regular processes than to posit invisible actors.

    Faith in the regularity of nature is confidence based on long experience.

    Faith in fairies is a holdover from animistic religion.

  207. 209 Petrushka December 14, 2010 at 2:18 pm

    http://www.lehigh.edu/~inbios/pdf/Behe/QRB_paper.pdf

    Behe’s latest paper. Of 77 adaptive mutations listed in the paper approximately one in ten are categorized as a gain in function rather than a loss or modification of existing function.

    One in ten may seem a small percentage, but it’s not zero, and it applies to observed mutations in controlled laboratory experiments over the last twenty years or so.

  208. 210 MathGrrl December 14, 2010 at 2:23 pm

    gpuccio,

    You have to know the code of the original replicator, the code of the new functional variant, and analyze how many bits had to change to confer the new functional state.

    All variants leading to any live replicator are functional, else they would be reaped and wouldn’t leave progeny.

    The code for the original replicator is available in the Tierra documentation. It consists of 80 instructions. At 5 bits per instruction, that’s 400 bits. The shortest replicator that evolved was 22 instructions long, for a total of 110 bits. (We could discuss the 45 instruction parasites, if you prefer.)

    There are a couple of obvious ways to determine how many bits had to change. The first is to determine how many generations it took before the 22 instruction variant was observed. The second is to compare the two programs and identify how much, if any, of the code in the evolved program is homologous with that of the ancestor.

    Let’s assume we’ve done the calculation and found that it required lowercase_delta (I’m MathGrrl, I like my Greek letters) changes to the original replicator, some of which altered the code, others that only changed the length of the program. How do we calculate dFSCI in terms of lowercase_delta?

  209. 211 Zachriel December 14, 2010 at 2:27 pm

    gpuccio: What is false? That they are rare?

    We responded to two claims. One concerned the rarity of beneficial mutations. As pointed out, they are not so rare that they are not an everyday occurrence. The other point concerned your false contention that beneficial traits “require a strongly selective environment to be amplified.”

    gpuccio: My point, in this discussion, has always been clear. If amplification can be achieved at each step, and if each step is not complex, and if the selectable events are steps to a complex function, then the darwinian mechanism can work…

    My point is that, if the three “ifs” are not true, then the darwinian mechanism cannot work at all.

    1. amplification
    2. incremental
    3. pathway

    Yes, that’s the basics of evolutionary theory (Darwin 1859), with appropriate caveats for the particulate nature of genetics.

    1. All healthy organisms are capable of producing more than replacement levels of offspring. That leads to competition and amplification of beneficial traits.

    2 & 3. We have many examples of incremental and selectable pathways in evolutionary history, such as the origin of the mammalian middle ear.

  210. 212 gpuccio December 14, 2010 at 9:11 pm

    MathGrrl:

    I probably don’t understand completely how these Tierra replicators work. We have to describe functions, to calculate dFSCI. My impression is that here the functional unit is the instruction. And a higher level unit is the instruction sequence, insofar as it does something which requires the interaction of different instructions in a functional procedure.

    Just seeing that a replicator replicates better (whatever the mechanism is in Tierra) does not tell us how it replicates.

    So, let’s suppose that we compare the original 80 instructions replicator, and the 45 instructions replicators, the first question is: in what do they differ? For instance, a simple deletion of 35 instructions can produce a 45 instructions replicator. That would require a single, simple event. If the 45 instructions replicator can still replicate, and if the environment rewards shorter replicators, then it is obvious that the shorter replicator has acquired an advantage, without indeed having gained a new bit of functional information.

    That is only a very theoretical example, just to show that we have to know the sequence changes, and their relation to function, to calculate dFSCI.

    In proteins, the functional unit is the protein domain, which is a long and complex unit of information. Here, the functional unit is the instruction, and it is very simple (5 bits).

    Some other questions: how many 5 bit instructions are recognized by the system? I suppose there are 32 possible 5 bit instructions. Are all of them functional? Are all of them used in the original replicator?

    I will be grateful if you help me understand without having to personally check all these details.

    In theory, once we understand the differences between A (the original 80 instructions rep) and B (the 45 rep), we should try to “align” them, and see if the 45 instructions in B align to a 45 instructions sequence in A. In that case, we should consider the functional variations in bits which are responsible for the function in B which was not present in A (IOWs, if the 45 instructions, as they were in A, were not enough to ensure replication, how many bits had to change to gain the replication function). And then compute the functional target (which, in such a rather simple system, could probably be done directly, maybe by a top down algorithm).

  211. 213 MathGrrl December 15, 2010 at 1:17 am

    gpuccio,

    All 32 instructions are executable, but obviously not all combinations are viable.

    By abstracting the number of bit changes to lowercase_delta, I have allowed you to ignore the specifics of the Tierra instruction set. How do we calculate dFSCI in terms of lowercase_delta?

  212. 214 Petrushka December 15, 2010 at 1:57 pm

    Broken record time:

    Zachriel’s point is the heart of the matter. We observe amplification of variants. It makes no difference why some variations fix in the population and others don’t.

    It could be that some improve fecundity, or that they improve competitiveness, or that they confer resistance to toxins or predation.

    The issue is the sparseness of the landscape and whether there is an incremental path to current sequences.

    After a hundred and fifty years, we’re still arguing about hopeful monsters and divine intervention.

  213. 215 gpuccio December 15, 2010 at 4:24 pm

    Mathgrrl:

    Let’s suppose that a 45 instructions variant can replicate, while the homologue 45 instructions in the progenitor can’t. I will not consider for the moment the deletion event, which is essentially only a loss of information, and can be considered as a one bit mutation.

    The problem would probably require the following steps (but I could be wrong, because I should have a better understanding of the functional connections, so just consider this as a tentative first approach):

    a) We define the ability to replicate as a new function for the set of 45 instructions, because the corresponding original 45 instructions set was not able to ensure replication (if, instead, the original 45 instructions set is able to ensure replication, then only one bit variation has happened, the deletion event, which has allowed an advantage by loss of unnecessary information, in a system which rewards shorter codes).

    b) We check how many bits have varied between the two variants. The sequence of changed bits can be considered the varying sequence.

    c) We try to determine, if necessary by approximation, the target space (the number of possible sequences of those bits which do ensure replicating ability). If the sequence is not too long, that can be done with precision by testing each possible combinatorial sequence for those bits. Otherwise, we can try an approximation by a top down method (testing each single varied bit to determine if it contributes to the new function or not).

    d) If we have determined the target space with some precision, we can just take the ratio between the search space and the target space. The base 2 logarithm of that ratio will be the functional complexity of the variation.
    For instance, if 20 bits have changed, the search space is 2^20. If we have found a functional target space of 2^8 for the varied sequence (2^8 variants of those bits are still functional), then the functional information in that variation is 12 bits.
    We could get to a similar, but less precise, result just observing how many bits of the 20 which have changed are really necessary to maintain functionality in the top down approach (that is certainly less reliable, because it does not take into account higher combinatorial variations).
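
Steps (c) and (d) above can be sketched in code. This is a minimal illustration, not gpuccio's actual method: `is_functional` is a hypothetical black-box test of the replication function, and the numbers are the 20-bit worked example from the text.

```python
import math

def target_space_size(n_bits, is_functional):
    """Step (c): exhaustively test every combination of the varied bits.
    `is_functional` stands in for a test of the replication function."""
    return sum(1 for variant in range(2 ** n_bits) if is_functional(variant))

def dfsci(n_bits, target_size):
    """Step (d): log2 of the ratio between search space and target space."""
    return math.log2((2 ** n_bits) / target_size)

# Worked example from the text: 20 varied bits, 2^8 functional variants.
print(dfsci(20, 2 ** 8))  # → 12.0
```

With a short varied sequence the exhaustive count in `target_space_size` is exactly the "testing each possible combinatorial sequence" approach; for longer sequences only the top-down approximation would be feasible.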

    But again, in this case the really important point is to accurately define the function. A definition such as “ensure replication by 45 instructions” is IMO too generic. With a better understanding of how these replicators replicate, we could maybe define different functions according to the way different instructions are interpreted in the system.

    In biological models, I usually refer to single, well definable biochemical functions exactly to avoid the difficulties implicit in higher level functions, which involve the interaction of many simpler functional units. That’s why I always deal with protein domains, because they are the functional units in the proteome.

  214. 216 gpuccio December 15, 2010 at 4:38 pm

    Zachriel and Petrushka:

    I wrote:

    “It is not true that naturally selectable functions arise all the time: they are extremely rare, and usually require a strongly selective environment to be amplified. Moreover, they are always simple, and they are not steps to more complex functions.”

    Zachriel points out that selectable functions are not so rare. OK, we can say it in a way that can satisfy us all:

    a) Naturally selectable functions are the most rare kind of variation, most variations being slightly negative, or neutral, or (rarely again) strongly negative.

    b) Simple naturally selectable functions, however, do occur often enough in high population, high replication rate models, and they are often selected, especially, but not exclusively, in the presence of a strong selective pressure (like antibiotic pressure).

    c) There is no evidence at all that complex naturally selectable events occur by RV.

    d) If they don’t occur, they cannot obviously be selected.

    e) There is no evidence at all that simple selectable molecular events are steps towards complex functions.

    You are free to believe that, even if there is no evidence and no logic model of that, simple selectable molecular events may lead to complex functions. I don’t.

    You are free to believe that complex functional events may occur by RV. I don’t.

    If you have new arguments about those points, we can discuss them. I don’t believe it is useful or funny to go through the same arguments many times, after we have detailed them as well as we can.

  215. 217 MathGrrl December 15, 2010 at 5:29 pm

    gpuccio,

    Let’s suppose that a 45 instructions variant can replicate, while the homologue 45 instructions in the progenitor can’t.

    All viable Tierra programs can replicate themselves, by definition. Those that cannot are not viable and are removed.

    Some programs, like the 45 instruction one, replicate by parasitism; they require the presence of other programs in order to replicate.

    If I understand the rest of your post, you would calculate the dFSCI of the 45 instruction parasite, with respect to the function of parasitism, as the log to base 2 of (2^225 divided by the number of possible 45 instruction programs that implement parasitism), 225 being the number of bits required to represent 45 instructions. Is that correct?

  216. 218 Petrushka December 15, 2010 at 6:40 pm

    c) There is no evidence at all that complex naturally selectable events occur by RV.
    ++++++++++++++++++++++++++++++++++++

    What is in question: the R, the V, or the size of the V?

    Feel free to support your alternative with data.

  217. 219 gpuccio December 16, 2010 at 1:40 pm

    MathGrrl:

    No, we must not consider the whole code of 45 instructions, but only the bits changed in the transition from its precursor. This is dFSCI of a transition, not of the whole replicator. The concept is: if the 45 replicator is derived from an ancestor, let’s say the 80 instruction replicator, it should be possible to align the 45 rep to the 80, and find the homologue part in the 80. The bits which are different in the two homologue sequences are the transition of which we are calculating dFSCI. The search space of that transition will be 2^n (where n is the number of varied bits). For the rest you are right: dividing the search space by the target space, and taking the log to base two, that would be the dFSCI.

    That’s how I would reason for proteins. In general, the concept should be valid here too, but again the definition of function, and therefore the computation I would perform, could be different according to a better understanding of the functional basis of the system. But the computing procedure is as above.

  218. 220 gpuccio December 16, 2010 at 1:43 pm

    Petrushka:

    There is no evidence at all that naturally selectable events implying more than 150 bits of functional information occur by pure RV.

    What is not clear?

  219. 221 gpuccio December 16, 2010 at 1:46 pm

    Petrushka:

    Ah, the alternative. It’s easy. They certainly occur by design, and very often. This post should already be enough.

  220. 222 Petrushka December 16, 2010 at 5:04 pm

    There is no evidence at all that naturally selectable events implying more than 150 bits of functional information occur by pure RV.
    ++++++++++++++++++++++++++++++++

    I don’t recall you citing an example of such a one step intervention by the designer.

  221. 223 gpuccio December 17, 2010 at 8:42 am

    Petrushka:

    “I don’t recall you citing an example of such a one step intervention by the designer.”

    In a design process, there is no need to do that in one step. Multistep implementations can be guided by procedures completely different from NS (for instance, intelligent selection).

    On the contrary, the darwinian model is limited to RV and NS, so either the result happens by mere RV (in as many steps as you like, but anyway in the range of what RV can do), or it comes in a series of naturally selectable simpler steps (which allows a significant gain in probabilistic resources by amplification).

    Maybe if I repeat that for another 300 posts, you could grab it. But maybe not.

  222. 224 gpuccio December 17, 2010 at 8:44 am

    Mathgrrl:

    It seems that Dembski et al. have timely helped me in the discussion about ev, providing this freshly published paper:

    http://bio-complexity.org/ojs/index.php/main/article/view/BIO-C.2010.3/BIO-C.2010.3

  223. 225 Petrushka December 17, 2010 at 1:01 pm

    Maybe if I repeat that for other 300 posts, you could grab it. But maybe not.
    +++++++++++++++++++++++

    I take that as a no.

    You have no instances where you can point to the intervention of the Designer. The time, the place, the phenomenon.

    No particular gap.

  224. 226 MathGrrl December 17, 2010 at 3:44 pm

    gpuccio,

    We seem to be making progress!

    No, we must not consider the whole code of 45 instructions, but only the bits changed in the transition from its precursor. This is dFSCI of a transition, not of the whole replicator. The concept is: if the 45 replicator is derived from an ancestor, let’s say the 80 instruction replicator, it should be possible to align the 45 rep to the 80, and find the homologue part in the 80. The bits which are different in the two homologue sequences are the transition of which we are calculating dFSCI. The search space of that transition will be 2^n (where n is the number of varied bits). For the rest you are right: dividing the search space by the target space, and taking the log to base two, that would be the dFSCI.

    Essentially what you’re saying is that dFSCI = log2(2^lowercase_delta / 2^t_s), where 2^t_s is the number of bit strings of the same length as the string resulting from lowercase_delta transitions from the parent that have the same functionality as the derived string (this is hard to do without being able to display formulas).

    That’s kind of a strange formula; I’m not sure what it represents in the real world. Let’s spend a little time looking at it nonetheless. The 45 instruction parasite typically appears after a few thousand generations in an environment that starts only with 80 instruction replicators. The 45 instruction parasite includes code that recognizes the reproductive section of the 80 and uses it to reproduce itself. It is also important to keep in mind that every precursor of the 45 instruction parasite is a viable program in the Tierra environment. Those three data points suggest that lowercase_delta, the number of mutations required to create the 45 instruction parasite, could be as high as the smallest number of generations required for it to appear (several thousand). Even if every 45 instruction program demonstrated parasitism, that would still result in a dFSCI of 2000 or more.

    The important point here is that your calculation requires knowledge of the evolutionary path to be valid. That path can be far more complex than one might naively compute based on simply the lengths of the ancestor and ultimate progeny.

    Given this analysis and given the fact that some Tierra programs don’t evolve for tens or hundreds of thousands of generations, it doesn’t appear that generating more than 150 bits of dFSCI via simple evolutionary mechanisms is particularly difficult at all.

  225. 227 Petrushka December 18, 2010 at 1:39 am

    Mathgrrl:

    There’s an interesting question being raised here.

    You seem to think the information change should be represented by the mutation history, and GP seems to think the net information gain or loss should be represented by the “XOR” of the two strings (figuratively speaking).

    Perhaps I have that wrong, but that’s what it looks like to me.

    I admit to being a biased observer. I have always thought that it makes no sense to discuss the probability of a net change without knowing the history of intermediate stages in detail.

    Only if you know the detail can you judge whether the steps were large or small, likely or unlikely, detrimental, neutral or beneficial.

  226. 228 MathGrrl December 18, 2010 at 5:29 pm

    Petrushka,

    Well summarized. To be fair, gpuccio’s suggestion was slightly better than a simple XOR, but his proposed approach would ignore the fact that all intermediaries must be functional so the number of changes between the original 80 instruction program and the 45 instruction parasite is going to be significantly higher than 35.

  227. 229 Petrushka December 19, 2010 at 3:23 pm

    all intermediaries must be functional
    +++++++++++++++++++++++++++
    All intermediaries must have reproduced and must have been competitive with other variants.

    If a designer is going to introduce a really significant new function, how does he know its effect on the total organism? How does he know, for example, whether the cost of maintaining a flagellum might outweigh its benefits?

    If large numbers supposedly prevent incremental evolution from building complex structures, how does the designer overcome them?

    If the number of possible combinations is larger than the number of atoms in the universe, in what database does the designer store his data on functionality? Not to mention the data on the changing environment and ecosystem? How does he know when to introduce new versions?

    Seems like a daunting task for someone who is not God.

  228. 230 Toronto December 20, 2010 at 6:12 pm

    Petrushka: Seems like a daunting task for someone who is not God.

    And that’s really what the ID side is talking about when they say, “Intelligent Design”.

  229. 231 Petrushka December 20, 2010 at 8:27 pm

    There’s a discussion of Dembski’s paper here:

    http://telicthoughts.com/new-bio-complexity-paper/#comments

    It would appear that the version of ev under discussion is a variant of Weasel.

  230. 232 MathGrrl December 20, 2010 at 8:39 pm

    Petrushka,

    It would appear that none of the commenters at Telic Thoughts have actually read the ev papers. It is most certainly not a targeted search. In fact, the binding sites, the proteins, and the weight matrix all evolve.

  231. 233 Petrushka December 20, 2010 at 9:00 pm

    Dembski’s analysis finds five “sources of active information.”

    1. He asserts that the mutation engine isn’t truly random.
    2. The program uses an oracle that emulates a fitness gradient.
    3. The algorithm is iterative.
    4. The algorithm differentially propagates offspring that are closer to the target.
    5. The algorithm is sensitive to mutation rate.

    OK, so what?

  232. 234 Petrushka December 21, 2010 at 2:07 am

    It would appear that none of the commenters at Telic Thoughts have actually read the ev papers.
    +++++++++++++++++++++

    I’d appreciate a rundown of how it works. It would seem that Dembski has it wrong also.

  233. 235 gpuccio December 21, 2010 at 10:17 am

    Petrushka:

    “The time, the place, the phenomenon.”

    OOL: 2000 new protein domains, a genetic code, a whole cell structure emerging from inorganic matter.

    Ediacara and Cambrian explosion: new sets of body plans emerging almost at the same time.

    Each new protein domain emerging after OOL.

    And many others.

  234. 236 gpuccio December 21, 2010 at 10:27 am

    MathGrrl:

    I am not sure I understand your reasoning.

    Have you the code of the 45 instructions parasite?

    Has it any homology with the 80 instructions precursor?

    If yes, how many bits have changed?

    Has intelligent selection and rewarding happened in the transition from the 80 instr. to the 45 instr.?

    How many 45 instr sequences are functional (parasitic)?

    You cannot calculate anything if you do not have the answers to these questions. And we should have those answers. After all, the system is there for us to observe.

    I insist that, if intelligent selection and rewarding happen in Tierra, no calculation can be made.

    And I fully disagree with your final conclusion.

    We cannot go on discussing a system whose principles of replication, variation and selection I do not know. If you are clear about them, please detail them a little.

    How does a replicator “replicate”? Is that a true replication event? What decides what is “viable”? How does the system “reward”? (It probably does, given that the paper mentioned that shorter codes were rewarded in the first version, and not in the second version).

    I am sure that, if you analyze the system correctly you will see that “generating more than 150 bits of dFSCI via simple evolutionary mechanisms” is impossible, provided that the “simple evolutionary mechanisms” be true RV and true NS.

  235. 237 gpuccio December 21, 2010 at 10:31 am

    Petrushka:

    “Only if you know the detail can you judge whether the steps were large or small, likely or unlikely, detrimental, neutral or beneficial.”

    You are right in the sense that we have to know exactly if and when NS can intervene. Remember that dFSCI can only be applied to random variation. Therefore, each time a selection intervenes, and amplifies the result, we have to split our analysis.

    The only difference between you and me is that you believe blindly that each complex transition can be deconstructed into simpler naturally selectable steps. I believe that is not true, and that we have not a single piece of evidence, or reason, why it should be true.

  236. 238 gpuccio December 21, 2010 at 10:35 am

    MathGrrl:

    “all intermediaries must be functional ”

    The point is exactly that. We have no clear definition of function and target space in this obscure system (at least, obscure for me: I am waiting that you really make it clear for me, as you seem to understand it well).

    In theory, you can design a system where almost all sequences will be considered viable by the system. In such a system, the target space will be by definition huge, and it will be easy to find any viable sequence by RV. That does not mean that you are creating dFSCI. Please, remember that dFSCI is calculated on the ratio between target space and search space.

  237. 239 gpuccio December 21, 2010 at 10:37 am

    Petrushka:

    “Seems like a daunting task for someone who is not God.”

    I appreciate your efforts to demonstrate that the designer can only be God, but cannot agree with the reasoning. Humans do the things you describe, although certainly on a minor scale, all the time.

  238. 240 gpuccio December 21, 2010 at 10:38 am

    Toronto:

    “And that’s really what the ID side is talking about when they say, “Intelligent Design”.”

    And that’s really what the non ID side wants to be falsely believed about the ID side.

  239. 241 gpuccio December 21, 2010 at 10:40 am

    Petrushka:

    “OK, so what?”

    So, ev is a good example of intelligent design. Successful intelligent design.

  240. 242 MathGrrl December 21, 2010 at 1:19 pm

    gpuccio,

    I am not sure I understand your reasoning.

    Have you the code of the 45 instructions parasite?

    Has it any homology with the 80 instructions precursor?

    If yes, how many bits have changed?

    Good, succinct questions. My core point above is that the number of bits of difference between the two digital organisms is not a valid measure of the minimum number of changes that had to take place in order for the 45 instruction parasite to evolve from the 80 instruction ancestor (which I have been representing as lowercase_delta).

    The reason for this is that Tierra will only allow viable organisms to survive. In the context of Tierra, “viable” means “capable of replicating.” Most changes to the original 80 instruction ancestor will prevent it from replicating. Some changes will be neutral, and a small set of changes will allow it to replicate better. This is analogous to what we observe in real world organisms. It is not possible to go directly from the 80 instruction ancestor to the 45 instruction parasite with single bit changes while maintaining the ability to replicate at each step.

    What we need to determine is the minimum number of steps required within “digital organism space”, for lack of a better term. That is, what is the shortest path through the graph of viable organisms that leads from the 80 instruction ancestor to the 45 instruction parasite? We know that such a path exists because we know that Tierra only uses simple mutation to modify existing organisms when they replicate, and we see the 45 instruction parasite appear routinely.
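
The "shortest path through the graph of viable organisms" can be sketched as a breadth-first search. This toy example uses 4-bit genomes, single-bit mutations as edges, and an arbitrary viability predicate; the names and the predicate are illustrative, not Tierra's actual rules.

```python
from collections import deque

def shortest_viable_path(start, goal, is_viable, n_bits):
    """Breadth-first search over the graph whose nodes are viable genomes
    and whose edges are single-bit mutations; returns the minimum number
    of mutational steps, or None if no all-viable path exists."""
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        genome, steps = frontier.popleft()
        if genome == goal:
            return steps
        for i in range(n_bits):
            neighbor = genome ^ (1 << i)
            if neighbor not in seen and is_viable(neighbor):
                seen.add(neighbor)
                frontier.append((neighbor, steps + 1))
    return None

# Toy run: 4-bit genomes, every genome viable except 0b0110.
print(shortest_viable_path(0b0000, 0b1111, lambda g: g != 0b0110, 4))  # → 4
```

The point the search makes concrete: when some intermediate genomes are non-viable, the shortest all-viable path can be much longer than the raw bit distance between start and goal.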

    However, the 45 instruction parasite appears routinely only after several thousand generations. That provides an upper bound on the length of the shortest path of approximately 4000 steps. (As an aside, the reason so many steps are required is probably that certain neutral mutations must have occurred before the parasite code can evolve to take advantage of them. This is analogous to what is observed in Lenski’s e. coli experiment.)

    Now, even if we assume that the evolutionary mechanisms implemented by Tierra are very inefficient, the fact that we never see the 45 instruction parasite appear earlier than several thousand generations into a run suggests that there are at least 2000 changes required (and quite probably many more, since it’s never been observed that early) to get from the 80 instruction ancestor to the 45 instruction descendant that exhibits the new functionality of parasitism. That leads to the answer to one of your other questions:

    How many 45 instr sequences are functional (parasitic)?

    In fact, quite a small subset. In practice, for the calculation of dFSCI as you described it above, it doesn’t really matter. As I noted, even if every 45 instruction program were a viable parasite, that would still result in a dFSCI measurement of 1955 bits, far more than your 150 bit limit.

    The case is even stronger for the 22 instruction replicator that appears after tens of thousands of generations (and never earlier). Getting to the 22 instruction replicator requires a long walk through viable organism space and, again, even if all 22 instruction organisms are replicators (they aren’t, by a long shot) the amount of dFSCI generated by the Tierra system will be more than two orders of magnitude larger than your 150 bit limit.

    This is getting long, so I’ll address your other questions in a separate post.

  241. 243 MathGrrl December 21, 2010 at 1:36 pm

    gpuccio,

    Has intelligent selection and rewarding happened in the transition from the 80 instr. to the 45 instr.?

    The short answer is no.

    The longer answer is that this question indicates that you still seem to misunderstand the purpose and design of GAs and other evolutionary simulations. Tierra, for example, simulates the environment and a small set of evolutionary mechanisms that have been observed in the real world.

    The environment is simply an area of computer memory in which digital organisms are written, plus a multithreaded virtual CPU that executes the code of the organisms in parallel. The evolutionary mechanisms are random mutation and death for non-viable and very old organisms.

    There is no explicit fitness function nor do the people running Tierra interfere in any way with the simulator once it is running.

    “Selection” or, rather, differential reproductive success occurs as a consequence of these simple rules. New functionality such as parasitism and hyper-parasitism arise without human intervention. Improvements to the ancestor code such as loop unrolling (see the Tierra home page for details) and significantly shorter (and therefore faster) replicators also evolve, again without any human intervention.

    Tierra and other evolutionary simulations show the power of observed, real world evolutionary mechanisms and suggest directions for biological research. They also, fortuitously, allow us to test concepts such as dFSCI when such concepts are rigorously defined as we are trying to do here.

  242. 244 Zachriel December 21, 2010 at 2:26 pm

    MathGrrl: Improvements to the ancestor code such as loop unrolling (see the Tierra home page for details) and significantly shorter (and therefore faster) replicators also evolve, again without any human intervention.

    Loop unrolling shows how complexity is not a simple scalar. ID calculations are often based on sequence length, but sequences in Tierra that evolved loop unrolling implement a more complex algorithm yet have a shorter sequence.

  243. 245 Zachriel December 21, 2010 at 2:27 pm

    Loop unrolling {shows} how complexity is not a simple scalar.

  244. 246 MathGrrl December 21, 2010 at 3:37 pm

    Petrushka,

    It would appear that none of the commenters at Telic Thoughts have actually read the ev papers.
    +++++++++++++++++++++

    I’d appreciate a rundown of how it works. It would seem that Dembski has it wrong also.

    The best description is Schneider’s ev paper. In it he describes the purpose and design of the ev simulator. ev is basically a straightforward GA simulator with a few interesting configuration settings that reflect Schneider’s observations of real world biological systems as documented in his PhD thesis.

    The primary configuration parameters in ev are the length of the genome, the number of binding sites, and the size of a binding site. The original purpose of ev was to determine if the amount of Shannon information in the binding site would evolve to equal the amount of information required to identify a site in a genome of a particular length with a particular number of such sites. (I’m not doing the paper justice here, I recommend reading it for more detail.)

    The number of possible proteins encoded in the genome is equivalent to the number of distinct substrings with a length equal to the width of a binding site. If the genome is one hundred bases long and each binding site is five bases wide, for example, there would be ninety-six possible proteins (imagine a five base “window” sliding over the whole genome).
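
That sliding-window count is just genome length minus window width plus one; a one-line sketch (the function name is illustrative):

```python
def possible_proteins(genome_length, site_width):
    """Count of distinct windows of `site_width` bases in the genome."""
    return genome_length - site_width + 1

# The example from the text: a 100-base genome with 5-base binding sites.
print(possible_proteins(100, 5))  # → 96
```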

    Whether or not a protein maps to a binding site is determined by a weight matrix, although any non-linear algorithm would do for the purposes of the simulation. The values of this weight matrix are derived from a substring of the genome, so it, too, evolves during the simulation. This is very important when analyzing the claims made at Telic Thoughts — there is no fixed target in ev.

    In an ev run, the binding sites are randomly assigned to locations on the genome. Again, there is no fixed target. Interestingly, even when the binding sites overlap with each other, ev shows that the evolutionary mechanisms it implements are capable of finding a solution and that the information content of each binding site is as expected from the real world observations.

    With the binding sites assigned, ev proceeds to randomly mutate a population consisting of copies of the genome. The half of the copies with the lowest affinity for the binding sites are discarded and the other half are used to seed, again with random mutation, the next generation. It is very important to note that the whole genome is subject to mutation. This means the protein coding regions, the binding sites, and the weight matrix are all continuously evolving.

    The fitness function only determines how many proteins bind to the randomly selected sites. It has no knowledge of what the binding sites look like or how the protein coding regions could better bind to them. With the sites, coding regions, and weight matrix all in flux, it couldn’t possibly have that information.

    With all of that going on, ev demonstrates that even the simple evolutionary mechanisms it simulates are sufficient to generate information in the binding sites. Further, the amount of information corresponds to what is observed in real world genomes.
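
The selection scheme described above (mutate the whole population, discard the lower-scoring half, reseed from the survivors) can be sketched as a toy truncation-selection loop. This is not Schneider's ev code: the fitness function here is a stand-in (count of 1-bits), whereas in ev the score is binding affinity computed from a weight matrix that itself evolves, and all names are illustrative.

```python
import random

def evolve(pop, fitness, generations, mutation_rate=0.01, rng=random):
    """Truncation selection: each generation, score the population, keep the
    higher-scoring half, and refill with mutated copies of the survivors."""
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: len(pop) // 2]
        children = [
            [(b ^ 1) if rng.random() < mutation_rate else b for b in parent]
            for parent in survivors
        ]
        pop = survivors + children
    return pop

# Stand-in fitness: count of 1-bits in a 32-bit genome.
rng = random.Random(0)
pop = [[rng.randint(0, 1) for _ in range(32)] for _ in range(16)]
best = max(evolve(pop, sum, 200, rng=rng), key=sum)
print(sum(best))
```

Even this crude loop reliably climbs toward high-fitness genomes, which is the basic dynamic both Tierra and ev exploit.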

    I’ll try to find the time over the holidays to review the latest paper from Marks and Dembski, but it is very clear that evolutionary mechanisms can create information in a genome.

  245. 247 MathGrrl December 21, 2010 at 3:40 pm

    gpuccio,

    So, ev is a good example of intelligent design. Successful intelligent design.

    Far from it. Please see my response to Petrushka describing the ev simulator. ev very closely reflects what is observed in real biological systems and does so using only very simple, known evolutionary mechanisms.

  246. 248 Petrushka December 21, 2010 at 4:15 pm

    And that’s really what the non ID side wants to be falsely believed about the ID side.
    ++++++++++++++++++++++++++++++

    It’s very difficult not to draw the conclusion that the Designer is God, what with Behe stating that explicitly.

    One only has to take a look at what has become of Uncommon Descent now that Dave Scot has been gone for a while.

    There is no longer any pretense in the ID community that the designer is anything other than the Christian God.

    Except for those who believe it is the Muslim God.

    If the ID community actually believed the Designer to be a natural entity, the community would be hard at work attempting to define and narrow the designer’s attributes, methods and capabilities. There would be an effort to demonstrate that design is possible through some means other than evolution.

  247. 249 Petrushka December 21, 2010 at 4:22 pm

    Actually, GP, Dembski et al. could strengthen the case for their mathematical analysis of searches merely by demonstrating a method other than GA for solving 10,000-point travelling salesman problems.

    That’s a particularly interesting case, because no one knows the best solution. So it’s impossible to smuggle the best solution into the oracle.

    Or demonstrating a non GA approach to solving any problem involving large numbers and no known perfect solution.
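    Petrushka's point that some GAs have no target can be made concrete with a toy travelling-salesman GA: individuals are tours, they compete purely on length, and no "best" tour is known to the program (or smuggled into it). All names and parameter values below are arbitrary illustrations.

```python
import math
import random

def tour_length(tour, cities):
    """Total length of a closed tour through the given city coordinates."""
    return sum(math.dist(cities[tour[i]], cities[tour[i - 1]])
               for i in range(len(tour)))

def ga_tsp(cities, pop_size=30, generations=200, seed=0):
    """Toy GA for the travelling salesman problem: no target tour,
    just permutations competing on one dimension, distance."""
    rng = random.Random(seed)
    n = len(cities)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, cities))
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.sample(range(n), 2)   # random swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda t: tour_length(t, cities))

rng = random.Random(1)
cities = [(rng.random(), rng.random()) for _ in range(15)]
best = ga_tsp(cities)
```

    Because survivors carry over unchanged, the best tour never gets worse; the population simply converges toward shorter tours without any oracle knowing what "short enough" means.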

  248. 250 Petrushka December 21, 2010 at 4:30 pm

    Each new protein domain emerging after OOL.

    +++++++++++++++++++++++++++

    Feel free to pick a specific one, so we can investigate whether its coding sequence has any precursors, or whether the entire sequence popped into existence de novo.

  249. 251 Toronto December 21, 2010 at 6:15 pm

    gpuccio,

    By claiming life-forms are explicitly designed, any design errors are dead-ends. In order for the designer not to make mistakes, he must be able to know the future he is designing for.

    The only entities that can know the future are gods.

    Entities “like us” can’t see the future.

    Do you see now why the designer cannot be “like us”, and instead must possess the attributes of something like the Christian God?

  250. 252 Petrushka December 21, 2010 at 6:17 pm

    Unless I completely misunderstand things, there seems to be a whole class of GAs that do not search for a target, but whose individuals compete with each other.

    The traveling salesman problem is the simplest that I can think of, because individuals compete in one dimension, distance. There is no target.

    ++++++++++++++++++++++++++
    GP: The only difference between you and me is that you believe blindly that each complex transition can be deconstructed into simpler naturally selectable steps. I believe that is not true, and that we have not a single evidence or reason why it should be true.
    +++++++++++++++++++++++++++++
    Nothing except the fossil record (which you seem to think is valid evidence when applied to the Cambrian, but not to the evolution of the mammalian middle ear). And except that it is what we observe in plant and animal breeding, and in laboratory experiments with microbes.

    And except for the fact that every irreducible structure, such as the flagellum or the blood clotting system, turns out not to be irreducible. When we look, we always find functional subsets of the structure.

  251. 253 Petrushka December 21, 2010 at 9:45 pm

    A page of responses to criticisms of ev:

    http://www.ccrnp.ncifcrf.gov/~toms/paper/ev/blog-ev.html

    Dembski and Marks have been beating on it for some time. I don’t see anything new in their latest effort.

  252. 254 gpuccio December 22, 2010 at 8:02 am

    Mathgrrl:

    “What we need to determine is the minimum number of steps required within “digital organism space” for lack of a better term. That is, what is the shortest path through the graph of viable organisms that leads from the 80 instruction ancestor to the 45 instruction parasite.”

    I don’t understand why you say that. The number of steps has nothing to do with dFSCI. The number of bits which have to change in a coordinated way through RV and without selection is what we need. And accounting for the size of the target space.

    I really don’t understand why you bring in the number of steps.

  253. 255 gpuccio December 22, 2010 at 8:04 am

    Zachriel:

    “implement a more complex algorithm, but have a shorter sequence.”

    In what sense, more complex?

  254. 256 gpuccio December 22, 2010 at 8:05 am

    Mathgrrl:

    “The reason for this is that Tierra will only allow viable organisms to survive. ”

    What is the definition of “viable”?

  255. 257 gpuccio December 22, 2010 at 8:08 am

    Petrushka:

    “If the ID community actually believer the Designer to be a natural entity, the community would be hard at work attempting to define and narrow the designer’s attributes, methods and capabilities. There would be an effort to demonstrate that design is possible through some means other than evolution.”

    Again, “natural” does not mean anything, except maybe “liked by Petrushka”. A god can certainly be a natural entity, and many other things different from what you can imagine, or like to imagine, certainly can.

  256. 258 gpuccio December 22, 2010 at 8:13 am

    Mathgrrl:

    Just an example of why what you state is wrong:

    ” The half of the copies with the lowest affinity for the binding sites are discarded and the other half are used to seed, again with random mutation, the next generation.”

    In what way does that “simulate” what should happen in the biological reality? Where is NS here? This is obviously intelligent selection and an intelligently planned algorithm. This is function measurement, reward, and artificial amplification. No NS at all.

    Which is, as always, my point about EAs.

  257. 259 gpuccio December 22, 2010 at 8:15 am

    Petrushka:

    Again, a GA approach is an intelligently designed approach. So, you gave the answer yourself.

  258. 260 gpuccio December 22, 2010 at 8:18 am

    Petrushka:

    “Feel free to pick a specific one, so we can investigate whether its coding sequence has any precursors, or whether the entire sequence popped into existence de novo.”

    Any ribosomal protein. ATP synthase. Any aminoacyl tRNA synthetase. And so on.

  259. 261 gpuccio December 22, 2010 at 8:20 am

    Toronto:

    “By claiming life-forms are explicitly designed, any design errors are dead-ends. In order for the designer not to make mistakes, he must be able to know the future he is designing for.”

    Any design error can be corrected. In time.

    “The only entities that can know the future are gods.”

    It’s not necessary to “know” the future. Human make inferences and plans about the future all the time. Sometimes successful.

    “Do you see now why the designer cannot be “like us”, and instead must possess the attributes of something like the Christian God?”

    No.

  260. 262 gpuccio December 22, 2010 at 8:30 am

    Petrushka:

    “Nothing except the fossil record (which you seem to think is valid evidence when applied to the Cambrian, but not to the evolution of the mammalian middle ear). And except that it is what we observe in plant and animal breeding, and in laboratory experiments with microbes.”

    The Cambrian explosion is a problem to be solved: when you see the emergence of a great number of body plans, it is simply reasonable to infer that a lot of new information is expressing itself. It is however true that we don’t know the molecular basis of that information, and I never suggested anything like that. I just pointed to an obvious point in natural history where the intervention of the designer should be looked for, if and when we have the molecular information for a detailed analysis.

    The evolution of the mammalian middle ear, as far as we know, could be explained both by a design mechanism and by a non-design mechanism. In the absence of molecular data, we can say nothing.

    Suggesting that the Cambrian explosion can be explained by a non-design mechanism is possible, but it is certainly a much bolder statement.

    What we observe in “plant and animal breeding, and in laboratory experiments with microbes” is never macroevolution, never a complex transition.

    “And except for the fact that every irreducible structure — such as the flagellum or the blood clotting system, turns out not to be irreducible. When we look, we always find functional subsets of the structure.”

    That is simply not true. Sometimes we can find some differently functional subsets. That is different. A complex function can sometimes be effected by the cooperation of simpler functions in an intelligently planned way. That is completely different from deconstructing a complex function into simple selectable steps.

  261. 263 gpuccio December 22, 2010 at 8:32 am

    Petrushka:

    “Unless I completely misunderstand things, there seems to be a whole class of GAs that do not search for a target, but whose individuals compete with each other.”

    That does not mean that they are not intelligently designed. They are designed to solve a problem where there is not the search for a specific target. But the system is purposeful, and designed.

  262. 264 gpuccio December 22, 2010 at 8:33 am

    Petrushka:

    “Dembski and Marks have been beating on it for some time. I don’t see anything new in their latest effort.”

    If something is true, it need not be “new”. Maybe better expressed. Or published.

  263. 265 MathGrrl December 22, 2010 at 12:40 pm

    gpuccio,

    “What we need to determine is the minimum number of steps required within “digital organism space” for lack of a better term. That is, what is the shortest path through the graph of viable organisms that leads from the 80 instruction ancestor to the 45 instruction parasite.”

    I don’t understand why you say that. The number of steps has nothing to do with dFSCI. The number of bits which have to change in a coordinated way through RV and without selection is what we need. And accounting for the size of the target space.

    I really don’t understand why you bring in the number of steps.

    I’m attempting to calculate dFSCI according to your definition, part of which you provided above:

    You have to know the code of the original replicator, the code of the new functional variant, and analyze how many bits had to change to confer the new functional state.

    That’s a reasonable criterion. I hope you’re not changing your definition.

    In order to “analyze how many bits had to change” we must take into account the mechanisms by which bits change. In Tierra and other GAs, organisms that are not viable (e.g. cannot replicate) do not leave descendants. There is therefore no way for bit changes that result in a non-replicating organism to be propagated in the population. Accordingly, we must, as I described above, calculate the shortest path through “viable organism space” that leads from an organism without a particular function (parasitism, in this case) to one that has that function.

    If we don’t take into account the viable pathways between two organisms, the calculation has no applicability to what we observe in the real world. In fact, ignoring the pathways and merely calculating the bit differences (XOR, as pointed out by Petrushka) is equivalent to the infamous “tornado in a junkyard” argument where functionality appears complete and in one step.

    If dFSCI is to be a valid metric for real world biological systems, it must take real world mechanisms into account.
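    MathGrrl's "shortest path through viable organism space" is a standard shortest-path problem, and for small genomes it can be computed directly with breadth-first search. The sketch below is a toy: the bit-string encoding, the genomes, and the viability predicate are all invented for illustration, not taken from Tierra.

```python
from collections import deque

def neighbors(genome):
    """All genomes exactly one bit-flip away."""
    for i in range(len(genome)):
        yield genome[:i] + ("1" if genome[i] == "0" else "0") + genome[i + 1:]

def shortest_viable_path(start, goal, viable):
    """BFS for the fewest single-bit changes from start to goal such
    that every intermediate genome satisfies the viability predicate.
    Returns None when no all-viable path exists."""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        genome, steps = queue.popleft()
        if genome == goal:
            return steps
        for nxt in neighbors(genome):
            if nxt not in seen and viable(nxt):
                seen.add(nxt)
                queue.append((nxt, steps + 1))
    return None

# Toy viability rule: a genome is viable if it has at least two 1-bits.
steps = shortest_viable_path("1100", "0111", viable=lambda g: g.count("1") >= 2)
```

    With this permissive rule the path length equals the Hamming distance; a stricter predicate could force a longer detour, or no path at all, which is exactly the quantity the argument above says the calculation must account for.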

  264. 266 MathGrrl December 22, 2010 at 12:56 pm

    gpuccio,

    Just an example of why what you state is wrong:

    ” The half of the copies with the lowest affinity for the binding sites are discarded and the other half are used to seed, again with random mutation, the next generation.”

    In what way does that “simulate” what should happen in the biological reality?

    It is a model of differential reproductive success. It’s easy to change the rule to have a different cutoff or to be stochastic. Other GAs demonstrate that the result is the same.

    Where is NS here?

    “The most fit 50% survive to reproduce.” is a model of natural selection, just as the environment is a model of the real world environment and the mutation of organisms is a model of real world evolutionary mechanisms.

    These models have value when we compare them to the real world and apply them to real world research. ev is particularly interesting in this regard because it demonstrates that even simple evolutionary mechanisms lead to the same results that are observed in real genomes.

    This is obviously intelligent selection and an intelligently planned algorithm. This is function measurement, reward, and artificial amplification. No NS at all.

    There is no intelligent selection taking place in ev. The simulation simply models what is observed in the real world, namely differential reproductive success based on fitness in the environment.

    In Tierra there isn’t even this level of selection — organisms that can’t replicate simply die.

    Which is, as always, my point about EAs.

    You seem to have a problem with the whole concept of modeling. Are you seriously asserting that it is impossible to use computer modeling to analyze biological systems?

  265. 267 Toronto December 22, 2010 at 1:08 pm

    gpuccio: I don’t understand why you say that. The number of steps has nothing to do with dFSCI. The number of bits which have to change in a coordinated way through RV and without selection is what we need. And accounting for the size of the target space.

    I really don’t understand why you bring in the number of steps.

    Because that is a necessary component of the mechanism of evolution, which is the mechanism you are claiming doesn’t work.

    1) ..small change that doesn’t stop reproduction
    2) ..small change that doesn’t stop reproduction
    3) ..small change that doesn’t stop reproduction

    4) ..all the previous small changes taken together, result in an advantage for a current environment

    5) ..small change that doesn’t stop reproduction
    6) ..etc.
    7) ..etc.

  266. 268 Toronto December 22, 2010 at 1:21 pm

    gpuccio: Any design error can be corrected. In time.

    A designer who tries something, sees what works and then keeps it, is showing behaviour that is no different than evolution.

    If the designer is using the same process as evolution, how do you tell them apart?

    Here’s a question I hope you answer.

    Why is it not acceptable to accept the theory of evolution whether it is designer-driven or not?

  267. 269 Petrushka December 22, 2010 at 3:36 pm

    If something is true, it need not be “new”. Maybe better expressed. Or published.
    ++++++++++++++++++++++++++++

    Conversely, when your characterization of something is wrong, it doesn’t become right through repetition.

    Models and experiments are always designed. So what? Galileo modeled falling objects using an inclined plane. That’s not a perfect representation of falling in a vacuum, but it’s a useful approximation.

  268. 270 Petrushka December 22, 2010 at 3:46 pm

    GP:

    The change history is the relevant phenomenon in evolution because the history of incremental changes is precisely the thing you are challenging.

    You should be happy with this. It actually increases the number of bits changed.

  269. 271 Maya December 22, 2010 at 4:36 pm

    Petrushka writes:

    The change history is the relevant phenomenon in evolution because the history of incremental changes is precisely the thing you are challenging.

    You should be happy with this. It actually increases the number of bits changed.

    He’s unhappy with it because a) MathGrrl is forcing him to be explicit about how to measure his deliberately vague dFSCI concept and b) it’s clear from the analysis of Tierra that his 150 bit “limit” is easily exceeded by non-magical mechanisms.

    I predict that gpuccio will attempt to redefine dFSCI to keep it untestable, rapidly move the goalposts, or decide he no longer has time to participate here (although he’ll spend plenty of time on UD hiding behind Clive’s skirts).

  270. 272 Petrushka December 22, 2010 at 5:14 pm

    dFSCI is a flawed concept if it doesn’t reflect the change history.

    It would be like saying a car’s odometer should only reflect the distance from the sales showroom to its present location.

    ID is asserting you can’t get here from there in incremental steps. That is a vacuous claim if you don’t know the change history.

    Of course the title of this thread reflects the absence of the change history, or at least the absence of complete detail.

    So science can extrapolate the history using observable mechanisms, or it can invent an invisible, ineffable entity that poofed the changes.

  271. 273 Petrushka December 22, 2010 at 5:21 pm

    Perhaps GP or Marks or Dembski can produce a version of ev or Tierra that doesn’t use random variation and selection. Perhaps they can demonstrate a method of generating 22 bit parasitic replicators in one step, directly from the head of the designer.

  272. 274 Petrushka December 22, 2010 at 9:44 pm

    Here’s a question I hope you answer.

    Why is it not acceptable to accept the theory of evolution whether it is designer-driven or not?
    +++++++++++++++++++++++++++

    I’m curious to know what ID proponents make of mutation, amplification, selection and wastage, all of which occur.

    From the designer’s perspective, what is the point of deleterious or neutral mutations, harmful frame shifts, harmful duplications and such?

    We’ve been told the designer is “like us.”

  273. 275 gpuccio December 23, 2010 at 2:44 pm

    Mathgrrl:

    I am afraid you don’t understand my definition of dFSCI. My definition applies only to results or transitions where the result is not explainable in terms of necessity. The path followed has nothing to do with that. In a natural biological environment, the calculation of dFSCI applies to the transitions where NS cannot intervene.

    That’s why I stress the concept that EAs do not model NS. You say:

    “You seem to have a problem with the whole concept of modeling. Are you seriously asserting that it is impossible to use computer modeling to analyze biological systems?”

    I am asserting that you cannot model NS by a system which has the formal properties of intelligent selection. Maybe you have not followed all the discussion here, but I have made that point explicitly many times.

    It’s not that I have problems with the concept of modeling, as you all try to suggest. The point is that NS is completely different from IS.

    NS is exclusively that kind of reproductive advantage which derives from a new function in the replicator, and not from any measurement, recognition or active reward from the system. In IS you model the system, and you decide what the system will recognize as viable, or what it will reward (according to different cases). In ev, it is obvious, from what you have said yourself, that the system recognizes and rewards some specific function. Which, of itself, could never give a selective advantage, and therefore could never be naturally selected.

    In Tierra, I believe that the general architecture of the system makes viable “organisms” probable enough to be in the range of the RV. No dFSCI appears. Nothing of what you have said relates in any way to the concept of dFSCI, which you obviously misunderstand.

    You have not answered my simple questions: what defines a replicator as “viable”? Does the 45 instr. have homology to the 80 instr.? Can we calculate the target space of the 45 instr.? Instead, you try to calculate an algorithmic “number of steps” which has nothing to do with the concept of functional information.

  274. 276 gpuccio December 23, 2010 at 2:49 pm

    Mathgrrl:

    “There is no intelligent selection taking place in ev.”

    Yes, there is. A lot of it. You yourself say:

    “The half of the copies with the lowest affinity for the binding sites are discarded and the other half are used to seed, again with random mutation, the next generation.”

    Who decided that? That’s just a rule that was programmed intelligently by the programmer, with the obvious purpose of increasing the affinity for binding sites. A purpose, a strategy, an implementation, a measurement, a reward. Intelligent selection.

    That has nothing to do with “natural” selection. The affinity for binding sites in no way increases the reproductive function of the replicators. The result is obtained only because the function is recognized, measured and rewarded intelligently by the system, because the system was programmed to do that (IOWs, the system is rich in added specific information). That has nothing to do with spontaneous function, and it is not a model of NS.

  275. 277 gpuccio December 23, 2010 at 2:56 pm

    Toronto:

    I am afraid you are confused. The small changes you refer to are steps in a random walk, as long as no NS intervenes. Each step is an event, part of the probabilistic resources. The more the steps, the more probabilistic resources you are using. But the number of random attempts effected has nothing to do with the calculation of dFSCI, as Mathgrrl erroneously seems to assume.

    Let’s say you have to find a string of 5 characters by a random walk. You can find it in 30 attempts (if you are very very lucky) or in 10^6 attempts. The complexity of the string remains the same in both cases. It has nothing to do with the number of variation events which are really necessary to attain a result. It is instead a measure of the mean number of attempts you can expect to need to attain the result in a random system. It is computed from the probability of the result in a random system, not from the number of steps used in an algorithmic system (including well designed necessity mechanisms) to attain a result.
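    The distinction drawn above can be illustrated numerically: the probability-based complexity of a target string is fixed by the size of the search space, while the number of attempts a blind search happens to need varies from run to run. A minimal sketch (the target string and seeds are arbitrary; a 2-character target just keeps the simulation fast):

```python
import math
import random
import string

ALPHABET = string.ascii_lowercase
TARGET = "hi"   # arbitrary short target so the simulation runs quickly

# Complexity in bits: log2 of the search-space size, independent of any run.
complexity_bits = len(TARGET) * math.log2(len(ALPHABET))   # 2 * log2(26)

def attempts_to_find(target, seed):
    """Count uniform random guesses until the target string turns up."""
    rng = random.Random(seed)
    attempts = 0
    while True:
        attempts += 1
        guess = "".join(rng.choice(ALPHABET) for _ in range(len(target)))
        if guess == target:
            return attempts

runs = [attempts_to_find(TARGET, seed) for seed in range(5)]
print(runs)             # attempt counts differ from run to run...
print(complexity_bits)  # ...while the complexity stays the same
```

    The expected number of attempts (26^2 = 676 here, 26^5 for the 5-character case) is determined by the same search-space size, but any individual run can land far above or below it.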

  276. 278 gpuccio December 23, 2010 at 3:00 pm

    Toronto:

    “A designer who tries something, sees what works and then keeps it, is showing behaviour that is no different than evolution.”

    Really? I did not know that “evolution” sees things, judges them, and acts according to its judgments and with purpose. I must have missed something.

    “Here’s a question I hope you answer.”

    You make reasonable questions, and I will give reasonable answers.

    “Why is it not acceptable to accept the theory of evolution whether it is designer-driven or not?”

    A theory of designer driven evolution is exactly what ID is. And it works.

    Your theory is of non design driven evolution, and it does not work.

    That’s why.

  277. 279 gpuccio December 23, 2010 at 3:05 pm

    Petrushka:

    “Conversely, when your characterization of something is wrong, it doesn’t become right through repetition.”

    That’s certainly true. That’s why I don’t like repetitions. You are really forcing me to repeat things, but I think I will just stop doing that.

    “Models and experiments are always designed.”

    Some are well designed, some are not. My point is not that a model should not be designed, but simply that it should really model what it says it is modeling, and not other things. My point is that EAs are modeling IS, and not NS.

    I have suggested a model to test NS, but you don’t seem to like it. My model is designed just the same, but it is designed to model NS, not to model IS.

  278. 280 gpuccio December 23, 2010 at 3:09 pm

    Petrushka:

    “The change history is the relevant phenomenon in evolution because the history of incremental changes is precisely the thing you are challenging.”

    No. I am challenging the concept that complex naturally selectable changes happen by RV, and that simple naturally selectable changes are incremental to complex functions.

    That’s a repetition, because I have said that tens of times. Why do you always change what I am saying?

    “You should be happy with this. It actually increases the number of bits changed.”

    The only thing that increases here seems to be your mental confusion, and I am not happy about that. I like mental clarity, even in adversaries.

  279. 281 gpuccio December 23, 2010 at 3:13 pm

    Petrushka:

    “From the designer’s perspective, what is the point of deleterious or neutral mutations, harmful frame shifts, harmful duplications and such?”

    Is that even a question? Who said that the designer is the author of those things? To paraphrase a famous saying, “deleterious random events happen”.

  280. 282 gpuccio December 23, 2010 at 3:38 pm

    To all:

    I don’t believe I have to change anything in my definition of dFSCI. You just re-read it and try to understand what it means.

  281. 283 Toronto December 23, 2010 at 4:36 pm

    gpuccio: In a natural biological environment, the calculation of dFSCI applies to the transitions where NS cannot intervene.

    Finally, we have a point that we can agree on!

    Since NS intervenes on ANY change, EVERY change impacts those following!

    ..small change -> NS -> small change -> NS -> small change/ etc.

    Don’t give up now as we’re very close to making you understand why ID cannot be “how” we are here!

  282. 284 Petrushka December 23, 2010 at 5:37 pm

    No. I am challenging the concept that complex naturally selectable changes happen by RV, and that simple naturally selectable changes are incremental to complex functions.
    +++++++++++++++++++++++++++++
    You keep asserting that, but repetition doesn’t make it true.

    We can see incremental change. We understand the physical causes of mutations. We cannot see magic designer change. You have yet to produce a single example of before and after sequences involving designer intervention.

  283. 285 Toronto December 23, 2010 at 5:48 pm

    gpuccio: Really? I did not know that “evolution” sees things, judges them, and acts according to its judgments and with purpose. I must have missed something.

    This is what I meant when I said previously, that you pretend you don’t understand what I meant.

    “Evolution” does not “see and judge” the actual me, just as your designer does not “see and judge” the actual you.

  284. 286 Petrushka December 23, 2010 at 6:37 pm

    My point is that EAs are modeling IS, and not NS.
    +++++++++++++++++++++++++

    Is that anything like the difference between intelligent falling and natural falling?

    It would seem to me that the math wizards of ID could come up with a GA that models NS, so that everyone could see the difference.

  285. 287 MathGrrl December 23, 2010 at 11:05 pm

    gpuccio,

    I am afraid you don’t understand my definition of dFSCI. My definition applies only to results or transitions where the result is not explainable in terms of necessity.

    That is not what you originally claimed. Coming into this discussion, your assertion was that dFSCI is an objective metric that provides a means for determining whether or not intelligent input was required to achieve a particular result.

    Your new claim here makes dFSCI useless for identifying intelligent input because it states that dFSCI cannot by definition arise from non-intelligent mechanisms.

    The path followed has nothing to do with that.

    If you want to know if a particular state can be reached via known evolutionary processes, the path or paths from the original state to the measured state are essential data.

    In a natural biological environment, the calculation of dFSCI applies to the transitions where NS cannot intervene.

    You are assuming your conclusion here. If dFSCI is an objective metric then it is possible to measure it in known systems to determine if your claim that more than 150 bits of dFSCI indicates intelligent intervention is true. If you’re now saying that dFSCI can only be measured when intelligent intervention is known to have taken place then dFSCI is useless as a metric for identifying intelligent intervention in unknown cases.

    That’s why I stress the concept that EAs do not model NS. You say:

    “You seem to have a problem with the whole concept of modeling. Are you seriously asserting that it is impossible to use computer modeling to analyze biological systems?”

    I am asserting that you cannot model NS by a system which has the formal properties of intelligent selection.

    What, exactly, are these “formal properties”? Your previous definitions of dFSCI in the discussion between you and me in this thread never mention them. Up until now, the working assumption was that dFSCI is an objective metric that can be measured for any system. You gave no indication previously in our discussion of Tierra or ev that it is inherently impossible to measure dFSCI in either of those environments.

    It’s not that I have problems with the concept of modeling, as you all try to suggest. The point is that NS is completely different from IS.

    NS is exclusively that kind of reproductive advantage which derives from a new function in the replicator, and not from any measurement, recognition or active reward from the system.

    This is incorrect. Natural selection is simply another term for differential reproductive success. It is a result of imperfect replication, inheritance, and competition for resources. Populations with such characteristics will, over time, come to contain a higher percentage of individuals that use the resources of the environment most efficiently for reproduction.

    The idea that populations will not gain any “reward” from the environment is not consistent with real world observations. Living to reproduce is the reward.

    In IS you model the system, and you decide what the system will recognize as viable, or what it will reward (according to different cases). In ev, it is obvious, from what you have said yourself, that the system recognizes and rewards some specific function. Which, of itself, could never give a selective advantage, and therefore could never be naturally selected.

    The GAs we’re discussing model aspects of real world environments. In Tierra, organisms that cannot replicate don’t leave progeny, naturally enough. In ev, organisms that are less fit don’t reproduce. These are reasonable models of what we observe in the real world.

    In Tierra, I believe that the general architecture of the system makes viable “organisms” probable enough to be in the range of the RV.

    On what do you base this claim? What, specifically, about the design of Tierra makes this statement accurate?

    Further, what about Tierra is sufficiently different from the real world to make Tierra a poor model?

    No dFSCI appears. Nothing of what you have said relates in any way to the concept of dFSCI, which you obviously misunderstand.

    That is a grossly unfair accusation. I’ve been spending a significant amount of time discussing this with you in order to really understand what you mean by dFSCI. I have gone step by step through your explanations and tried to work with you to apply them to both Tierra and ev. At no point, until now, have you given any indication that it is inherently impossible to do so.

    I thought we were getting close. It seemed like we, together, were going to demonstrate that dFSCI is an objective, measurable quantity. Now I feel that you are changing your definitions just when the topic was getting mathematically interesting.

    You have not answered my simple questions: what defines a replicator as “viable”?

    If you don’t understand what makes a Tierra organism viable, you probably don’t understand the system well enough to make the claim you made above about the architecture of the system.

    I’ve explained at least twice that viability in Tierra means “able to replicate.”

    Does the 45 instr. have homology to the 80 instr.? Can we calculate the target space of the 45 instr.? Instead, you try to calculate an algorithmic “number of steps” which has nothing to do with the concept of functional information.

    I explained in my previous post why it is essential to calculate the length of the shortest path through “viable organism space” in order to align dFSCI with biological reality. Simply restating your original claim, without addressing the issues I raised, is not a reason to disregard those issues.

  286. 288 MathGrrl December 23, 2010 at 11:15 pm

    gpuccio,

    “There is no intelligent selection taking place in ev.”

    Yes, there is. A lot of it. You yourself say:

    “The half of the copies with the lowest affinity for the binding sites are discarded and the other half are used to seed, again with random mutation, the next generation.”

    Who decided that? That’s just a rule that was programmed intelligently by the programmer, with the obvious purpose of increasing the affinity for binding sites. A purpose, a strategy, an implementation, a measurement, a reward. Intelligent selection.

    It is a model of fitness in an environment and it reflects biological reality. As I noted previously, you can change the cutoff point and you can even make the reproduction algorithm stochastic, but you’ll get the same results.

    In the real world we see that more fit individuals leave more progeny and less fit individuals leave fewer or none. Over time this results in changes to the allele frequency in populations (evolution). This is just a model of what we observe in the real world.

    That has nothing to do with “natural” selection. The affinity for binding sites in no way increases the reproductive function of the replicators. The result is obtained only because the function is recognized, measured and rewarded intelligently by the system, because the system was programmed to do that (IOWs, the system is rich in added specific information). That has nothing to do with spontaneous function, and it is not a model of NS.

    Those are some strong claims. I would be interested in hearing some support for them. However you attempt to do so, please explain why, despite your assertions, the ev system results in exactly the same amount of information generation that is observed in numerous real world genomes.

    ev was, after all, created to test the arguments in Schneider’s PhD thesis. He saw the same results in his digital genomes as he did in his biological genomes. That suggests that the model he used in ev is a reasonable approximation to the real world.
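    The truncation-selection rule quoted above can be sketched in a few lines. This is a hypothetical toy, not ev itself: the target string and the Hamming-style “affinity” score are invented stand-ins for Schneider’s binding-site model.

```python
import random

# Hypothetical toy, not ev itself: drop the lower-scoring half of the
# population, then refill it with mutated copies of the upper half.
# TARGET and the match-count "affinity" are invented stand-ins.

ALPHABET = "acgt"
TARGET = "acgtacgt"  # made-up binding-site pattern

def affinity(genome):
    # Count of positions matching the target (proxy for binding affinity).
    return sum(1 for a, b in zip(genome, TARGET) if a == b)

def mutate(genome, rate=0.1):
    # Each position is independently replaced with a random symbol.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in genome)

def next_generation(population):
    ranked = sorted(population, key=affinity, reverse=True)
    survivors = ranked[:len(ranked) // 2]      # discard the lower half
    children = [mutate(g) for g in survivors]  # mutated copies reseed it
    return survivors + children

random.seed(0)
pop = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
       for _ in range(20)]
for _ in range(50):
    pop = next_generation(pop)
best = max(pop, key=affinity)
```

    Whether one calls the cutoff-and-reseed step “natural” or “intelligent” selection is exactly the point under dispute; the sketch only makes the mechanism itself concrete.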

  287. 289 MathGrrl December 23, 2010 at 11:25 pm

    gpuccio,

    Let’s say you have to find a string of 5 characters by a random walk. You can find it in 30 attempts (if you are very, very lucky) or in 10^6 attempts. The complexity of the string remains the same in both cases. It has nothing to do with the number of variation events which are really necessary to attain a result.

    This is an excellent example of one of the major issues in this discussion. Your statements are correct if and only if the search space is completely flat — that is, if each step is equally likely and possible. That is not the case in biological systems nor in Tierra.

    In those systems, not every intermediary is viable. To get from aaaaa to bbbbb, to use your five character string example, may require going through accca, cdedf, and xyzzy (one character change at a time) in order for each string to be viable enough to reproduce.

    This is why understanding the length of the shortest path of viability between an organism and its remote ancestor is essential to any calculations of the difficulty of that transition.
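    The viable-path point can be made concrete with a toy search. Everything here is invented for illustration (a tiny alphabet and a made-up viability set), but it shows why the shortest path through viable intermediates, not the raw complexity of the endpoints, governs the difficulty of a transition:

```python
from collections import deque

# Toy illustration (not Tierra): shortest mutational path between two
# strings when only some intermediates are "viable". VIABLE is invented.

ALPHABET = "ab"
VIABLE = {"aaa", "aab", "abb", "bab", "bbb"}  # hypothetical viable strings

def neighbors(s):
    # All strings one substitution away from s.
    for i in range(len(s)):
        for c in ALPHABET:
            if c != s[i]:
                yield s[:i] + c + s[i + 1:]

def shortest_viable_path(start, goal):
    # Breadth-first search restricted to viable intermediates.
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for n in neighbors(path[-1]):
            if n in VIABLE and n not in seen:
                seen.add(n)
                queue.append(path + [n])
    return None  # no route through viable intermediates exists

path = shortest_viable_path("aaa", "bbb")
```

    With this viability set the walk from “aaa” to “bbb” must pass through specific intermediates; remove them from VIABLE and no path exists at all, however simple the endpoints are.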

  288. 290 MathGrrl December 23, 2010 at 11:34 pm

    gpuccio,

    To all:

    I don’t believe I have to change anything in my definition of dFSCI. You just re-read it and try to understand what it means.

    I hope you don’t intend your post this way, but given the amount of time and effort I’ve invested in this discussion, I find it insulting.

    The usual expectation in science is that the person who is making claims about a new metric is responsible for clearly defining that metric and providing enough examples of how to calculate it that others can easily replicate those calculations. You haven’t done that, but I thought that you seemed intelligent, honest, and pleasant enough that I could work with you to get to that point.

    Now, just when it appeared we were going to be able to calculate dFSCI for some digital organisms and actually test some of your claims, you have changed your definition, intimated that such testing is impossible even in theory, and finished with asserting that any confusion is all the fault of those who are taking the time to engage with you.

    I had hoped for better from you. I would very much like to get back to the calculations, but that will require you to take responsibility for defining your terms and working constructively with me. Are you willing to do that? Are you even interested in testing your claims?

  289. 291 Mark Frank December 24, 2010 at 9:55 am

    I haven’t been following this thread but just took a quick glance. I hope the performance is OK.

    I was interested to see this comment from MathGrrl:

    Your new claim here makes dFSCI useless for identifying intelligent input because it states that dFSCI cannot by definition arise from non-intelligent mechanisms.

    Gpuccio – I imagine this is familiar? I don’t know how many times I have tried to persuade you that the very definition of CSI, FSCI and dFSCI entails that it is produced by intelligence. As soon as you can show a non-intelligent source then it is no longer complex.

  290. 292 gpuccio December 24, 2010 at 4:15 pm

    Toronto:

    “Since NS intervenes on ANY change”

    That’s really news.

  291. 293 gpuccio December 24, 2010 at 4:17 pm

    Petrushka:

    “We can see incremental change.”

    Where can you see naturally selectable change incremental to complex functions?

  292. 294 gpuccio December 24, 2010 at 4:18 pm

    Toronto:

    ““Evolution” does not “see and judge” the actual me, just as your designer does not “see and judge” the actual you.”

    Well, I can see and judge this phrase, but nothing kind comes to my mind…

  293. 295 gpuccio December 24, 2010 at 4:22 pm

    Petrushka:

    “It would seem to me that the math wizards of ID could come up with a GA that models NS, so that everyone could see the difference.”

    I have already suggested how to implement one, and nobody among you seems interested. Perhaps you already know what the results would be, and what that would mean.

  294. 296 Petrushka December 24, 2010 at 4:29 pm

    Where can you see naturally selectable change incremental to complex functions?
    ++++++++++++++++++

    Don’t insult everyone’s intelligence. Where can you see Pangaea separating into continents?

  295. 297 gpuccio December 24, 2010 at 5:11 pm

    Mathgrrl:

    What are you saying?

    “That is not what you originally claimed. Coming into this discussion, your assertion was that dFSCI is an objective metric that provides a means for determining whether or not intelligent input was required to achieve a particular result.”

    OK. In the cases where it is possible (see my many remarks about the many false negatives).

    “Your new claim here makes dFSCI useless for identifying intelligent input because it states that dFSCI cannot by definition arise from non-intelligent mechanisms.”

    From what do you derive this weird conclusion? That’s what I wrote (and you quote):

    “I am afraid you don’t understand my definition of dFSCI. My definition applies only to results or transitions where the result is not explainable in terms of necessity.”

    Where in the world can you see in my words your conclusions? In my definition of dFSCI, it is stated very explicitly that known necessity explanations must not be available, and that the information must be essentially non-compressible as far as we know. That’s exactly to rule out necessity mechanisms. Very complex outputs can come out of very simple necessity algorithms. In that case, the complexity we have to take into consideration is the complexity of the algorithm, not the complexity of the result. That has been discussed in detail on these threads here. I have changed nothing in my definition. Maybe you should change something in your understanding.

    “If you want to know if a particular state can be reached via known evolutionary processes, the path or paths from the original state to the measured state is essential data.”

    Detailing possible necessity algorithms, represented by specific selectable paths, is the duty of darwinists, not mine. I don’t believe they exist. The fact that darwinists cannot detail any is strong support for my belief. In the case of Tierra, where the whole system is specifically planned in a very artificial way, maybe those paths exist, and can explain some results of the system. You are the Tierra expert, not I, so please detail those paths (I have asked many times for details about the sequences, homologies, criteria of viability, and so on. I am still waiting).

    “If you’re now saying that dFSCI can only be measured when intelligent intervention is known to have taken place”

    I am not saying that. I am saying that dFSCI applies to transitions or to parts of transitions where the intermediaries are not visible to NS. That’s completely different.

    “What, exactly, are these “formal properties”.”

    It’s easy:

    NS: the variation is selected because it confers on the replicator a differential reproductive advantage by itself, without being in any way actively measured, rewarded or amplified by the system.

    IS: the variation confers some function variation which is actively measured, rewarded and amplified by the system, and which in itself has no functional power to give the replicator any differential reproductive advantage.

    “Your previous definitions of dFSCI in the discussion between you and I in this thread never mention those.”

    Because this point has nothing to do with dFSCI. This point is simply about EAs modeling IS and not NS.

    “Up until now, the working assumption was that dFSCI is an objective metric that can be measured for any system.”

    No. Look again at the definition. Necessity algorithms are excluded.

    “You gave no indication previously in our discussion of Tierra or ev that it is inherently impossible to measure dFSCI in either of those environments.”

    That’s simply because I had no idea of how those systems worked, and I could not yet point to their added information (algorithmic components). I have never denied that well-planned algorithmic systems can generate functional complexity. Indeed, I believe they cannot generate functional complexity for a completely new functional specification, but that’s another subject.

    One simple example. Let’s say you have an algorithm of, say, 200 bits of complexity, which can compute the digits of pi. That algorithm can certainly compute a great enough number of digits of pi that the result will be far more complex than 200 bits (for example, 500 bits). In this case, we apparently have a 200-bit algorithm which produces 500 bits of functional complexity (the result is functionally specified). But again, according to my definition, the only complexity we have to account for is the initial 200 bits (which is the Kolmogorov complexity of the system, whatever the complexity of the results). And anyway, the specification remains the same.
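    The pi example can be checked directly: a short, fixed-size program emits as many digits as requested, so however long the output grows, its Kolmogorov complexity is bounded by the program’s size. A minimal sketch using Machin’s formula with integer arithmetic:

```python
# A short, fixed-size program that prints arbitrarily many digits of pi
# (Machin's formula). However many digits it emits, the Kolmogorov
# complexity of the output is bounded by the size of this program.

def arctan_inv(x, scale):
    # arctan(1/x) * 10**scale, via the Gregory series in integer arithmetic.
    total = term = 10 ** scale // x
    divisor, sign = 3, -1
    while term:
        term //= x * x
        total += sign * (term // divisor)
        divisor += 2
        sign = -sign
    return total

def pi_digits(n):
    # Machin: pi = 16*arctan(1/5) - 4*arctan(1/239); the extra guard
    # digits absorb the truncation error of the integer divisions.
    scale = n + 10
    pi = 16 * arctan_inv(5, scale) - 4 * arctan_inv(239, scale)
    return str(pi)[:n + 1]  # "3" followed by the first n decimals
```

    Asking `pi_digits(1000)` yields an output far longer than the program itself, which is the asymmetry the paragraph above is pointing at.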

    “The idea that populations will not gain any “reward” from the environment is not consistent with real world observations. Living to reproduce is the reward.”

    It’s fine that different reproductive ability determines different reproduction in the environment.

    My point is that the environment must not actively measure the function or reward it. It’s a completely different concept, and I am surprised that you cannot get it. In NS, the environment is passive. In EAs, the environment has a lot of information about what to do with replicators according to pre-programmed rules.

    “In Tierra, organisms that cannot replicate don’t leave progeny, naturally enough. ”

    I have asked many times what is the criterion to define a viable organism in Tierra.

    “In ev, organisms that are less fit don’t reproduce.”

    I don’t agree. You said:

    “The half of the copies with the lowest affinity for the binding sites are discarded and the other half are used to seed, again with random mutation, the next generation.”

    In what sense does the degree of affinity to a binding site have anything to do with “fitness”? That does not make a replicator “more fit” in any way. Unless the system measures the affinity, and decides to reward and amplify the greater affinity. Which is IS.

    “These are reasonable models of what we observe in the real world.”

    No.

    “On what do you base this claim? What, specifically, about the design of Tierra makes this statement accurate?”

    I am still waiting for further details about Tierra. From you.

    “I’ve been spending a significant amount of time discussing this with you in order to really understand what you mean by dFSCI.”

    It’s nobody’s fault if you didn’t succeed. Try again. I have given further clarifications now (which I had, anyway, already given). My impression is that you have not followed the whole discussion about dFSCI here.

    “I thought we were getting close. It seemed like we, together, were going to demonstrate that dFSCI is an objective, measurable quantity. Now I feel that you are changing your definitions just when the topic was getting mathematically interesting.”

    As I have tried to explain, I have changed nothing. So, either I am confused, or you don’t understand what I say. The choice is yours.

    “If you don’t understand what makes a Tierra organism viable, you probably don’t understand the system well enough to make the claim you made above about the architecture of the system.”

    That’s true. I have said many times that I don’t understand Tierra. And I have asked you for further details, which I am waiting for.

    I will be frank. I don’t want to spend my Christmas studying Tierra. It’s you who have brought it in. I am available to discuss it, but only to the point that you give me the necessary information. If you don’t, I will provisionally keep my present ideas about Tierra, which are not encouraging at all.

    “I’ve explained at least twice that viability in Tierra means “able to replicate.””

    I want to know what sequence of instructions determines replication, and how. Just start by explaining how the 80-instruction organism replicates, for instance.

    “I explained in my previous post why it is essential to calculate the length of the shortest path through “viable organism space” in order to align dFSCI with biological reality. Simply restating your original claim, without addressing the issues I raised, is not a reason to disregard those issues.”

    You are only showing again that you have not understood dFSCI. And anyway, why can’t you answer my questions about homology and target space? Just answer. Then, I will try to show you how dFSCI is calculated.

  296. 298 gpuccio December 24, 2010 at 5:34 pm

    Mathgrrl:

    “It is a model of fitness in an environment and it reflects biological reality.”

    Not true, as already argued.

    “As I noted previously, you can change the cutoff point and you can even make the reproduction algorithm stochastic, but you’ll get the same results.”

    It remains an intelligent algorithm. Intelligent algorithms can certainly have stochastic components. And in no way does the stochastic component help to “reflect biological reality”. A true process of NS would.

    “In the real world we see that more fit individuals leave more progeny and less fit individuals leave fewer or none. Over time this results in changes to the allele frequency in populations (evolution). This is just a model of what we observe in the real world.”

    As explained, it isn’t, because it has nothing to do with real, natural fitness, but only with measured fitness.

    “Those are some strong claims.”

    Strong and true.

    “I would be interested in hearing some support for them.”

    Ask your own reason. I have no authority.

    “However you attempt to do so, please explain why, despite your assertions, the ev system results in exactly the same amount of information generation that is observed in numerous real world genomes.”

    I don’t understand this point.

    “This is an excellent example of one of the major issues in this discussion. Your statements are correct if and only if the search space is completely flat — that is, if each step is equally likely and possible. That is not the case in biological systems nor in Tierra.”

    Not in Tierra, certainly.

    “In those systems, not every intermediary is viable. To get from aaaaa to bbbbb, to use your five character string example, may require going through accca, cdedf, and xyzzy (one character change at a time) in order for each string to be viable enough to reproduce.”

    I have discussed this point in great detail with Zachriel. In biological systems, we accept that true NS acts. I agree on that.

    Now, we must separate two kinds of NS, which have completely different results.

    The first is “negative NS”. That is what you are referring to when you say that an organism must be viable in order to survive and reproduce.

    Now, negative NS is a strong principle. It acts constantly, because many mutations are negative, and some of them are negative enough to prevent survival. We see it in human genetic diseases. I have never denied the importance of negative NS.

    But the only result of negative NS is to keep the existing information inside the boundaries of its functionality.

    If you look at the “big bang model” of protein evolution, quoted by me many times here, you will see that negative NS can well explain the variations of primary sequence inside the functional island.

    But negative NS cannot in any way favor the generation of new functional information. Indeed, it is a hindrance to that.

    Positive NS is what you need to get to complex transitions through a mechanism that is not completely random. So, a transition from one sequence to a different one with a different function can only be the result either of RV alone, or of RV plus positive NS of intermediaries.

    That has always been my point, very clear and very simple. dFSCI is applied only to the RV parts. If you can show functional, naturally selectable intermediaries, we split the calculation to the two remaining RV parts.

    Moreover, please consider, as I have said to Zachriel (without getting any comment from him on that), that the most common model for really important transitions in the darwinian field is the “duplicated gene” model. Now, a duplicated gene, if no longer functional, is “freed”, so to say, from the constraints of negative NS, but at the same time it becomes the theatre of a completely random walk, at least until it generates new functional information and, by some lucky magic, becomes transcribed and translated again.

  297. 299 gpuccio December 24, 2010 at 5:37 pm

    Mathgrrl:

    “I had hoped for better from you. I would very much like to get back to the calculations, but that will require you to take responsibility for defining your terms and working constructively with me. Are you willing to do that? Are you even interested in testing your claims?”

    Yes and yes. But you have to understand correctly what I say. I don’t mean to be offensive in any way. I just think you have not understood. There is no offense in that.

    I really believe I don’t have to change anything in the definition. I have tried to explain why, in my latest posts to you. If you need further clarifications about any aspect of the definition, I am here for that.

  298. 300 gpuccio December 24, 2010 at 5:40 pm

    Mark:

    “Gpuccio – I imagine this is familiar? I don’t know how many times I have tried to persuade you that the very definition of CSI, FSCI and dFSCI entails that it is produced by intelligence. As soon as you can show a non-intelligent source then it is no longer complex.”

    Yes, it is familiarly wrong. The definition only entails that we can recognize something as produced by intelligence, in some cases. But I have given up trying to convince you of that.

    It is obvious that, if you can show a non-intelligent source, there is no way to be certain that intelligence was involved. I cannot see why you are so fascinated by such a simple truth.

  299. 301 gpuccio December 24, 2010 at 5:41 pm

    Petrushka:

    “Don’t insult everyone’s intelligence. Where can you see Pangaea separating into continents?”

    Well, it’s Christmas, not time for insults. So I will pass on that.

  300. 302 gpuccio December 24, 2010 at 5:43 pm

    Mark, Zachriel, Toronto, Petrushka, Mathgrrl, Maya:

    By the way, please accept my very sincere wishes of a very merry Christmas to all of you.

  301. 303 Toronto December 25, 2010 at 7:25 pm

    gpuccio,

    Merry Christmas !

  302. 305 Toronto December 26, 2010 at 1:55 pm

    gpuccio,

    Toronto: This is what I meant when I said previously, that you pretend you don’t understand what I meant.

    ……..

    Toronto: ““Evolution” does not “see and judge” the actual me, just as your designer does not “see and judge” the actual you.”

    gpuccio: Well, I can see and judge this phrase, but nothing kind comes to my mind…

    Note the key word here, “pretend”.

    You have to have the courage to answer our questions in the spirit and context that they were asked. They may be leading questions, but they lead to a point, a conclusion you are under no obligation to accept, just simply to note, understand and then respond to.

    Whether you like it or not, you are representing your side of the ID/Evolution debate every time you engage with any of us.

    When you try to evade a question, I see the ID side evading a question from the Evolution side. There should be no reason for this as your side should be prepared to replace Evolution as a scientific field of study since you believe it to be so incredibly improbable and unscientific.

    If it is your belief that ID is not ready to be taken seriously, then continue ducking our questions.

  303. 306 MathGrrl December 26, 2010 at 3:39 pm

    gpuccio,

    Happy Boxing Day!

    Before going back to the details of Tierra and ev, I need to understand your new definition of dFSCI:

    “Your new claim here makes dFSCI useless for identifying intelligent input because it states that dFSCI cannot by definition arise from non-intelligent mechanisms.”

    From what do you derive this weird conclusion? That’s what I wrote (and you quote):

    “I am afraid you don’t understand my definition of dFSCI. My definition applies only to results or transitions where the result is not explainable in terms of necessity.”

    This is the first time you have introduced the word “necessity” in the context of these two GAs. You seem to be using it as a term of art. What is your definition of “necessity”, as precisely as possible?

    If by “necessity” you mean “solely based on the rules of the environment, without additional input by an external intelligent actor”, then you are essentially stating that it is impossible even in principle to calculate dFSCI for any GA. All such simulations that we’ve been discussing evolve various behaviors or generate information based on the rules of the environment, with no additional input once the original settings are configured.

    Is it or is it not possible to calculate dFSCI for Tierra, ev, or other evolutionary simulations, according to your definition?

  304. 307 gpuccio December 26, 2010 at 10:27 pm

    Mathgrrl:

    “Is it or is it not possible to calculate dFSCI for Tierra, ev, or other evolutionary simulations, according to your definition?”

    First of all, again there is no new definition of dFSCI. You can find my definition in some detail here:

    http://www.uncommondescent.com/intelligent-design/why-secular-and-theistic-darwinists-fear-id/#comment-363528

    including a detailed discussion about compressibility. Please, refer to the concepts there for our following discussion.

    dFSCI is observed (or not observed) in an output. So, for Tierra, you can ask if some output generated by the system exhibits dFSCI or not. If it exhibits it, the conclusion, according to ID, would be that an intelligent being was involved in generating it. If that is not true, as it is not true in Tierra (if we take the system for granted), then the ID procedure would be wrong, because that would be a false positive.

    But that is not the case. Always starting from my incomplete data about Tierra, we observe that:

    a) Let’s take the 45 instr. as an output. Does it exhibit dFSCI?

    b) First of all, it was not generated de novo. It derived in some way from the 80 instr. precursor. So, we are considering a transition, and not a de novo generation.

    c) At this point, we have to restrict our analysis to the part which has changed in the output. That’s why I have repeatedly asked if there were homologies between the precursor and the final output. You have not answered, so at present I cannot identify the real transition about which we should evaluate the functional complexity.

    d) Once the true transition is identified, we have to evaluate the part of it which is really necessary to the function. That requires a good definition of the acquired function (for instance, in terms of necessary instructions), and a functional test of the modified bits. Moreover, it requires an evaluation of the target space (what is the probability of finding an output with that function by RV only).

    e) But the greatest problem, as I have tried to elucidate, remains the necessity part. IOWs, Tierra is an algorithm, and it works by necessity rules, defined by its programmer. By “necessity” I obviously mean all those parts of the system which work according to algorithmic rules, and not according to a probabilistic variation. If the set of rules which define the system is such that it can calculate a certain output algorithmically, the algorithmic parts cannot be included in the calculation of the dFSCI of a transition. As I have tried to explain, dFSCI calculation must be split so that it can be applied exclusively to a transition (or de novo generation) for which we can assume a purely probabilistic cause. That is very clear in my definition. And that is exactly how I apply it to proteins.

    IOWs, if and when we can reasonably assume that NS has expanded some output generated by RV, I apply the dFSCI calculation only to the transition to the first output (before expansion), and then to the transition for the second (expanded) output to the final result.

    So, if protein A generates protein B through an intermediate A1, which can be demonstrated to be naturally selectable and to expand in a natural environment, then I will calculate the dFSCI of the transition from A to A1, and then the dFSCI of the transition from A1 to B, but never the dFSCI of the whole transition from A to B, because the algorithmic component of the necessary expansion of A1 would make that calculation incorrect.
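    The split described above is simple arithmetic in log space. A toy calculation with hypothetical probabilities (the numbers are invented purely for illustration, not measurements of any real protein):

```python
import math

# Purely illustrative arithmetic for splitting a transition at a
# selectable intermediate; the probabilities are invented numbers.

def bits(p):
    # Functional information of a stage reached with probability p by RV.
    return -math.log2(p)

p_A_to_A1 = 2 ** -30  # hypothetical: A -> A1 by random variation alone
p_A1_to_B = 2 ** -40  # hypothetical: A1 -> B by random variation alone

# Treating A -> B as one random jump multiplies the probabilities:
whole_jump = bits(p_A_to_A1 * p_A1_to_B)     # 70 bits

# If A1 is naturally selectable and expands, each RV stage is assessed
# separately, and the hardest single stage is what matters:
stages = [bits(p_A_to_A1), bits(p_A1_to_B)]  # 30 and 40 bits
hardest_stage = max(stages)                  # 40 bits
```

    In this toy, a documented selectable intermediate reduces the figure to be explained from 70 bits to a worst single stage of 40 bits, which is why the two sides argue so much about whether such intermediates are documented or merely assumed.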

    The important point is that any algorithmic component must be truly documented, and not only “assumed as possible”, as Petrushka does. So, again, the problem with the darwinian model is that the algorithmic component (positive NS) is never documented, least of all correctly modeled, for complex transitions. Because there is no evidence that it exists.

    For Tierra, I have not gone into detail about the algorithmic parts of the system, because I still don’t understand the system.

    For ev, I have showed a clear algorithmic component in your words describing it.

    I hope I have been clear.

  305. 308 gpuccio December 26, 2010 at 10:32 pm

    Toronto:

    I still don’t understand which question I “ducked”.

    You wrote:

    “Evolution” does not “see and judge” the actual me, just as your designer does not “see and judge” the actual you.”

    I had simply refrained from commenting that such a statement is senseless, but I will do it now. It is.

    Evolution, in the darwinian sense, does not see and judge anything, by definition: it is a non-conscious, non-intelligent process, while sight and judgement are conscious processes.

    A designer (which is not “mine”, as far as I know) can see and judge, because he is by definition a conscious intelligent being. Whether he sees and judges me or you, actual or not, can be an object of debate, but it is certainly possible in principle.

    Am I “ducking” something else?

  306. 309 Toronto December 27, 2010 at 3:15 am

    gpuccio: Am I “ducking” something else?

    Yes, the subject we are ACTUALLY debating, and it’s not evolution, it’s the designer.

  307. 310 gpuccio December 27, 2010 at 12:00 pm

    Toronto:

    What am I “ducking”? I believe that one or more conscious intelligent beings, probably not physical, had and probably have a role in inputting intelligent information into living beings. Through some form of access to biological events, probably at the quantum level, essentially similar to how our own consciousness can interface itself with our brain. The implementation of design was realized through specific strategies, almost certainly not too gradual, which at present can only be hypothesized, and are certainly open to verification from observed facts. Among those strategies, the most obvious are guided mutations, targeted random variation and intelligent selection.

    I can’t see what I am ducking. This is my present model, and I have said these things many times. This is not the only possible model, but it is my present one.

    Other models are perfectly compatible with ID: intelligent aliens or other conscious intelligent physical beings, various forms of pre-loading of information, specific creation ex nihilo by an omnipotent god, and probably many others. I don’t believe they are convincing, but that’s just my personal opinion.

    Any other “duckings”?

  308. 311 Toronto December 27, 2010 at 1:28 pm

    gpuccio: Other models are perfectly compatible with ID: intelligent aliens or other conscious intelligent physical beings, …..

    No other model other than a being equal in power to the Christian god is compatible with ID.

    1) Only a being that has knowledge of the future can be your designer.

    2) Aliens have to come from somewhere. If they arose without a designer, that’s proof that so could we.

    3) If it is impossible for the alien designers to have come into being without a designer, who designed them?

    4) If aliens designed life, then the bible is wrong when it says God did.

    5) If the design of life was composed of sequences of trial and error, it would look exactly like evolution.

  309. 312 Toronto December 27, 2010 at 1:31 pm

    gpuccio: Through some form of access to biological events, probably at quantum level, essentially similar to how our own consiousness can interface itself with our brain.

    The brain is the source of our “consciousness”. It is not something that exists outside of our brain.

    As an analogy, hunger does not interface with our stomachs.

  310. 313 Zachriel December 27, 2010 at 3:56 pm

    gpuccio: First of all, again there is no new definition of dFSCI. You can find my definition in some detail here: http://www.uncommondescent.com/intelligent-design/why-secular-and-theistic-darwinists-fear-id/#comment-363528

    d is for digital.

    Z is for Zachriel.

    FS is for functionally specified. My definition for that is that a conscious intelligent observer must recognize a function and define it explicitly, giving also a quantitative method to assess its presence or absence.

    That means that dFSCI is not a pure metric, but depends on choice of function. Also, it presumably depends on degree of function, or at least detectable function.

    The necessity of being clear and explicit in defining the function is one of the reasons why I never try to apply, at least at the level that ID is at present, the concept of dFSCI to whole biological systems, and I apply it only to single proteins, or better still to single protein domains.

    That’s a reasonable limitation, at least until we get our bearings.

    C is for complex, and I for information. The concept of information in Shannon’s theory is, as all know well, more a measure of complexity.

    Shannon entropy is the uncertainty in a random variable.

    Indeed, as all know, Shannon’s theory is not about information, but about data transmission. I mean that we can measure the complexity of a string, but that measurement has nothing to do with the “meaning” of that string.

    As Shannon founded modern Information Theory, the first sentence is ambiguous, and should probably be avoided.

    Indeed, random strings usually have the highest complexity, because they are not compressible and they have the highest uncertainty.

    Random strings have high Shannon entropy because each new bit is uncertain. They have high K-complexity because they are incompressible.
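
    The distinction can be made concrete with a short sketch (illustrative only; the string lengths and the use of zlib as a rough stand-in for K-complexity are my assumptions). Empirical Shannon entropy measures per-symbol uncertainty, while compressed length approximates incompressibility; a periodic string can score the same per-symbol entropy as a random one while remaining far more compressible.

```python
import math
import random
import zlib
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Empirical Shannon entropy of a string, in bits per symbol."""
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in Counter(s).values())

random.seed(1)
rand_str = "".join(random.choices("01", k=4096))  # pseudo-random bits
rep_str = "01" * 2048                             # periodic, same length

# Both have 1 bit/symbol of Shannon entropy (0s and 1s equally frequent)...
print(shannon_entropy(rep_str))   # exactly 1.0
print(shannon_entropy(rand_str))  # very close to 1.0

# ...but only the periodic string is highly compressible (low K-complexity):
print(len(zlib.compress(rep_str.encode())))   # a few dozen bytes
print(len(zlib.compress(rand_str.encode())))  # several hundred bytes
```

    So the two measures agree on random strings (both high) but diverge on regular ones, which is why each captures a different sense of "complexity".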

    The important point is that protein sequences are mostly pseudo random sequences. This too is well known: they are scarcely compressible, and they cannot be generated by any simpler algorithm.

    Proteins are compressible based on an evolutionary tree strategy.

    Hategan & Tabus, Protein is Compressible, Nordic Signal Processing Symposium 2004.

    Cao et al., A simple statistical algorithm for biological sequence compression, Data Compression Conference 2007.

    There are an infinite number of compression schemes, and we can never know or test most of them. We could use the compressed length, but for now, let’s assume that proteins are scarcely compressible.

    You ask why non-compressibility is important for dFSCI. The answer is that it is a good way to ensure that the string is not the result of some necessity mechanism.

    Not particularly. There are an infinite number of sequences that appear complex, but are the result of necessity.

    IOWs, we must completely rule out (at least empirically: they remain “logically” possible) all false positives. As anybody probably knows, that means usually that we accept a high rate of false negatives.

    A one-way filter. Many or even most designed objects may be excluded, but anything that makes it through must be designed.

    As you can see, the most difficult part in that is to know, even approximately, how big the target space is. As far as I know, Durston’s indirect method is at present the best empirical answer to that.

    So, it’s not so easy to determine the target space, which depends on the choice and degree of function.

    ———-
    So, let’s look at your algorithm:

    1. The sequence must have a recognizable function.
    2. The sequence must resist compression.
    3. We take the (negative natural logarithm) of the ratio of sequences that exhibit the function to the number of possible sequences.

    Did we miss a step?
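
    For concreteness, the three steps above might be sketched in Python as follows. This is only a rough illustration, not anyone’s official procedure: the zlib ratio merely stands in for incompressibility (true Kolmogorov complexity is uncomputable), and the target-space figure is purely hypothetical.

```python
import math
import zlib

def resists_compression(sequence: str, threshold: float = 0.9) -> bool:
    """Step 2 (heuristic): treat the string as scarcely compressible if
    zlib barely shrinks it. Any such test can only exclude *known*
    regularities, never all possible ones."""
    raw = sequence.encode()
    return len(zlib.compress(raw)) / len(raw) >= threshold

def dfsci_bits(target_space: float, search_space: float) -> float:
    """Step 3: negative log2 of the ratio of functional sequences
    (target space) to all possible sequences (search space)."""
    return -math.log2(target_space / search_space)

# Step 2 on a trivially compressible string:
print(resists_compression("A" * 1000))  # False

# Step 3 for a 100-residue protein (search space 20**100), with a
# purely hypothetical target space of 1e40 functional sequences:
print(round(dfsci_bits(1e40, 20.0 ** 100)))  # 299 bits
```

    Step 1 (recognizing and explicitly defining a function) is left to the conscious observer, which is why the result depends on the choice of function.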

  311. 314 Petrushka December 27, 2010 at 6:12 pm

    1. The sequence must have a recognizable function.
    2. The sequence must resist compression.
    3. We take the (negative natural logarithm) of the ratio of sequences that exhibit the function to the number of possible sequences.
    ++++++++++++++++++++++++++

    None of which leads to any understanding of how design might be implemented.
    The functionality considered here is one dimensional, but actual functionality is multi-dimensional.

  312. 315 Zachriel December 27, 2010 at 8:09 pm

    Petrushka: None of which leads to any understanding of how design might be implemented.

    Or detected.

    The final step is the assertion that the only time we can determine the cause of dFSCI is when it has an intelligent cause, therefore biological dFSCI is also due to an intelligent cause. This is an unwarranted extrapolation, of course, and requires ignoring what we know about evolution. Anyway, it might be handy just to have an unambiguous metric of dFSCI.

  313. 316 gpuccio December 27, 2010 at 8:32 pm

    Toronto:

    I have given my model, why do you just comment on the others?

    Anyway:

    1) Wrong: I have specified many times to you that the designer can correct and complete his design in time. How many times must I repeat that?

    2) I agree that aliens are not a definitive answer, but still if they were responsible for the origin of life on our planet, all the scenarios are shifted. And they become different.

    3) Same as 2)

    4) Are you a blind defender of the Bible?

    5) Wrong, as discussed many times. The intervention of a conscious intelligent agent makes all the difference, for instance making the generation of dFSCI possible.

  314. 317 gpuccio December 27, 2010 at 8:33 pm

    Toronto:

    “The brain is the ..source.. of our “consciousness”. It is not something that exists outside of our brain.”

    That’s just your personal belief. I don’t agree. We have discussed that many times.

  315. 318 gpuccio December 27, 2010 at 8:43 pm

    Zachriel:

    “Z is for Zachriel”

    Is that supposed to be funny?

    “That means that dFSCI is not a pure metric, but depends on choice of function. Also, it presumably depends on degree of function, or at least detectable function.”

    Yes. And so?

    “As Shannon founded modern Information Theory, the first sentence is ambiguous, and should probably be avoided.”

    I believe it is commonly recognized that the term “information theory” is extremely inappropriate for Shannon’s theory.

    “Random strings have high Shannon entropy because each new bit is uncertain. They have high K-complexity because they are incompressible.”

    Yes.

    “We could use the compressed length, but for now, let’s assume that proteins are scarcely compressible.”

    Yes.

    “Not particularly. There are an infinite number of sequences that appear complex, but are the result of necessity.”

    Then, if the algorithm which can generate them is simpler than the sequence itself, the sequence K complexity is the complexity of the algorithm. IOWs, they are compressible.

    “A one-way filter. Many or even most designed objects may be excluded, but anything that makes it through must be designed.”

    Yes.

    “So, it’s not so easy to determine the target space, which depends on the choice and degree of function.”

    Yes, and yes.

    “Did we miss a step?”

    I suppose the essential is there. But I think it is the negative base two logarithm.

  316. 319 gpuccio December 27, 2010 at 8:44 pm

    Petrushka:

    “None of which leads to any understanding of how design might be implemented.”

    Design detection is one thing. Inferences about implementation are another.

  317. 320 gpuccio December 27, 2010 at 8:47 pm

    Zachriel:

    “Or detected.”

    Why? It says how it can be detected. You may obviously disagree.

    “The final step is the assertion that the only time we can determine the cause of dFSCI is when it has an intelligent cause,”

    Which is a fact.

    “therefore biological dFSCI is also due to an intelligent cause.”

    which is a reasonable inference.

    “This is an unwarranted extrapolation, of course, and requires ignoring what we know about evolution.”

    Luckily, there is really little to be ignored.

    “Anyway, it might be handy just to have an unambiguous metric of dFSCI.”

    Yes.

  318. 321 Toronto December 27, 2010 at 9:15 pm

    gpuccio: 1) Wrong: I have specified many times to you that the designer can correct and complete his design in time. How many times must I repeat that?

    You are making an assertion that is not warranted by any sort of evidence.

    I’m willing to accept your statement about the designer if you give me evidence about the designer’s work methods.

    It is quite possible he never makes mistakes, and it is also possible that 99% of his designs are failures.

    Show me what knowledge you have of the designer that would let you say that he makes mistakes, can fix them and always makes his deadline.

    Please, you have said it and you need to back it up.

    This is a scientific debate and science needs to be justified with detailed explanations, not simply repeated assertions.

    I don’t want you to just assert something, I want you to back up your statements.

    I’m getting frustrated because what I hear is, “I have said it before and I’ll say it again, this is a fact.”.

    That’s not good enough.

    Show me that you know this designer well enough that you know how he works.

  319. 322 Zachriel December 27, 2010 at 10:09 pm

    gpuccio: I believe it is commonly recognized that the term “information theory” is extremely inappropriate for Shannon’s theory.

    Wikipedia: Information theory was developed by Claude E. Shannon …

    Bell Labs: Claude Shannon’s 1948 paper `A Mathematical Theory of Communication,’ founded Information Theory.

    Shannon “information theory” site:.edu

    This is rather off-topic, but “commonly recognized” means, well, commonly recognized. Shannon’s Theory underlies all computer technology, including the Internet. Google’s algorithm may not understand the meaning of much of anything, but it can manipulate information quite well. There are several senses of the word “information”, but Shannon’s is certainly one of them.

    gpuccio: The answer is that it is a good way to ensure that the string is not the result of some necessity mechanism.

    Zachriel: There are an infinite number of sequences that appear complex, but are the result of necessity.

    gpuccio: Then, if the algorithm which can generate them is simpler than the sequence itself, the sequence K complexity is the complexity of the algorithm. IOWs, they are compressible.

    There are infinite numbers of such algorithms that we may not know or recognize. That leaves us with complex-looking sequences that are the result of necessity. You had used the term “ensure”. This is incorrect. The best we can hope for is to eliminate a subset of patterns from further analysis. Indeed, the more ignorant we are, the more likely it is that we will not recognize a pattern.

    gpuccio: But I think it is the negative base two logarithm.

    Your original said the natural logarithm, but log2 would make more sense, and is more consistent with other such descriptions.

  320. 323 Zachriel December 27, 2010 at 10:22 pm

    Zachriel:
    1. The sequence must have a recognizable function.
    2. The sequence must resist compression.
    3. We take the (negative log2) of the ratio of sequences that exhibit the function to the number of possible sequences.

    Did we miss a step?

    gpuccio: I suppose the essential is there.

    Good. By shortening the description, it is much easier to comprehend the entire procedure. (We can expand individual terms as necessary.)

    gpuccio: It says how it can be detected.

    No. All you’ve done so far is propose a measure of something called dFSCI.

    Zachriel: The final step is the assertion that the only time we can determine the cause of dFSCI is when it has an intelligent cause, …

    gpuccio: Which is a fact.

    It’s an assertion. Even granting that dFSCI is an unambiguous measure, most biologists think that such functional complexity can arise through evolution.

    Zachriel: therefore biological dFSCI is also due to an intelligent cause.

    gpuccio: which is a reasonable inference.

    You’re confused on the scientific method. You may have enough to lead you to personally suspect design, but such a claim has to entail specific empirical predictions. Otherwise, all you have is analogy.

  321. 324 MathGrrl December 27, 2010 at 11:54 pm

    gpuccio,

    “Is it or is it not possible to calculate dFSCI for Tierra, ev, or other evolutionary simulations, according to your definition?”

    First of all, again there is no new definition of dFSCI. You can find my definition in some detail here:

    http://www.uncommondescent.com/intelligent-design/why-secular-and-theistic-darwinists-fear-id/#comment-363528

    including a detailed discussion about compressibility. Please, refer to the concepts there for our following discussion.

    As I noted above, in the context of our discussion of Tierra and ev here, your recent post was the first to make any mention of “necessity”, and this is your first reference in the subthread between you and me here to an external definition. I thought we were getting close to an actual calculation before you brought in these additional topics.

    Be that as it may, I’ve looked at your link and remember the discussion. There you mention “necessity” only in the context of compressibility. I think we need to get a better understanding of your definition before proceeding to a calculation. In particular, you said above:

    But the greatest problem, as I have tried to elucidate, remains the necessity part. IOWs, Tierra is an algorithm, and it works by necessity rules, defined by its programmer. By “necessity” I obviously mean all those parts of the system which work according to algorithmic rules, and not according to a probabilistic variation. If the set of rules which define the system is such that it can calculate a certain output algorithmically, the algorithmic parts cannot be included in the calculation of the dFSCI of a transition. As I have tried to explain, dFSCI calculation must be split so that it can be applied exclusively to a transition (or de novo generation) for which we can assume a purely probabilistic cause.

    I’m not at all sure what this is supposed to reflect in the real world. I also cannot see how to apply it to Tierra or any other evolutionary simulation. New organisms in Tierra arise from a mutation mechanism that has no knowledge of the environment and so is completely blind to fitness. Just as we observe in the real world, some mutations make the offspring non-viable, some are neutral, and some allow the offspring to replicate slightly better than the parent. Those organisms that are better at replicating tend to leave more progeny than those that do not, again just as we observe in the real world.

    Since Tierra does not have an explicit fitness function, all the functionality we see arise (parasitism, hyper-parasitism, loop unrolling, significantly improved replicators, etc.) comes from the changes due to random mutations. That seems to meet your definition of dFSCI, but you claim it does not.

    How, exactly, can we calculate dFSCI for this type of system?

  322. 325 MathGrrl December 27, 2010 at 11:55 pm

    gpuccio,

    I have a specific question about one particular statement you made above:

    As I have tried to explain, dFSCI calculation must be split so that it can be applied exclusively to a transition (or de novo generation) for which we can assume a purely probabilistic cause.

    Are you claiming that only de novo changes of more than 150 bits in a single step constitute dFSCI, or am I wrongly inferring the single step constraint?

  323. 326 gpuccio December 28, 2010 at 1:56 pm

    Toronto:

    “Show me what knowledge you have of the designer that would let you say that he makes mistakes, can fix them and always makes his deadline.”

    I have just answered your statement that the designer can only be the Christian God. That is not true, and I have shown alternative scenarios. It’s your statement which is not substantiated.

    However, I will try just the same to point to some facts which could support my scenario.

    First of all, as my main point is that the appearance of each new basic protein domain is a reliable sign of a design intervention, that implies that the designer, whoever he is, has been active throughout the natural history of living beings. Indeed, even if about half of the basic domains appear early at OOL or shortly after, the other half emerges in a scattered pattern, throughout natural history, up to mammals. That is convincing evidence that the designer has been acting in time, and could still be acting.

    Second, I believe that the strict sequence of two independent explosions of multicellular body plans, very near in time (the Ediacara and the Cambrian), is best explained by considering that the first explosion for some unknown reason “failed” a “short” time after its emergence, and was substituted by a second attempt. That would be evidence that the designer can fail, and that he can try again.

  324. 327 gpuccio December 28, 2010 at 2:25 pm

    Zachriel:

    “There are several senses of the word “information”, but Shannon’s is certainly one of them.”

    I think we essentially agree. I just meant that Shannon’s theory has nothing to do with meaning and semantics, which is the meaning commonly associated with the term “information”. I believe that is widely recognized. And Shannon himself calls it “a theory of communication”.

    “You had used the term “ensure”. This is incorrect.”

    That’s true. But as usual, I meant that empirically, not logically. The best way to exclude, empirically, that a necessity mechanism is responsible for the output is to be sure that no such mechanism is known. In empirical science, we cannot deal with what “could be”. We have to reason with what we know.

    Moreover, the simple fact that no protein engineer has been able to find a “simple” mechanism to find functional protein sequences through some algorithm is good evidence, IMO, that such an algorithm does not exist or, if it exists, it is extremely complex, much more complex than its possible outputs. Which is enough for my argument.

    “Your original said the natural logarithm, but log2 would make more sense, and is more consistent with other such descriptions.”

    Probably just an error of mine. I have always reasoned in terms of base 2.

  325. 328 gpuccio December 28, 2010 at 2:37 pm

    Zachriel:

    “It’s an assertion. Even granting that dFSCI is an unambiguous measure, most biologists think that such functional complexity can arise through evolution.”

    Well, let’s put it this way. The origin of biological information is controversial. You can believe differently, but it is. We in ID don’t believe that biologists have really explained its origin.

    ID uses dFSCI to detect design. The property, when present, and excluding the controversial set of biological information which is indeed the object of our discussion, is always associated with the intervention of conscious intelligent beings, without exception. That’s why we infer that it is the best positive marker of design.

    “You’re confused on the scientific method. You may have enough to lead you to personally suspect design, but such a claim has to entail specific empirical predictions. Otherwise, all you have is analogy.”

    ID makes at least as many predictions as the neo darwinian theory. Indeed, both are falsifiable, and one of them will be falsified.

    As we gather more data about proteomes, genomes, and natural history, it will become obvious which of the two is unsupported by facts. For instance, if we could demonstrate that the emergence of new basic protein domains happened in a very short time, and without any trace of selectable intermediaries, the neo darwinian model would be doomed, and design would be by far the best explanation. On the contrary, if a reasonable, detailed algorithm, complete with intermediate selectable steps, and evidence of those intermediaries in natural history, could be shown through gathered facts, then the design theory would be falsified: biological information could no more be considered as exhibiting dFSCI, and it would be satisfactorily explained by the neo darwinian theory.

    In the same way, it is always possible that some new theory comes out which can explain the data better. Scientific theories are never final, not even the ID theory.

    But at any moment we need some “best explanation”, to go on with our understanding of reality.

    So, ID is an inference by analogy, which explains very well observed facts, and which has specific implications which can, and will, be verified or falsified by facts.

  326. 329 gpuccio December 28, 2010 at 3:03 pm

    Mathgrrl:

    “Are you claiming that only de novo changes of more than 150 bits in a single step constitute dFSCI, or am I wrongly inferring the single step constraint?”

    I cannot keep track of what I have already discussed with any single interlocutor. My definition of dFSCI had been linked in the beginning by Mark, so I was confident that all interlocutors knew it.

    Regarding the “single step” problem, I have been clear about that (but I don’t remember where and with whom). I repeat it here.

    The important point is not the number of steps. The important point is that the transition of more than 150 functional bits has to happen without any help by necessity mechanisms which favour a specific result.

    Let’s be more clear.

    If protein A effects a transition to (unrelated) protein B through a functional mutation of 50 AAs (which is well more than 150 bits), to affirm dFSCI we have to reasonably assume that no intermediary step was positively selected and amplified. If that is true, IOWs if there is no evidence at all of naturally selectable intermediaries in the transition, then it is of no relevance how many steps were necessary to reach B. Indeed, the probability of getting to B by chance is so low, that we can reasonably exclude that it happened by chance.

    Even in a purely random system, the transition could theoretically happen in a single step (for instance by a frameshift mutation), or in, say, at least 50 steps (if each mutation is successful in finding the right target), or in, say, 10^9 mutations. It does not really matter. Even if it happened in 10^9 attempts, it would still violate all laws of probability, and we should still infer design, or some unknown necessity mechanism (with all the logical and empirical stretches that such a hypothesis implies).

    That’s why I say that the number of steps is not important. The probability of getting from A to B through random events, any kind and number of random events realistically conceivable in our system, is just too low to make it a reasonable possibility.

    The meaning of the threshold is exactly that. It takes into account the realistic probabilistic resources which can be available, including the number of possible events (attempts). I have chosen 150 bits because that puts the probability low enough even if we consider all the probabilistic resources of a maximal living system on our planet (maximal population, highest replication rate, time equal to the whole life of our planet).

    Dembski’s original 500 bits UPB was instead meant to exclude random causes in the whole universe. But frankly, I think that’s conceding too much. Our models are about life on our planet, with specific minimal requirements (at least prokaryotes, reproduction times that, although short, are not in the order of quantum times, and so on).

    So, again, I insist: the probability of finding a target space in a random walk is not significantly influenced by the number of events, if the number of possible events is extremely low compared to the search space. It is certainly true that finding a functional protein in one single event is more unlikely than finding it in 10^9 attempts, but if the target space/search space ratio is, say, of 200 bits, the difference is not significant.
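
    The arithmetic behind this claim can be sketched in a few lines (an illustration only; the 200-bit target and the 10^9 attempts are the figures mentioned in the text, and the approximation assumes independent tries):

```python
import math

def log2_hit_probability(target_bits: float, attempts: float) -> float:
    """log2 of the approximate probability of at least one success in
    `attempts` independent tries, each succeeding with probability
    2**-target_bits. For tiny p, 1 - (1 - p)**N is roughly N * p."""
    return math.log2(attempts) - target_bits

# One attempt vs a billion attempts at a 200-bit target:
print(log2_hit_probability(200, 1))    # -200.0
print(log2_hit_probability(200, 1e9))  # about -170.1
```

    On this rough accounting, a billion attempts only shaves about 30 bits off a 200-bit improbability, which is the sense in which the number of attempts hardly matters.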

    And yet, intelligent people (like Ohno and many biologists) have believed for decades that nylonase emerged in one step through a frameshift mutation. That demonstrates that faith can overcome all probabilistic barriers.

  327. 330 gpuccio December 28, 2010 at 3:05 pm

    Mathgrrl:

    I still need your input about a definition of reproduction in Tierra, of viability in terms of single instruction sequences, and about homology between the “evolved” output and its precursor. No discussion can go on without those details (at least).

  328. 331 gpuccio December 28, 2010 at 3:07 pm

    Zachriel:

    However, as I have said many times, I am not a fan of any “scientific method” definition. I am more for Feyerabend than for Popper, I suppose.

  329. 332 Zachriel December 28, 2010 at 3:39 pm

    gpuccio: The answer is that it is a good way to ensure that the string is not the result of some necessity mechanism.

    gpuccio: But as usual, I meant that empirically, not logically. The best way to exclude, empirically, that a necessity mechanism is responsible for the output is to be sure that no such a mechanism is known. In empirical science, we cannot deal with what “could be”. We have to reason with what we know.

    That is also incorrect. It’s like searching under the streetlight for your lost keys because the light is better. Absent an exhaustive search, you would have to show that the limited search is representative of the domain under consideration. In this case, there are vast numbers of possible necessity mechanisms that you haven’t considered.

    gpuccio: Moreover, the simple fact that no protein engineer has been able to find a “simple” mechanism” to find functional protein sequences through some algorithm is good evidence, IMO, that suchn an algorithm does not exist or, if it exists, it is extremely complex, much more comples than its possible outputs.

    Functional proteins occur in random sequences at a frequency of at least ~10^-11.

    gpuccio: The origin of biological information is controversial. You can believe differently, but it is. We in ID don’t believe that biologists have really explained its origin.

    Not in the scientific or mathematical communities. There are very few outliers, and if the term scientific consensus has any meaning, it certainly applies to evolution.

    gpuccio: ID uses dFSCI to detect design. The property, when present, and excluding the cotroversial set of biological information which is indeed the object of our discussion, is always associated with the intervention of conscious intelligent beings, without exception. That’s why we infer that it is the best positive marker of design.

    No, it’s an unfounded assertion, a case of circular reasoning.

    gpuccio: ID makes at least as many predictions as the neo darwinian theory. Indeed, both are falsifiable, and one of them will be falsified.

    What are the *entailed* predictions from Intelligent Design Theory? That means to take the hypothesis, and derive the empirical implications.

    gpuccio: For instance, if we could demonstrate that the emergence of new basic protein domains happened in a very short time, and without any trace of selectable intermediaries, the neo darwinian model would be doomed, and design would be by far the best explanation.

    Notice that you didn’t find entailments in the Intelligent Design Hypothesis. In any case, proteins are particulate, so strict gradualism doesn’t always apply.

    This is similar to your fallacy of claiming to “ensure” against necessity mechanisms. Just because one natural mechanism may be discounted doesn’t mean you have discounted all possible natural mechanisms. In particular, there is the possibility of new domains appearing through various mechanisms such as exonization and recombination of existing exons.

    gpuccio: But at any moment we need some “best explanation”, to go on with our understanding of reality.

    Perfect example of God of the Gaps. We might speculate, but we don’t consider it a valid scientific conclusion without supporting evidence, i.e. testable entailments.

  330. 333 Toronto December 28, 2010 at 3:46 pm

    gpuccio,

    Here is a statement that you, not I, have made.

    gpuccio: 1) Wrong: I have specified many times to you that the designer can correct and complete his design in time. How many times must I repeat that?

    I don’t care who the designer is, but you must have had some scientific evidence about the manner in which the designer works, to base this statement on.

    What is that scientific evidence?

  331. 334 Zachriel December 28, 2010 at 3:49 pm

    gpuccio: However, as I have said many times, I am not a fan of any “scientific method” definition.

    It’s not the term at issue. Are you admitting, then, that there are no testable entailments of the assertion of Intelligent Design?

    gpuccio: If protein A effects a transition to (unrelated) protein B through a functional mutation of 50 AAs (which is well more than 150 bits), …

    Large changes can occur through recombination.

    gpuccio: … to affirm dFSCI we have to reasonably assume that no intermediary step was positively selected and amplified.

    Where is that in the definition?

    1. The sequence must have a recognizable function.
    2. The sequence must resist compression.
    3. We take the (negative natural logarithm) of the ratio of sequences that exhibit the function to the number of possible sequences.

    And even if we don’t know the intermediates, that doesn’t mean they never existed. We have evidence of Common Descent which pervades biology, but we certainly can’t expect to have details of every transition, especially molecular evolution that occurred millions or billions of years ago. What we do have are the testable entailments of Common Descent.

  332. 335 Zachriel December 28, 2010 at 4:19 pm

    (negative natural logarithm)

    Cut and pasted the wrong section. Should be (negative log2).

  333. 336 MathGrrl December 28, 2010 at 4:50 pm

    gpuccio,

    Let’s take this step by step so that we can avoid wasting any time.

    “Are you claiming that only de novo changes of more than 150 bits in a single step constitute dFSCI, or am I wrongly inferring the single step constraint?”

    . . .

    The important point is not the number of steps. The important point is that the transition of more than 150 functional bits has to happen without any help by necessity mechanisms which favour a specific result.

    Okay, my inference that all of the changes had to happen in a single step was incorrect. Thank you for clarifying that point.

    Your statement that “. . . the transition . . . has to happen without any help by necessity mechanisms which favour a specific result.”, suggests that known evolutionary mechanisms of inheritance, mutation, recombination (which is not used in Tierra or ev), and competition for resources meet your criteria for the ability to generate dFSCI in principle. Mutations are random with respect to fitness and the other evolutionary mechanisms result in differential reproductive success that favors those organisms that best make use of the resources in the environment, without regard to how they do so.

    Do you agree that these mechanisms could, again in principle, generate dFSCI?

  334. 337 Petrushka December 28, 2010 at 5:55 pm

    MG: “Do you agree that these mechanisms could, again in principle, generate dFSCI?”
    +++++++++++++++++++++++++
    Apparently not:
    +++++++++++++++++++++++++
    GP: “to affirm dFSCI we have to reasonably assume that no intermediary step was positively selected and amplified.”
    +++++++++++++++++++++++++

    It appears that dFSCI is indistinguishable from irreducible complexity.

    So here’s a question. Is it possible for ID to determine that a specific sequence is the result of designer intervention, and if so, what becomes of this determination if an intermediate sequence is found?

  335. 338 Petrushka December 28, 2010 at 9:07 pm

    Do you agree that these mechanisms could, again in principle, generate dFSCI?

    ++++++++++++++++++++++

    Another way of phrasing this is to ask what physical phenomenon or mechanism required for incremental evolution has not been observed?

  336. 339 Petrushka December 29, 2010 at 5:41 pm

    Is it time for a new continuation?

  337. 340 ptz ip camera December 29, 2010 at 5:56 pm

    Thanks because of this! I’ve been searching all above the web for that facts.

  338. 341 gpuccio December 29, 2010 at 9:12 pm

    Zachriel:

    “Absent an exhaustive search, you would have to show that the limited search is representative of the domain under consideration.”

    But I am sure that a very exhaustive search is being conducted continuously by all neo-darwinian believers. I have the utmost faith that, if they have not been able to find a convincing necessity mechanism, nobody else would.

    “Functional proteins occur in random sequences at a frequency of at least ~10^-11.”

    Again!!! Well, let’s say “naturally selectable functional proteins”. That’s the only functional standard which can operate in the neo darwinian mechanism.

    “Not in the scientific or mathematical communities. There are very few outliers, and if the term scientific consensus has any meaning, it certainly applies to evolution.”

    Scientific consensus has neither the purpose nor the power to define scientific truth or to prevent alternative views. Science is not a political dictatorship, or at least it should not be.

    “No, it’s an unfounded assertion, a case of circular reasoning.”

    It is not circular. But if you want, you can join Mark in the club of the “irrational believers in ID’s circularity”. Have fun.

    At least, Mark has tried hard, and with some remarkable ingenuity, to show why he thinks that it is circular (without success, IMO).

    “What are the *entailed* predictions from Intelligent Design Theory. That means to take the hypothesis, and derive the empirical implications.”

    That basic protein domains emerged rather suddenly, and completely out of any chronological range compatible with a neo darwinian mechanism, is a very definite prediction.

    That the functional structure of protein space is such that transitions between protein domains are impossible through a neo darwinian model is another one.

    “Notice that you didn’t find entailments in the Intelligent Design Hypothesis. In any case, proteins are particulate, so strict gradualism doesn’t always apply.”

    What do you mean? The entailment is that the information for a functional protein can only be set by a control of configurable switches (design), and in no other known way. The entailment is that the current accepted explanation (neo darwinian mechanism) is bollocks.

    “This is similar to your fallacy concerning your “ensurance” concerning necessity mechanisms. Just because one natural mechanism may be discounted doesn’t mean you have discounted all possible natural mechanisms. In particular, there is the possibility of new domains appearing through various mechanisms such as exonization and recombination of existing exons.”

    I am afraid I have to quote Toronto’s point about aliens: where did the existing exons come from? Please note that I have restricted my argument to basic protein domains which, being unrelated at the primary sequence level (less than 10% homology), cannot derive from recombination. You are really clutching at straws.

    So your point seems to be: “I really don’t want to consider the design hypothesis, which would explain what we observe, and I will refuse any evidence in its favor, even if all my supposed explanations fall to pieces, clutching at my faith that some completely unexpected necessity mechanism may come to mind in some distant future. In the meantime, I suspend science and stick to my faith. Anything, provided that I may remain an intellectually satisfied darwinist.”

  339. 342 gpuccio December 29, 2010 at 9:16 pm

    Toronto:

    “I don’t care who the designer is, but you must have had some scientific evidence about the manner in which the designer works, to base this statement on.”

    As already said, the sequence was as follows:

    a) You stated that the designer could only be the christian god.

    b) I said: that’s not true, the designer need not be omniscient, he can just adjust things according to feedback, as we human designers do.

    That is a logical point, it does not need scientific evidence. It is a simple answer to your wrong logical point.

    And anyway, I proposed the ediacara and cambrian explosions as an example reasonably interpretable as a design input which for some reason did not work, and was substituted by another design input.

  340. 343 gpuccio December 29, 2010 at 9:36 pm

    Zachriel:

    “It’s not the term at issue. Are you admitting, then, that there are no testable entailments of the assertion of Intelligent Design?”

    No. But I will not accept any rigid definition of what is science.

    “Large changes can occur through recombination.”

    Again. No. Not between unrelated sequences. Recombination just remixes existing functional information. But in the case of basic protein domains, it cannot help you.

    “Where is that in the definition?”

    It’s the non-necessity/non-compressibility part. Selectable intermediaries would imply a necessity mechanism which amplifies the probabilistic resources. dFSCI can only be calculated on segments where such necessity mechanisms do not intervene.

    You all seem not to understand the real role of dFSCI in ID theory. The idea is that some outputs cannot be obtained by necessity, or RV, or a mix of the two. Necessity must be reasonably excluded by a careful analysis of the system. And the computation of the functional complexity is necessary to reasonably exclude a random generation of the output, or at least of the parts which are supposed to come out in the absence of a necessity mechanism.

    In theory, dFSCI could well not exist. If we did not observe it so easily in human artifacts, we would not even know that such a thing is possible.

    It’s conscious design, and only conscious design, which is able to generate dFSCI, violating all apparent rules of non-conscious systems, whose results are limited to what necessity or chance can reasonably do.

    That’s why the ID argument is not circular: because, if the world were the deterministic machine that reductionists seem to believe in, then dFSCI should simply not exist. But consciousness and intelligence change all that. Consciousness and intelligence are able to do, easily, what even the most complex machine cannot do: generate new, original dFSCI all the time.

    So, if I can give a suggestion, your best line of defense is not to try to demonstrate that dFSCI does not exist (it does exist in human artifacts), or that it is a circular concept (it is not). Your only remaining recourse would be to demonstrate that biological information, contrary to all apparent evidence, is not dFSCI, because it can be generated by a necessity mechanism. And IMO, the neo-darwinian model, the classic, solid one, is still your best chance: not because it is good (it isn’t), but because there is practically nothing else in your field.

    So, go on. Show that the model works. Show that naturally selectable intermediaries, not too distant one from the other, so that RV can reasonably do the trick, do exist for all basic protein domains.

    Do your work, instead of clutching at silly philosophical arguments, or of trying to win a game you have already lost by having your adversary “disqualified”.

    “We have evidence of Common Descent which pervades biology, but we certainly can’t expect to have details of every transition, especially molecular evolution that occurred millions or billions of years ago.”

    But it is strange that we have no detail of any basic domain-generating transition, even of those which occurred much later. Remember that new basic domain superfamilies have continued to emerge throughout natural history.

    And, as already said, CD does not tell us anything about the causal mechanism. I believe in CD, and in design.

  341. 344 gpuccio December 29, 2010 at 9:43 pm

    Mathgrrl:

    “Do you agree that these mechanisms could, again in principle, generate dFSCI?”

    Not competition for resources, if it means amplification of some new function by NS. That is necessity. We have to calculate dFSCI only for the transitions where necessity does not intervene. As I have already said, the amplification of probabilistic resources effected by a selection mechanism makes the concept of dFSCI useless (or, in the alternative, one would have to recompute the probabilistic resources, and change the threshold, but frankly I would not suggest that).

    Better to apply the concept only to the RV parts, where no necessity mechanism is in sight.

    I would like to specify some more ideas in my next answer to Petrushka, so I would ask you to continue reading next post.

  342. 345 gpuccio December 29, 2010 at 9:58 pm

    Petrushka:

    My sincere compliments. I think you have understood the point correctly:

    “MG: “Do you agree that these mechanisms could, again in principle, generate dFSCI?”
    +++++++++++++++++++++++++
    Apparently not:
    +++++++++++++++++++++++++
    GP: “to affirm dFSCI we have to reasonably assume that no intermediary step was positively selected and amplified.”
    +++++++++++++++++++++++++

    That is absolutely correct.

    “It appears that dFSCI is indistinguishable from irreducible complexity.”

    I agree with you. In essence, they are the same concept. But the formulation and the context are different. Behe’s concept of IC is traditionally applied to complex molecular machines, made of different parts, and is more a logical concept than a quantitative tool.

    dFSCI is best applied to single proteins, or even better to single protein domains. But, if you consider the aminoacids as “parts”, and the protein sequence as the “complex machine”, the concept is similar.

    “So here’s a question. Is it possible for ID to determine that a specific sequence is the result of designer intervention, and if so, what becomes of this determination if an intermediate sequence is found?”

    According to ID theory, each basic protein domain complex enough to exhibit dFSCI is best explained by a designer intervention. Of the 35 protein families analyzed by Durston in his paper, 28 have a functional complexity higher than 150 bits and so, according to my personal threshold, they can safely be considered as exhibiting dFSCI, and best explained by a designer intervention.

    If a selectable intermediary is found for any of them, what happens is simple: the assumption of dFSCI is invalidated, and the calculation is made again for the selectable intermediate, and then for the transition to the final output. If neither of the two parts exhibits dFSCI, the neo-darwinian mechanism becomes again a viable explanation for that case.

    That’s how science goes on.

    If, on the other hand, no naturally selectable intermediate is ever found, or if only extremely rare cases are found, design remains the best explanation for most of the cases.

    At the same time, our advancing understanding of the protein functional space, and of realistic probabilistic resources in biological systems, will certainly allow a better evaluation of a functional threshold for dFSCI. I suspect that in suggesting 150 bits I have still been too generous…
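The recalculation described in this comment can be made concrete with a toy computation. In the sketch below, the probabilities are purely hypothetical; only the 150-bit threshold is taken from the discussion:

```python
from math import log2

THRESHOLD_BITS = 150  # the conventional dFSCI threshold used in this thread

def functional_bits(p_target):
    """Functional complexity: -log2 of the probability of reaching the
    target function by pure random variation (hypothetical figures)."""
    return -log2(p_target)

# Hypothetical: the full transition has probability 2**-200 under RV alone.
full = functional_bits(2**-200)
print(full > THRESHOLD_BITS)   # True: 200 bits, dFSCI affirmed

# A selectable intermediate splits the transition into two RV segments.
seg1 = functional_bits(2**-100)
seg2 = functional_bits(2**-100)
print(seg1 > THRESHOLD_BITS or seg2 > THRESHOLD_BITS)   # False: dFSCI withdrawn
```

On this accounting, selection between the two segments acts as a necessity mechanism, which is exactly why the calculation is restricted to segments where no selection intervenes.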

  343. 346 gpuccio December 29, 2010 at 9:59 pm

    Petrushka:

    “Another way of phrasing this is to ask what physical phenomenon or mechanism required for incremental evolution has not been observed?”

    Naturally selectable intermediates which are steps towards complex functions.

  344. 347 gpuccio December 29, 2010 at 10:03 pm

    “Is it time for a new continuation?”

    Maybe.

    Anyway, I have checked: Mark’s initial post is of November 15.

    Whatever the judgement about the quality of the discussion (mine, however, is very positive), it is a fact that we have been discussing, in acceptable harmony, for a month and a half.

    That’s something, IMO.

  345. 348 Zachriel December 29, 2010 at 10:53 pm

    gpuccio: But as usual, I meant that empirically, not logically. The best way to exclude, empirically, that a necessity mechanism is responsible for the output is to be sure that no such mechanism is known. In empirical science, we cannot deal with what “could be”. We have to reason with what we know.

    Then …

    gpuccio: But I am sure that a very exhaustive search is being conducted continuously by all neo-darwinian believers. I have the utmost faith that, if they have not been able to find a convincing necessity mechanism, nobody else would.

    Sorry, but no one has or can explore every possible “necessity mechanism”, including those not yet considered or imagined. That’s why negative claims are rarely useful unless the domain is strictly limited. It’s also why the scientific method was developed — to be able to reach reasonable conclusions absent such a search.

    Your claim is too broad to be of value.

    ———

    gpuccio: Moreover, the simple fact that no protein engineer has been able to find a “simple” mechanism to find functional protein sequences through some algorithm is good evidence, IMO, that such an algorithm does not exist or, if it exists, it is extremely complex, much more complex than its possible outputs.

    Zachriel: Functional proteins occur in random sequences at a frequency of at least ~10^-11.

    gpuccio: Again!!! Well, let’s say “naturally selectable functional proteins”.

    Your claim concerned protein engineering. It’s not even a complicated algorithm, just random sequences. The science is still in its infancy, but the result provides a lower limit to the density of active domains.
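The arithmetic behind that lower limit is worth spelling out. In the sketch below, only the 10^-11 density comes from the thread; the number of trials is a hypothetical stand-in for a population-scale random search:

```python
density = 1e-11   # quoted lower bound on functional proteins in random sequences
trials = 1e15     # hypothetical: sequences sampled by a large population over time

expected_hits = density * trials
print(round(expected_hits))   # 10000: rarity per sequence does not mean zero hits
```

Whether natural populations actually explore that many sequences is, of course, part of what the two sides here dispute.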

    ———

    gpuccio: The origin of biological information is controversial. You can believe differently, but it is. We in ID don’t believe that biologists have really explained its origin.

    Zachriel: Not in the scientific or mathematical communities. There are very few outliers, and if the term scientific consensus has any meaning, it certainly applies to evolution.

    gpuccio: Scientific consensus has neither the purpose nor the power to define scientific truth or to prevent alternative views. Science is not a political dictatorship, or at least it should not be.

    Nor did we make an appeal to authority. *You* claimed there was a controversy. Perhaps, but it’s not in the scientific or mathematical communities.

    ———

    Zachriel: What are the *entailed* predictions from Intelligent Design Theory. That means to take the hypothesis, and derive the empirical implications.

    gpuccio: That basic protein domains emerged rather suddenly, and completely out of any chronological range compatible with a neo darwinian mechanism, is a very definite prediction. That the functional structure of protein space is such that transitions between protein domains are impossible through a neo darwinian model is another one… The entailment is that the current accepted explanation (neo darwinian mechanism) is bollocks.

    That’s not an entailment of Intelligent Design Theory, but a negative argument against some version of evolutionary theory.

  346. 349 Zachriel December 29, 2010 at 11:54 pm

    gpuccio: But I will not accept any rigid definition of what is science.

    To have scientific credibility, your claim has to entail specific empirical predictions that distinguish your claim from other competing claims.

    gpuccio: Recombination just remixes existing functional information. But in the case of basic protein domains, it cannot help you.

    That is not correct. This knowledge will probably forever remain outside your purview, as you have already decided that evolutionary algorithms can’t help us understand the process. But to give you an idea, if certain amino acids (say Ala, Gly, Val, Asp and Glu) are more likely to form soluble proteins, then a population of functional proteins that consists of a high proportion of these amino acids is more likely to generate, through recombination, new soluble proteins when compared to random sequences consisting of an even proportion of twenty amino acids. Similarly for other traits, such as the distribution of hydrophobic and hydrophilic residues. Recombination is a powerful mechanism for generating variation.
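This composition argument can be sketched as a toy simulation. The alphabet below is the twenty amino acids in one-letter code; the 70% bias and the sequence length are illustrative assumptions, not data:

```python
import random

random.seed(0)
ALPHABET = "ACDEFGHIKLMNPQRSTVWY"   # 20 amino acids, one-letter codes
FAVORED = "AGVDE"                   # Ala, Gly, Val, Asp, Glu

def biased_parent(n=100, bias=0.7):
    # A "functional" parent enriched in the favored residues (hypothetical bias).
    return "".join(random.choice(FAVORED) if random.random() < bias
                   else random.choice(ALPHABET) for _ in range(n))

def recombine(p1, p2):
    # Single-crossover recombination of two parent sequences.
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def favored_fraction(seq):
    return sum(c in FAVORED for c in seq) / len(seq)

child = recombine(biased_parent(), biased_parent())
uniform = "".join(random.choice(ALPHABET) for _ in range(100))
# The recombinant inherits its parents' bias; a uniform random sequence does not.
print(favored_fraction(child) > favored_fraction(uniform))
```

The point of the sketch is only that recombination preserves and reshuffles whatever statistical structure the parent population already carries, so its output is not comparable to uniform random sampling of sequence space.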

    gpuccio: … to affirm dFSCI we have to reasonably assume that no intermediary step was positively selected and amplified.

    Zachriel: Where is that in the definition?

    gpuccio: It’s the non-necessity/non-compressibility part.

    Ah, so by noncompressible, you mean non-necessity mechanism. Now it’s clear. So, if you eliminate all known necessity mechanisms, then design. The problem, of course, is that you can’t eliminate all necessity mechanisms. Not knowing why the planets trace complicated orbits across the sky is not evidence of design. Ignorance is not evidence.

    gpuccio: But it is strange that we have no detail of any basic domain generating transition, even of those which occurred much later.

    Why would you expect evidence for protein evolution to be easy to come by? Being particulate, sequences can change so much as to become unrecognizable. Ignorance is not evidence.

    Over the last several years, a great deal of data has become available. The same fold may occur in separate lineages due to the limitations of packing in a three-dimensional space. Or new folds can occur in a given lineage. Protein families have been unified into superfamilies, and even larger phylogenetic groupings.

    There are a number of outstanding problems, including divergence of phenetic and phyletic analysis because structures can change so much over time. But that’s to be expected. Fold and homology, though strongly correlated, can evolve in different directions.

    Scheeff & Bourne, Structural evolution of the protein kinase-like superfamily, PLoS Comput Biol 2005.

    Grishin, Fold Change in Evolution of Protein Structures, Journal of Structural Biology 2000.

    Yang & Bourne, The Evolutionary History of Protein Domains Viewed by Species Phylogeny, PLoS ONE 2009.

    gpuccio: And, as already said, CD does not tell us anything about the causal mechanism.

    Sure it tells us something, though perhaps not everything we’d like to know. Any mechanism has to be consistent with Common Descent, including that humans descended from more primitive hominids that descended from more primitive apes.

  347. 350 Toronto December 29, 2010 at 11:57 pm

    gpuccio,

    Let’s try this one more time and please, answer the questions that I ask and I promise to do the same.

    ID says that life is too complex to have arisen without a designer.

    Based on that, on the fact that living beings require design, can the designer have been a living being?

  348. 351 Petrushka December 30, 2010 at 3:25 pm

    And, as already said, CD does not tell us anything about the causal mechanism.

    +++++++++++++++++++++++++++

    As Zach points out, common descent constrains the universe of possible physical mechanisms and histories.

    It’s interesting that poofism has no such constraints. There are perhaps ten times as many coding sequences as there are folds, and yet sequences obey the constraints of nested hierarchy.

  349. 352 MathGrrl December 30, 2010 at 4:28 pm

    gpuccio,

    “Do you agree that these mechanisms could, again in principle, generate dFSCI?”

    Not competition for resources, if it means amplification of some new function by NS. That is necessity. We have to calculate dFSCI only for the transitions where necessity does not intervene.

    It would have saved both of us some time if you had made that clear when we were starting to step through the calculation of dFSCI in Tierra and ev. Since both of those simulators, like the vast majority of GAs, model known evolutionary mechanisms, it is impossible for them to generate dFSCI by your definition.

    Your definition means that dFSCI cannot be used as a metric to identify the intervention of an intelligent actor. Consider the case where you have determined, through some means, that more than 150 bits of dFSCI are present in a particular artifact (biological, digital, or otherwise). New research then shows that there is a viable pathway using known evolutionary mechanisms that results in that artifact. Suddenly the dFSCI drops from over 150 bits to zero.

    dFSCI isn’t an objective indicator of intelligence, it’s an indicator that one is ignorant of how an artifact arose. Unless you can eliminate every possible “necessity” mechanism, you cannot claim that any particular system exhibits dFSCI. That makes it useless as a metric.

  350. 353 Petrushka December 30, 2010 at 4:46 pm

    I suspect that in suggesting 150 bits I have still been too generous…
    ++++++++++++++++++++++++++
    I’m sure the goalpost will move over time.

  351. 354 Petrushka December 30, 2010 at 6:28 pm

    We have to calculate dFSCI only for the transitions where necessity does not intervene.
    ++++++++++++++++++++++++++++++++++

    Is anyone else left wondering how you know that *necessity* has not intervened?

  352. 355 Petrushka December 30, 2010 at 7:28 pm

    And if you don’t know the change history, how do you know how large the changes were and what caused them?

  353. 356 gpuccio December 31, 2010 at 1:40 pm

    Toronto:

    “ID says that life is too complex to have arisen without a designer.

    Based on that, on the fact that living beings require design, can the designer have been a living being?”

    Same wrong point made once by Mark (and maybe also by others).

    ID does not say anything about “life”. ID is very specific in its statements. I certainly try to be very specific.

    ID says that many structures in biological beings (such as the genome and the proteome) exhibit complex functional information, which is best explained by design.

    That has nothing to do with the concept of “life” (a concept that I have personally never used in my arguments, because it is too vague and ill defined).

    A protein is not a living being. An enzyme can accelerate some biochemical reaction in a lab context, where no living being is required. Still, a protein exhibits complex functional information, just as any non-living machine can.

    So, your reasoning about “life” is completely out of context.

    Have I answered your question?

  354. 357 gpuccio December 31, 2010 at 3:16 pm

    Zachriel and Petrushka:

    A general comment about your shared argument of the form:

    “How can you be sure that some future necessity mechanism will not be found?”

    The argument is really weak, and if that is what you are left with, I am happy that I am not in your shoes.

    Design detection is a very successful procedure which, in all apparent human artifacts, can detect design correctly without any false positives, if a strict detection tool such as dFSCI is used.

    That is a fact.

    Now, suppose that a meteorite falls on our planet. We know nothing about its origins, but we are sure that it is old and comes from outer space.

    In the core of the meteorite, a stone tablet is found, with what looks like a very long inscription interpretable as a binary sequence.

    After long and careful studies, it is found that the sequence of bits, if correctly inserted into a computer, with the necessary adjustments, is a complex piece of statistical software, something like R with all its packages, and can successfully perform most known statistical evaluations on datasets.

    Now, the first reaction is: well, that’s astonishing; whoever wrote this inscription must certainly be a conscious intelligent being with a good understanding of statistics. That seems a remarkable fact upon which to build scientific hypotheses about the origin of this information.

    But then a group of smart people comes along, and says: no, you silly guys; what looks like statistical software here only appears to have been designed. In reality, it came out of a complex mechanism involving both random events and some necessity mechanisms.

    So, the smart group goes on to detail the mechanism (indeed, only after some pressure has been exerted by the audience).

    At first, the audience is astonished: the mechanism seems so simple and brilliant. Everything is explained.

    But, after some reflection, the mechanism is analyzed in more detail, and after some controversy everybody can see that it is completely wrong, and in no way can it explain the spontaneous generation of a complex piece of statistical software.

    So, some people in the audience say: well, we have then to go back to the first explanation: the software was designed. After all, there is no other credible explanation available.

    But the smart people are diehards: they reason that, indeed, it is not scientific even to consider the possibility that the software has been designed, because the very idea that non-human agents may know statistics is intrinsically blasphemous, and moreover it is against a fundamental principle of science, known as the methodological anthropomorphism of statistical thought, which denies any validity to hypotheses like that.

    And it entails no interesting predictions, except maybe the possibility that new meteorites may fall in the future with complete spreadsheets in them.

    But the most important point is: how can we hypothesize design as the best explanation for a statistical software, when we cannot be sure that, in some distant future, some new necessity mechanism will be found to explain how signs on a rock must arrange themselves in that particular form? After all, a recombination of partial inscriptions rich in bits of the 0 type could explain what simple probabilistic considerations cannot explain. Or the rock could have some special structure amenable to evolutionary exploration. What need have we to consider, even for an instant, the weird idea that consciousness and intent were involved in the generation of a software?

    That would not be science, after all, but simply some despicable agenda to revive the irrational belief in alien statisticians.

  355. 358 gpuccio December 31, 2010 at 3:28 pm

    Petrushka:

    “And if you don’t know the change history, how do you know how large the changes were and what caused them?”

    We do observe the result of the changes. For instance, a new basic protein domain emerges at some specific point of natural history.

    Now, if the darwinian mechanism entails one prediction, it is that the new structure must have come out through a number of naturally selectable, intermediate functional steps, each of them in the range of RV, each of them expanded at some time to represent, if not all, at least a great part of the population.

    Well, is that prediction confirmed by the facts? Absolutely not. We cannot see even one such functional intermediate in the genomes and proteomes.

    But you say: well, all of them were substituted by the final, more functional forms.

    Well, it is really difficult to believe such a weird thing, in a biological world which, by the darwinists’ own admission, is replete with evolutionary burden, pseudogenes, junk DNA, vestigial organs, and so on.

    How is it possible, say I, that so many useless witnesses of evolution have been preserved in what we observe today, and yet there is no trace of the millions (or more) of functional, expanded molecules which, in their day, were ubiquitous on our planet? Not a trace?

    And anyway, I continue, even if we find no trace in the genomes, why is it that our protein engineers have found no trace of those naturally selectable functional intermediates in their labs? Not through top-down methods, not through bottom-up algorithms, not through computer simulations or just sheer luck. Not a trace.

    Isn’t that amazing? Doesn’t that worry you, even a bit?

    But no, why would that be? After all, any reasonable reflection on those problems would not be science.

  356. 359 gpuccio December 31, 2010 at 3:34 pm

    MathGrrl:

    By the way, my previous two posts were intended also as an answer to you. I just forgot to include you in the header (I apologize for that).

  357. 360 Petrushka December 31, 2010 at 3:34 pm

    After long and careful studies, it is found that the sequence of bits, if correctly inserted into a computer, with the necessary adjustments, is a complex piece of statistical software
    +++++++++++++++++++++++++++++++++
    Can you provide a real example of this from history? A text deciphered without reference to a Rosetta stone or to the evolutionary roots of the language?

    Jiahu, Vinča, Dispilio, Banpo, Proto-Elamite, Linear Elamite, Linear A, Cretan hieroglyphs, Wadi, Byblos, Phaistos, Sitovo, Olmec, Isthmian, Zapotec, Mixtec, Quipu, Issyk, Khitan, Rongorongo?

    The decipherment of languages relies on the fact that language evolves incrementally.

  358. 361 Petrushka December 31, 2010 at 3:41 pm

    How is it possible, say I, that so many useless witnesses of evolution have been preserved in what we observe today, and yet there is no trace of millions (or more) of functional, expanded molecules which, at their good times, were ubiquitous on our planet? Not a trace?

    +++++++++++++++++++++++

    How is it possible that most of the species that have existed are extinct?

    How is it possible that so many languages once fully functional are extinct? How is it possible that the Basque language exists without obvious connections to other European languages?

  359. 362 gpuccio December 31, 2010 at 3:41 pm

    Petrushka:

    “I suspect that in suggesting 150 bits I have still been too generous…
    ++++++++++++++++++++++++++
    I’m sure the goalpost will move over time.”

    Nobody is moving goalposts. Maybe the contrary.

    If I were asking for more bits to affirm design, maybe you could be right. But I am hypothesizing that we can probably use fewer bits. I am saying that, realistically, it is probably easier to detect design than I originally supposed.

    As I have always said, the threshold for dFSCI has to be conventionally established according to what is known about the system it refers to, and its probabilistic resources. If it is confirmed that a realistic biological system, like a bacterial system, even a very widespread one, cannot, in the times reasonably available in natural history, achieve any more than, say, 30 or 40 bits of random coordinated functional mutations, then it is obvious that my conservative threshold of 150 bits should be lowered.
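The calibration proposed here amounts to counting trials: express a system’s probabilistic resources in bits as log2 of the total number of mutational trials, then compare that to the threshold. All figures in this sketch are hypothetical, not measured values:

```python
from math import log2

# Hypothetical bacterial system (invented round numbers)
population = 1e9     # individuals
generations = 1e6    # generations in the available time window
mutations = 1.0      # relevant mutations per individual per generation

trials = population * generations * mutations   # total mutational trials
resource_bits = log2(trials)
print(round(resource_bits, 1))   # ~49.8 bits of probabilistic resources
```

On those invented numbers, a 150-bit threshold sits roughly 100 bits above the system’s resources, which is the sense in which the threshold is called conservative.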

    What has that to do with “moving goalposts”? Will darwinists never get rid of the bad habit of throwing unwarranted trivial accusations and stereotypes at their interlocutors?

  360. 363 Petrushka December 31, 2010 at 3:45 pm

    But while we are on the subject of alien texts, is the Voynich manuscript a real, meaningful text, or a hoax?

    Feel free to apply the methods of design detection.

  361. 364 gpuccio December 31, 2010 at 3:48 pm

    Petrushka:

    “How is it possible that most of the species that have existed are extinct?”

    Most maybe, but not all. And even the extinct ones have left many traces, otherwise how could you know that they existed?

    “How is it possible that so many languages once fully functional are extinct? How is it possible that the Basque language exists without obvious connections to other European languages?”

    First of all, languages are the products of consciousness and of intelligent beings. Second, the connections may not be obvious, but I suppose they do exist (I am not a language expert).

    My point is that there is no trace of any functional, naturally selected, expanded intermediary for any of the basic protein domains which emerged at some point in natural history. I am not saying that we must find a trace of all of them.

  362. 365 gpuccio December 31, 2010 at 3:49 pm

    Petrushka:

    “Can you provide a real example of this from history? A text deciphered without reference to a Rosetta stone or to the evolutionary roots of the language?”

    For my example to be understood, the sequence in the meteorite could even be written in C++. That would open new interesting questions, but the meaning remains the same.

  363. 366 Petrushka December 31, 2010 at 3:54 pm

    Put another way, if I recoded the symbol stream of the Voynich manuscript into a binary stream, would ID methods be able to determine whether it is created by intelligence, or possibly the output of a random process?

    Do you see any problems at all in asserting that one can detect design without reference to the attributes of the designer?

  364. 367 gpuccio December 31, 2010 at 3:54 pm

    Petrushka:

    “But while we are on the subject of alien texts, is the Voynich manuscript a real, meaningful text, or a hoax.”

    I am really flattered that you consider me such an authority on practically anything. But I really have no idea. And I have no reason to spend my time forming one.

    I have detailed my argument in the field we are discussing: protein sequences. And, more in general, in the field of human artifacts.

    Have you any doubts that Hamlet was written by a conscious, intelligent (and brilliant) being?

  365. 368 gpuccio December 31, 2010 at 3:59 pm

    Petrushka:

    Just out of curiosity, I spent 5 minutes reading the Wikipedia page about the Voynich manuscript (I should be grateful to you for enriching my personal culture so much).

    A very simple answer would be: I am sure it is a designed artifact. But maybe you believe that in the future some necessity algorithm will be discovered that can explain its emergence.

  366. 369 Petrushka December 31, 2010 at 4:01 pm

    For my example to be understood, the sequence in the meteorite could even be written in C++. That would open new interesting questions, but the meaning remains the same.
    +++++++++++++++++++++++++++++

    In the English alphabet?

    We know a great deal about humans and the products of humans. But even in archaeology, there are controversies about what are man made tools and what are accidental chips of flint.

    We already have an artifact like the one you imagined. Is the Voynich manuscript a text or is it gibberish?

  367. 370 Petrushka December 31, 2010 at 4:02 pm

    A very simple answer would be: I am sure it is a designed artifact
    +++++++++++++++++++++
    Don’t evade the underlying question. If it were coded in binary, how would you make that determination?

  368. 371 gpuccio December 31, 2010 at 4:04 pm

    Petrushka:

    “Put another way, if I recoded the symbol stream of the Voynich manuscript into a binary stream, would ID methods be able to determine whether it is created by intelligence, or possibly the output of a random process?”

    The simple binary sequence would not be approachable by conventional ID methods (such as dFSCI), in the absence of a recognizable functional specification. Other tools, such as language analysis and similar, could probably help.
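    One simple example of the kind of statistical tool mentioned here (a hedged sketch, not a method from the ID literature): block-entropy analysis, which separates highly repetitive, law-like bit streams from near-random ones, though it cannot by itself recognize function or meaning:

    ```python
    import math
    import random
    from collections import Counter

    def block_entropy(bits, k):
        """Shannon entropy, in bits per block, of non-overlapping k-bit
        blocks of a 0/1 string. Maximum is k (uniform blocks), minimum 0."""
        blocks = [bits[i:i + k] for i in range(0, len(bits) - k + 1, k)]
        n = len(blocks)
        counts = Counter(blocks)
        return sum((c / n) * math.log2(n / c) for c in counts.values())

    # A repetitive ("necessity-like") stream scores near 0; a coin-flip
    # stream approaches the maximum of k bits per block.
    repetitive = "01" * 512
    random.seed(0)
    noisy = "".join(random.choice("01") for _ in range(1024))
    print(block_entropy(repetitive, 4))  # 0.0: only the block "0101" occurs
    print(block_entropy(noisy, 4))       # close to the maximum of 4.0
    ```

    Note that a compressed or encrypted designed text would also score near the maximum, which is exactly why entropy alone cannot settle the Voynich question.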

    The manuscript in its actual form is certainly designed, however.

    But you know, we can really see what an enzyme does, and how it works. That needs no special reference to the attributes of the designer (except, obviously, the basic attributes of all designers: consciousness, intelligence, intent).

  369. 372 gpuccio December 31, 2010 at 4:06 pm

    Petrushka:

    “Don’t evade the underlying question. If it were coded in binary, how would you make that determination?”

    I was not evading anything: I just had not yet read your following post. I have answered in my next post.

    Always a kind and understanding interlocutor, aren’t you, Petrushka?

  370. 373 MathGrrl December 31, 2010 at 4:07 pm

    gpuccio,

    Mathgrrl:

    By the way, my previous two posts were intended also as an answer to you. I just forgot to include you in the header (I apologize for that).

    With respect, those posts do not address any of the points I raised, including the core one that the definition of dFSCI that you most recently provided makes it useless as a metric for identifying intelligence.

    In actuality, you completely ignore that fact when you make this statement:

    Design detection is a very successful procedure which, in all apparent human artifacts, can detect design correctly without any false positives, if a strict detection tool such as dFSCI is used.

    That is a fact.

    Your assertion is far from a fact. dFSCI, according to your most recent definition and clarifications, is not a detection tool. When we first began discussing dFSCI, you claimed that it was an objective measurement that has only been shown to exceed 150 bits when intelligent input was known to have taken place. You further claimed that it could be applied to biological systems to determine whether or not intelligent input was required for those systems to exist. Now, however, you have defined dFSCI as requiring intelligent input (no “chance” or “necessity”). That contradicts your original claim that dFSCI is an objective measurement that can be used to show that intelligent input is required when that is not known beforehand.

    dFSCI is also not at all immune to false positives, as I noted in my previous post. Since it is more a measure of ignorance about a system than it is of intelligent input, new knowledge could easily reduce a calculated value of dFSCI to zero.

    Your original claims about dFSCI were potentially testable, which is why I have been spending the time to understand them in sufficient detail to calculate it. Your current claims are no more than an argument from incredulity, dressed up in the language of mathematics. Essentially what you are saying when you claim that something exhibits dFSCI is that it appears too complex to have arisen by natural mechanisms. In order to support such a claim, you would need to eliminate all possible alternatives, but you make no effort to do so.

    If you disagree with my conclusion that dFSCI is, as now defined, useless as a metric, I would be very interested in a direct response to the arguments in my post dated December 30, 2010 at 4:28 pm.

  371. 374 gpuccio December 31, 2010 at 4:10 pm

    Petrushka:

    “We know a great deal about humans and the products of humans. But even in archaeology, there are controversies about what are man made tools and what are accidental chips of flint.”

    And so? I supposed you had understood the basic point that the ID procedure can detect design with no false positives, but with a lot of false negatives.

  372. 375 Petrushka December 31, 2010 at 4:10 pm

    I have detailed my argument in the field we are discussing: protein sequences. And, more in general, in the field of human artifacts.
    ++++++++++++

    Excuse me, but you are the one who brought up the subject of binary texts from unknown sources. You are the one who asked us to imagine that an alien bit stream could be decoded to run as a program on human made computers.

    So I merely ask how you go about deciphering texts that have no analogs to known texts and no evolutionary history.

    As for Hamlet, if you code it in binary, it will be found in the expansion of pi. That’s a joke, son, even though it is true.

    But tell us how extinction fits into your model of protein evolution. Do you deny that extinction happens?

  373. 376 gpuccio December 31, 2010 at 4:11 pm

    Petrushka:

    “In the English alphabet?”

    Well, it could well be a compiled C++ program, in binary form, not the source code.

  374. 377 gpuccio December 31, 2010 at 4:15 pm

    Petrushka:

    “But tell us how extinction fits into your model of protein evolution. Do you deny that extinction happens?”

    No. Why should I? I have quoted myself many times the extinction of the Ediacara beings as an example of probable failures of the designer. I suppose that, if Ediacara beings used some specific protein domains (which is reasonable), they could have been lost with their extinction.

  375. 378 Petrushka December 31, 2010 at 4:20 pm

    And so? I supposed you had understood the basic point that the ID procedure can detect design with no false positives, but with a lot of false negatives.
    +++++++++++++++++++++++++++++++++++

    But that claim would be a simple falsehood.

    The history of ID for the past 200 years is littered with false positives, claims of missing links and no transitional fossils.

    You seek refuge in an arena having no fossils. Nearly every coding sequence available is part of a living thing. There is virtually no ancient DNA, despite the claims of Ken Ham and associates.

    But the lack of fossils does not change the way the world works. Extinction still happens, and history gets erased over time. Once viable things get replaced.

  376. 379 Petrushka December 31, 2010 at 4:23 pm

    No. Why should I? I have quoted myself many times the extinction of the Ediacara beings as an example of probable failures of the designer.

    +++++++++++++++++++++++++++

    I really doubt that you are so stupid that you haven’t realized that we are talking about the extinction of coding sequences.

    As for the Cambrian et al, I really have trouble imagining you as so monumentally dishonest as to bring up the fossil record as evidence for ID, having dismissed it in the case of the mammalian middle ear.

  377. 380 gpuccio December 31, 2010 at 4:39 pm

    Mathgrrl:

    “however, you have defined dFSCI as requiring intelligent input (no “chance” or “necessity”). That contradicts your original claim that dFSCI is an objective measurement that can be used to show that intelligent input is required when that is not known beforehand.”

    Hey, I have never said such a thing.

    Let’s try again.

    dFSCI is defined as present if no chance and necessity can explain the output we observe.

    That is not the same as saying that it “requires intelligent input”. Why are you saying that?

    As far as we know, outputs manifesting dFSCI could simply not exist. It is simple: I define a property (an output with formal characteristics which cannot be explained by chance and necessity). Then I look for that property in the world, and I may not find it anywhere. There is no reasoning here about “requiring an intelligent input”.

    The association between dFSCI and intelligent input is empirical. It is neither logical, nor deriving from my definition.

    We just “observe” that dFSCI exists in human artifacts. That’s why we conclude that it is a good marker of design. From experience. Not from logic. Not from definition.

    I don’t understand why this simple point seems so difficult for all of you.

    That is also the reason why the definition of dFSCI is not circular. In the definition there is no mention at all of intelligent input. It is only the observation of that property in human artifacts, and the obvious connection in our experience between design, conscious representations and intelligent outputs exhibiting dFSCI (such as language and machines), that empirically qualifies dFSCI as a candidate marker of design.

    Is that clear?

  378. 381 gpuccio December 31, 2010 at 4:41 pm

    Mathgrrl:

    “That contradicts your original claim that dFSCI is an objective measurement that can be used to show that intelligent input is required when that is not known beforehand.”

    No. My original claim is always the same.

    Just a comment on the word “objective”. The measurement of dFSCI is objective, but as many times specified it is relative to some specifically defined function. That is an important point.

  379. 382 MathGrrl December 31, 2010 at 4:49 pm

    gpuccio,

    dFSCI is defined as present if no chance and necessity can explain the output we observe.

    That is not the same as saying that it “requires intelligent input”. Why are you saying that?

    Because over on UD, you and others define intelligent design as the complement of necessity and chance. Do you disagree with this definition?

    Let me try to explain the flaw in your concept of dFSCI another way. When you make a measurement of a system and state “This system exhibits n bits of dFSCI.” where n is greater than 150, you are by the definition you provided asserting that design was required for the system to exist. Hidden within that assertion is an implicit assumption that the complexity of the system did not arise from mechanisms such as those that are part of modern evolutionary theory.

    You aren’t measuring anything to arrive at that result, you are simply stating the conclusion that is assumed by your definition.

    Unless you can eliminate all possible “chance” and “necessity” mechanisms, you cannot claim that dFSCI has been measured.

  380. 383 gpuccio December 31, 2010 at 4:59 pm

    Mathgrrl:

    “Your definition means that dFSCI cannot be used as a metric to identify the intervention of an intelligent actor. Consider the case where you have determined, through some means, that more than 150 bits of dFSCI are present in a particular artifact (biological, digital, or otherwise). New research then shows that there is a viable pathway using known evolutionary mechanisms that results in that artifact. Suddenly the dFSCI drops from over 150 bits to zero.

    dFSCI isn’t an objective indicator of intelligence, it’s an indicator that one is ignorant of how an artifact arose. Unless you can eliminate every possible “necessity” mechanism, you cannot claim that any particular system exhibits dFSCI. That makes it useless as a metric.”

    dFSCI is a very useful metric. It can detect design in human artifacts without any false positives. Again, that is a fact.

    It is not an indicator that we are ignorant of how an artifact arose. We have huge empirical evidence that artifacts exhibiting dFSCI never arise without the intervention of an intelligent agent. If huge empirical evidence, never falsified by any observed example, is not a scientifically valid argument, I really don’t know what is.

    The argument that some completely unknown, and completely unimaginable, necessity mechanism could perhaps in principle explain dFSCI is wrong, unscientific, and only a pretext. Nobody would deny that Hamlet is the product of design only because we cannot know whether some algorithm leading to Hamlet through necessity may exist. That’s not scientific reasoning, but false logic.

    The simple truth is that complex pseudo-random sequences bearing a recognizable function are always the product of the setting of configurable switches by an intelligent agent, with the purpose of realizing an objective implementation of a conscious, purposeful representation. That is always true, in all human artifacts. No pseudo artifact satisfies the requirements of dFSCI. A lot of true human artifacts do.

    The sequence for myoglobin is functional only because it ensures a specific folding and active site, with very specific biochemical properties. The sequence of ATP synthetase implements a completely different fold, plan and function. To state that some common necessity mechanism could originate both (and the thousands of other specific folds in the proteome) is not science: it is complete folly.

    The functional information in basic protein folds must be explained. If your proposed mechanism of neo-darwinian evolution cannot explain it (and it cannot), then no other reasonable necessity mechanism has any hope of doing so, as far as we can cognitively assess.

    Design can. Conscious beings can do those things, and do them. Humans do create new dFSCI all the time. They create language, they create machines, they create codes. Any being with similar basic properties (consciousness, intelligence, purpose, the possibility to interact with matter) can certainly do the same.

    Nothing else in the universe, as far as we know, can.

  381. 384 MathGrrl December 31, 2010 at 5:00 pm

    gpuccio,

    “That contradicts your original claim that dFSCI is an objective measurement that can be used to show that intelligent input is required when that is not known beforehand.”

    No. My original claim is always the same.

    You initially claimed that dFSCI could be measured and that values of greater than 150 bits always indicated intelligent design. Now you are defining dFSCI as only being calculable for those aspects of a system that do not arise from “chance” or “necessity”. Elsewhere (UD) the complement of “chance” and “necessity” has been defined as intelligent design, in the context of ID. This means that dFSCI indicates intelligent design by definition, not because of any correspondence to empirical data.

    Whether or not the confusion arose from you changing definitions or my misunderstanding you, the fact remains that dFSCI as clarified by you over the course of this thread cannot be used to determine whether or not intelligent design is present for systems where its presence is unknown. In order to calculate dFSCI by your definition we must know that neither “chance” nor “necessity” resulted in the system we’re measuring.

    dFSCI is, therefore, useless as a metric for identifying intelligent design.

  382. 385 MathGrrl December 31, 2010 at 5:02 pm

    gpuccio,

    dFSCI is a very useful metric. It can detect design in human artifacts without any false positives. Again, that is a fact.

    You’ve made this claim repeatedly. Please prove it. Show how to calculate dFSCI for some human artifact, in detail.

  383. 386 gpuccio December 31, 2010 at 5:03 pm

    Mathgrrl:

    “you are by the definition you provided asserting that design was required for the system to exist.”

    No. I am asserting that from the empirical observation that we find dFSCI only in designed things. It’s completely different.

    “Hidden within that assertion is an implicit assumption that the complexity of the system did not arise from mechanisms such as those that are part of modern evolutionary theory.”

    It is not an implicit assumption: it is an explicit result of a detailed analysis of the proposed mechanism, of its intrinsic logic, and of the empirical evidence in its favor (or rather of its absence).

  384. 387 MathGrrl December 31, 2010 at 5:04 pm

    gpuccio,

    We have huge empirical evidence that artifacts exhibiting dFSCI never arise without the intervention of an intelligent agent.

    You have defined dFSCI as only being generated by intelligent design (the complement of “chance” and “necessity”). Of course we’ve never seen it arise without intervention by an intelligent agent — that’s how it’s defined to arise.

    Can you honestly not see the circularity in your argument?

  385. 388 MathGrrl December 31, 2010 at 5:07 pm

    gpuccio,

    “you are by the definition you provided asserting that design was required for the system to exist.”

    No. I am asserting that from the empirical observation that we find dFSCI only in designed things. It’s completely different.

    No, this has nothing to do with empirical observations. You have defined dFSCI as being generated only by intelligent design.

  386. 389 gpuccio December 31, 2010 at 5:07 pm

    Mathgrrl:

    “Whether or not the confusion arose from you changing definitions or my misunderstanding you, the fact remains that dFSCI as clarified by you over the course of this thread cannot be used to determine whether or not intelligent design is present for systems where its presence is unknown. In order to calculate dFSCI by your definition we must know that neither “chance” nor “necessity” resulted in the system we’re measuring.”

    We must know that the variation is beyond any realistic probabilistic resources, and that no necessity mechanism is known, or even credibly imaginable, that can, alone or in association with RV, explain that output.

    That is very simple. I have never requested a logical proof that no necessity mechanism will ever be able to explain that output. Such a concept belongs to mathematics or logic, certainly not to the empirical sciences.

    My definition only requires that no known algorithm can explain that output, alone or in association with a credible RV mechanism.

  387. 390 gpuccio December 31, 2010 at 5:11 pm

    Mathgrrl:

    “No, this has nothing to do with empirical observations. You have defined dFSCI as being generated only by intelligent design.”

    No. I have never said that. I have defined dFSCI as a formal property, which can be present or not, which can exist or not. A priori, even intelligent agents might not be able to generate dFSCI, and it could simply not exist (although certainly definable).

    But that is not the case. Intelligent agents do produce dFSCI all the time. That is a fact. That is an empirical observation. So much so that I have no idea of how they do that, although I am pretty sure that having conscious representations and cognition of meaning and purpose is necessary to achieve that result.

    But that is only an interpretation. That humans do produce dFSCI all the time is only an observed fact.

  388. 391 gpuccio December 31, 2010 at 5:13 pm

    Mathgrrl:

    “You have defined dFSCI as only being generated by intelligent design (the complement of “chance” and “necessity”). ”

    I have never used that phrase. Others at UD do that. Not me.

    My definition of dFSCI is purely empirical. It entails no general theory of agency. The only facts used are: conscious intelligent agents do exist (observed fact); conscious intelligent agents do produce dFSCI (observed facts). You are putting in my mouth words and concepts that I have never used.

  389. 392 gpuccio December 31, 2010 at 5:15 pm

    Mathgrrl:

    “Can you honestly not see the circularity in your argument?”

    The only way you can show circularity is by making me say things I have never said.

  390. 393 Petrushka December 31, 2010 at 7:53 pm

    We must know that the variation is beyond any realistic probabilistic resources, and that no necessity mechanism is known, or even credibly imaginable, that can, alone or in association with RV, explain that output.

    +++++++++++++++++++++++

    Let’s go back to the Basque language. Do you have an objective way of determining whether this language is the result of a chain of intermediate languages, or whether it is completely isolated from other languages?

    And even where we know the history of a language (because the written intermediates still exist) why aren’t the intermediates still in use?

    Are you really arguing that coding sequences that no longer exist never existed?

    That’s really the test, whether you have an objective way of determining isolation.

  391. 394 MathGrrl January 1, 2011 at 4:12 pm

    gpuccio,

    “Can you honestly not see the circularity in your argument?”

    The only way you can show circularity is by making me say things I have never said.

    But you have said them, just not explicitly. By defining dFSCI to exclude “chance” and “necessity” you have only some form of agency remaining.

    I know how much ID proponents like analogies, so perhaps this one will make the point clear. Let’s say I want to define a metric called digital elephantine specially created information, dESCI for short. I define dESCI as the result of actions of mammals and to exclude results that could arise from the actions of mammals that have fewer or more than four knees. I then go on to measure dESCI and am shocked (Shocked!) to find that it is always associated with elephants! Clearly dESCI is a robust metric for detecting elephant activity. After all, I never mentioned elephants explicitly in my definition.

    You have exactly the same problem. Unless you have an alternative other than agency once you exclude “chance” and “necessity”, the concept you are referring to is agency, whether you use the word or not.

    I note in passing that if you do manage to come up with an alternative to agency, you will have soundly refuted Dembski’s Explanatory Filter.

  392. 395 Mark Frank January 1, 2011 at 4:59 pm

    I just tried to access this discussion with IE and performance is again getting very slow (it is OK on Chrome). So I have created yet another thread to continue the discussion – but changed the title to recognise Gpuccio’s heroic perseverance and good humour in the face of constant criticism.


  1. 1 The Gpuccio thread (cont) « In Moderation Trackback on January 1, 2011 at 6:10 pm
