The Gpuccio thread (cont)

This is a continuation of the ongoing discussion between Gpuccio and three or four opponents. My congratulations to Gpuccio for having the courage and perseverance to continue it. My congratulations to the others for being right. 🙂


And I want to kick off this continuation by picking up on the circularity of dFSCI being defined as the product of ID. In this comment yesterday by Mathgrrl we had this sequence:

Mathgrrl:

dFSCI is defined as present if no chance and necessity can explain the output we observe.

Gpuccio:

That is not the same as saying that it “requires intelligent input”. Why are you saying that?

Mathgrrl:

Because over on UD, you and others define intelligent design as the complement of necessity and chance. Do you disagree with this definition?

Later you wrote:

I have never used that phrase. Others at UD do that. Not me.

You may not define design as the complement of necessity and chance but do you actually believe there is another option – a fourth possibility as it were? 

137 thoughts on “The Gpuccio thread (cont)”

  1. That is not the same as saying that it “requires intelligent input”. Why are you saying that?

    +++++++++++++++++++++++++

    The process of design has a lot of attributes in our experience: representing conscious cognitions and purposes, inputting those representations into matter through what Abel calls “configurable switches”, and so on.

    Why no intermediaries (cont)

    we know empirically that dFSCI is always the product of conscious intelligent beings.

    Why no intermediaries (cont)

    dFSCI is an informational property. It has nothing to do with life. It is found in many non living things (eg a computer program). It is just empirically found to be the product of conscious intelligent beings.

    Why no intermediaries (cont)

  2. gpuccio: It’s the non necessity/non compressibility part.

    Zachriel: Ah, so by noncompressible, you mean non-necessity mechanism.

    So now we have this:

    1. The sequence must have a recognizable function.
    2. The sequence must not be due to known necessity mechanisms.
    3. We take the -log2 of the ratio of sequences that exhibit the function to the number of possible sequences.

    Is that correct?

  3. Let me give you an example of what it is like to ask you a question.

    ========
    TV Host: Toronto, how much do you weigh?

    Toronto: I find that question to be irrelevant since weight is such an ill-defined concept. While mass would be a better term, it can only be used relative to other masses, and there are an infinite number of them in the universe.
    ====

    I would consider such a response to the TV host to be disrespectful since Toronto, and any other reader, understands what the host is actually asking.

    gpuccio: That has nothing to do with the concepts of “life” (a concept that I have personally never used in my arguments, because it is too vague and ill defined).

    Let’s use the current physical state of gpuccio then as an example of life.

    Since life, as defined by the above, is considered too complex by ID to have arisen without an intelligent designer, could the intelligent designer have been alive before designing life?

  4. Zachriel:

    “So now we have this:

    1. The sequence must have a recognizable function.
    2. The sequence must not be due to known necessity mechanisms.
    3. We take the -log2 of the ratio of sequences that exhibit the function to the number of possible sequences.

    Is that correct?”

    Yes.
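
    Just to make step 3 concrete, here is a minimal sketch of that arithmetic in Python (the counts are placeholders; as the rest of the thread makes clear, estimating the number of functional sequences is the hard part):

        import math

        def functional_information(n_functional, n_possible):
            """-log2 of the fraction of sequences exhibiting the function, in bits."""
            # Written as a difference of logs to avoid underflow for very large spaces.
            return math.log2(n_possible) - math.log2(n_functional)

        # Placeholder counts: 2^100 functional sequences in a space of 2^500 possibilities
        print(functional_information(2**100, 2**500))  # 400.0 bits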

  5. Mark:

    “You may not define design as the complement of necessity and chance but do you actually believe there is another option – a fourth possibility as it were?”

    A very general comment about what you say, and about what Mathgrrl says, and probably many others have tried to argue.

    I am amazed that you on the Darwinist side, intelligent and competent as you are, are so often obviously wrong about some fundamental epistemological principles, especially when you have to counter ID positions and are short of other “arguments”.

    Let’s make it short, if possible.

    I can believe something, or just state something, for many different reasons and in many different ways.

    a) First of all, I can define something as such from the beginning.

    Example:

    I could define dFSCI as anything that is the product of a conscious intelligent being.

    I have never done that.

    I define “design” as the purposeful product of a conscious intelligent being. Not dFSCI.

    b) Alternatively, I can give some definition from which some other statement derives as an inevitable logical consequence. In a sense, then, it can be argued that the derived statement is implicit in my original definition, at least if we accept the principles of logic.

    Example: I could define design as the logical complement of chance and necessity: something is defined as designed if it is not originated by chance or necessity.

    I have never done that. I have defined design as the purposeful product of a conscious intelligent being. There is no reference to chance and necessity in that definition, nor to any logical theory which considers chance, necessity and design as logical, mutually exclusive counterparts. My definition of design is completely independent, and completely empirical.

    Let’s give it in a more complete form, just to help future discussion:

    Some object is designed if we can affirm that a conscious intelligent agent was involved in its origin, and that the conscious intelligent representations of that being (the designer) purposefully contributed to determining the final form of the object.

    c) I can give my independent definitions and then, through empirical observations, conclude that there is some empirical association between them. That’s what empirical science does all the time.

    Example:

    I define design as I have done.

    I define dFSCI as I have done.

    I observe that many known designed things exhibit dFSCI.

    I observe that no known non designed thing exhibits dFSCI.

    Thereby, I “infer” that dFSCI can be a good marker of design, and I build a design detection model based on that inference.

    I apply my detection model to further known cases, and it works perfectly.

    Thereby, I feel confident enough to apply my detection model to the only class of objects which seem to exhibit dFSCI, but whose origin is not known with enough detail and certainty to allow us to affirm or deny the involvement of a designer (biological objects).

    According to the observed results, as dFSCI is found abundantly in many of those objects, I infer a design origin for those biological objects, and build one or more models based on that assumption, and try to detail and test them.

    That is the simple, correct procedure. There is no circularity in it. Circularity is only in your biased, and wrong, interpretations.

  6. Petrushka:

    You quote me as saying:

    “we know empirically that dFSCI is always the product of conscious intelligent beings.”

    Please note the “empirically”.

    And you also quote me as saying:

    “That is not the same as saying that it “requires intelligent input”. Why are you saying that?”

    It is rather obvious that your purpose is to imply a contradiction.

    But there is none.

    I am not saying “in my definition” that dFSCI requires an intelligent input.

    But I am certainly concluding, from empirical data, that it is always the product of conscious intelligent beings.

    There is no contradiction.

  7. Mathgrrl:

    There is a fundamental logical error in your elephant example, and in all your arguments (and those of the others here) about circularity.

    You are misrepresenting my definition of dFSCI.

    I have said many times that dFSCI is a formal property. The reason for that should be obvious, but I will repeat it just the same.

    My definition requires a lot of formal characteristics which must be positively present to affirm dFSCI in an object:

    a) the object must be interpretable as a digital sequence.

    b) an explicit function must be definable and measurable in the object.

    Please note that these two formal requirements have nothing to do with chance or necessity. The first is only a restriction of the field for methodological reasons. But the second is a formal property which is necessary to hypothesize that something is designed.

    Then we have two more requirements:

    c) the functional information (-log2 of the ratio of functional sequences to the search space) must be higher than a conventional threshold, appropriate for the system.

    Now, it is true that the purpose of c) is to eliminate those cases where a random result could acquire some possible function without the intervention of a designer.

    and finally:

    d) No known necessity mechanism must be able to explain that result, either alone or in association with reasonable random mechanisms.

    c) and d) are necessary to detect design: their role is to avoid false negatives.

    Without c), many simple configurations, which can certainly happen in a random system, could be erroneously attributed to conscious intent.

    Without d), many apparently complex configurations, which are indeed the result of a simple algorithm, could be erroneously attributed to conscious intent.

    So, c) and d) are a necessary part of the detection procedure, but they are not, in themselves, a definition of dFSCI. dFSCI is not defined as anything which is not the product of chance and necessity.

    dFSCI is defined as “anything which is digital and functional”, and furthermore is not explained by known necessity mechanisms or by reasonable random events, so that we can safely enough attribute the observed function to a conscious intent.

    So, let’s say that I define as “object involved in some direct elephantine activity” any object which has traces of elephantine DNA on it. That could be reasonable, but I should add the condition that I am sure enough that the elephantine tissue was not added to the object by some mechanism which has nothing to do with direct elephantine activity (such as passive transport by a third party).

    Isn’t that reasonable?
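
    As a purely illustrative sketch, the four requirements can be read as a filter that returns a verdict only when every one of them holds (the inputs below are placeholders that would have to come from the kind of analysis described above):

        def exhibits_dfsci(is_digital, function_defined, functional_bits,
                           threshold_bits, explained_by_known_necessity):
            """Sketch of the a)-d) filter; True means dFSCI is affirmed for the object."""
            if not is_digital:                     # a) must be interpretable as a digital sequence
                return False
            if not function_defined:               # b) an explicit function must be definable and measurable
                return False
            if functional_bits <= threshold_bits:  # c) excludes functions reachable by chance alone
                return False
            if explained_by_known_necessity:       # d) excludes results of known necessity mechanisms
                return False
            return True

    Note that in this sketch c) and d) enter only as filters in the detection procedure, not as part of the definition of the measured quantity itself.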

  8. Mark:

    Thank you anyway for giving me the honor of being in the title. 🙂

    And I appreciate your loyalty to your ideas, and to your “friends”. You are wrong, but you don’t realize it, so you are certainly one of the good guys.

  9. Toronto:

    “Let’s use the current physical state of gpuccio then as an example of life.

    Since life, as defined by the above, is considered too complex by ID to have arisen without an intelligent designer, could the intelligent designer have been alive before designing life?”

    That makes more sense, so I can answer.

    The biological intelligent designer, IMO, is not alive in the same way as what you define “the current physical state of gpuccio”.

    I thought I had included in my tentative model the idea that the designer or designers of life IMO is (are) probably not physical.

    (By the way, can you set a TV interview for me? 🙂 )

  10. Mathgrrl:

    You asked for some example of calculating dFSCI in human artifacts, and I will try to answer.

    As we are interested in digital outputs, I suppose that the best examples remain:

    a) language

    b) computer programs

    Now, in principle the calculation is rather simple. The most difficult part, as we will see, is not establishing that no necessity alternative is available (for human artifacts, that is usually rather obvious).

    The difficult part is always the computation of the target space, and therefore of the functional complexity.

    So, let’s start from computer programs.

    We take some binary program for which we can easily define a single specific function. I will use as an example a sorting program working under Windows, and for which the function can be defined as: being a working Windows executable, which can accept as input lists of words or numbers, and give as output the ordered list according to traditional ordering criteria.

    Please, consider that my example will be hypothetical, because I have not the data to try a real calculation.

    So, let’s say we have a program like that, and that our computer experts tell us that the binary sequence we have is really the best they can conceive: the shortest, optimized program to do what it does, at least in that computer system.

    Let’s say that the program is 500 bits long (here I am really guessing, I have no idea). It is practically not compressible by any known algorithm. And nobody has even the least suspicion that you could create it by some other program shorter than 500 bits. Indeed, when you suggest that possibility, the computer experts look at you as though you were a complete fool.

    Let’s say that your experience of random programs generated in similar computer environments tells you that the best result ever obtained, and reasonably obtainable, for getting a working program in Windows by chance is limited to a small sequence of, say, no more than 20 bits (which, I suppose, would anyway have to be used within some already existing program, because I doubt that a 20-bit sequence can act as a Windows executable. Anyway, for the moment let’s pretend it can).

    Let’s say that, just to be on the safe side, we fix our threshold for this system to 100 bits.

    Now comes the most difficult part. Let’s say that you want to approximate the target space. Now, your computer experts tell you that the program is highly optimized, and that each single bit is important. But let’s say that a few bits (4 or 5) can be changed without completely losing the function (maybe with just a few bugs here and there).

    The possibility remains, obviously, that other configurations of bits of approximately the same length could give the same function, even if your experts have no idea at all of how that could happen.

    But let’s say that, after considering all that is known, you reach the reasonable conclusion that, even if other working programs of the same length do exist, they certainly cannot be many. So, just to be on the safe side, you hypothesize a maximum functional space of 100 bits. (Longer sequences can obviously be functional too, but they would also increase the search space, so we will ignore them in our reasoning).

    That would give you the following result: the program you observe exhibits dFSCI, because it is functional, cannot reasonably be obtained by necessity mechanisms, and has a functional complexity of at least 400 bits.

    Please note that the functional space could always be determined in principle by a trial and error approach, but given the numbers implied that is usually impossible. So, in each case, the target space must be approximated by indirect methods, taking into account all that is known about the problem.

    In the case of protein families, the Durston method gives us a rather simple and reliable way to approximate the target space and to compute functional complexity.

    In the case of language, the procedure is the same, but the evaluation of the size of the target space is more tricky. Language is a flexible tool. The functional efficiency should be evaluated according to the ability of strings to convey a well defined meaning to some specific intelligent audience.

    But even here, it is easy to infer that most human language products do exhibit dFSCI.

    First of all, if we put the threshold at 500 bits (Dembski’s UPB, which should be the safest option of all), then with a minimal English alphabet of 30 symbols, a minimum search space of 500 bits corresponds to only 102 characters, which is not very much. So, we have difficulties in determining the target space, but let’s reason with a large output.

    I have taken the example of Hamlet many times, just to recognize the historical importance of Shakespeare for this kind of discussion.

    I have a result of 170659 characters for it. In base 30, that would give 837405 bits, if I am not wrong (I am not mathboy, after all).

    So, unless you believe that at least 2^836905 sequences of characters of that length could well convey the whole meaning of Hamlet to a reader (which could be tested by asking 100 well-chosen questions about the play), which I don’t think is a reasonable assumption at all, we can conclude that Hamlet does exhibit dFSCI.
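
    The arithmetic behind both of these hypothetical examples can be written out in a few lines (all the numbers are the guesses from the text above, not measurements):

        import math

        # Hypothetical sorting program: 500-bit sequence, target space generously bounded at 2^100
        program_length_bits = 500
        target_space_bits = 100
        print(program_length_bits - target_space_bits)   # at least 400 bits of functional complexity

        # A 500-bit threshold over a 30-symbol alphabet corresponds to about 102 characters
        print(math.ceil(500 / math.log2(30)))            # 102

        # Hamlet: 170659 characters over a 30-symbol alphabet
        print(round(170659 * math.log2(30)))             # 837405 bits of search space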

  11. Mark:

    About your question:

    “You may not define design as the complement of necessity and chance but do you actually believe there is another option – a fourth possibility as it were? ”

    here is a more explicit answer.

    There are other options, in principle.

    1) First of all, as I have said, there is no special a priori reason why design should be able to generate dFSCI. It is, but that’s not a necessary a priori conclusion.

    If design were not able to generate dFSCI, two different scenarios could be observable:

    a) dFSCI, while definable, would not be observed anywhere in the world.

    b) dFSCI would be observed, but not in connection with design. That would imply that some events can be the result of neither chance nor necessity nor design (see later).

    2) The second possibility is that dFSCI can emerge independently from design. It is perfectly possible, in principle, that we may observe events that are:

    a) Not explained by any laws of the universe, nor likely to be explained by new laws in the future, because apparently lacking any regularity

    b) Not describable by any known random distribution

    c) Not connected to design (for instance, completely non functional, and generated in environments where there is no reason to expect the intervention of a designer).

    Now, I know as well as yourself that such events are not observed. But that is an empirical consideration. There is no reason in principle for such a kind of events to be “impossible”.

    So, I don’t really think there is another option. Events, as we know them, are either the result of necessity, or of randomness, or of design, or of some mix of the three.

    That’s probably why Dembski or others may state that design is “the complement of chance and necessity”. I have never said that, but it is not an unreasonable thing to say, provided that it is clear that it is only a consideration based on empirical experience, and not a logical necessity or an “a priori” definition.


  12. Gpuccio

    Looking at your (2) above.

    a) Not explained by any laws of the universe, nor likely to be explained by new laws in the future, because apparently lacking any regularity

    b) Not describable by any known random distribution

    c) Not connected to design (for instance, completely non functional, and generated in environments where there is no reason to expect the intervention of a designer

    First – (b) – needs a little clarification. No real observed distribution is ever perfectly described by a known "random" distribution (by which I guess you mean something like a normal model). But for any observed distribution it is possible to devise a known “random” distribution model that describes it as closely as you wish.  So I interpret (b) as something like:

    Cannot easily be described by a known random distribution

    Given this, there are many, many real-world examples of (2). Any place we come across a complex distribution of objects will do, e.g. the incidence of daily maximum temperature anomalies (the amount above or below the seasonal average) at a given location.

    This meets the three criteria:

    (1) There is no regularity to the pattern (remember we are talking about the difference from seasonal average)

    (2) There is no easy random model for describing them

    (3) Certainly not connected to design

    If you dispute the weather example, there are many, many other examples – almost anything to do with fluid dynamics will do.

    I had always assumed you would put such phenomena down to chance – but now you are creating a separate category for them?
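
    Purely as an illustration of that clarified version of (b), with made-up numbers standing in for anomaly data: any finite sample is matched exactly by its own empirical distribution, so the only meaningful question is whether a simple, easily described model fits, not whether some “random” model exists at all.

        import numpy as np

        rng = np.random.default_rng(0)
        # Made-up stand-in for daily maximum temperature anomalies (degrees C)
        anomalies = 3.0 * rng.standard_t(df=5, size=365)

        # A "known random distribution" that describes this sample as closely as you wish:
        # its own empirical CDF, which matches the data by construction.
        def empirical_cdf(sample, x):
            return np.mean(sample[:, None] <= x, axis=0)

        grid = np.linspace(anomalies.min(), anomalies.max(), 5)
        print(empirical_cdf(anomalies, grid))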

  13. Mark:

    Interesting point. I think what I meant is that in principle some events could be so unpredictable that not even a probabilistic description can give any advantage vs the mere recording of the event.

    IOWs, even for daily maximum temperature anomalies I suppose that, given a large number of data, you can in some way reduce the uncertainty of a prediction. I was thinking that, in principle, some set of events could be so unpredictable as not to allow even that.

    I am not completely sure of this concept, and I would appreciate your input (as much as others’). Anyway, it is of little importance in the end, at least until such data are observed.

    I suppose that my point was that even a probabilistic description of some set of events is in some way a law, although not an exact one. In theory, events which obey no law are possible. Laws are not a logical given.

    I would also like to specify that designed objects do obey basic necessity laws, obviously, and even probabilistic laws, where they can apply. It is their special complex functional form which cannot be explained by either, but only by the input of a conscious representation implying meaning and purpose.

    IOWs, to put it as Abel does, the configurable switches of the system can take indifferently one position or another, according to the laws of physics, and there is (usually) an equal probability for both events. So, any final configuration of switches, provided that it is of the pseudo-random type, will not violate anything.

    But the specific complex functional configuration which conveys a complex function or meaning can be achieved only if a conscious representation in a conscious agent guides and determines the final configuration of the switches, in full respect of natural laws.

  14. Errata corrige:

    I just realized that the following phrase in one of the above posts:

    “c) and d) are necessary to detect design: their role is to avoid false negatives.”

    should be:

    “c) and d) are necessary to detect design: their role is to avoid false positives.”

  15. gpuccio: The biological intelligent designer, IMO, is not alive in the same way as what you define “the current physical state of gpuccio”.

    I thought I had included in my tentative model the idea that the designer or designers of life IMO is (are) probably not physical.

    This is what is so frustrating, that you finally answer the question you knew I was asking all along.

    You indicated at one time they could be aliens which is now no longer a valid option since aliens would be “alive” in the sense that gpuccio is.

    Here is a very important question I’d like not to have to rephrase 6 or 7 times.

    Is it possible that the designer could have been something other than the Christian god?

    (By the way, can you set a TV interview for me? 🙂 )

    Would you be evasive? 🙂

  16. d) No known necessity mechanism must be able to explain that result, either alone or on association to reasonable random mechanisms.

    ++++++++++++++++++++++

    That seems to be the nub.

  17. In order for you to assert that incremental evolution cannot account for coding sequences, you must assert that the landscape of function is so sparse that no incremental paths exist to the current sequences.

    Try answering my question about the Basque language. Can you demonstrate that no incremental path is possible connecting this language with other human languages, bearing in mind that each intermediate must be a fully functional language?

  18. Zachriel: So now we have this:

    1. The sequence must have a recognizable function.
    2. The sequence must not be due to known necessity mechanisms.
    3. We take the -log2 of the ratio of sequences that exhibit the function to the number of possible sequences.

    Is that correct?

    gpuccio: Yes.

    So then, the argument is of the form, given a complex functional structure, No known necessity mechanism implies Design.

  19. From a post at UD, and maybe pertinent to some of our past discussions here:

    “Is gene duplication a viable explanation for the origination of biological information and complexity?”

    Published in “Complexity”

    Joseph Esfandiar Hannon Bozorgmehr

    Article first published online: 22 DEC 2010

    DOI: 10.1002/cplx.20365

    Copyright © 2010 Wiley Periodicals, Inc.

    Abstract:

    “All life depends on the biological information encoded in DNA with which to synthesize and regulate various peptide sequences required by an organism’s cells. Hence, an evolutionary model accounting for the diversity of life needs to demonstrate how novel exonic regions that code for distinctly different functions can emerge. Natural selection tends to conserve the basic functionality, sequence, and size of genes and, although beneficial and adaptive changes are possible, these serve only to improve or adjust the existing type. However, gene duplication allows for a respite in selection and so can provide a molecular substrate for the development of biochemical innovation. Reference is made here to several well-known examples of gene duplication, and the major means of resulting evolutionary divergence, to examine the plausibility of this assumption. The totality of the evidence reveals that, although duplication can and does facilitate important adaptations by tinkering with existing compounds, molecular evolution is nonetheless constrained in each and every case. Therefore, although the process of gene duplication and subsequent random mutation has certainly contributed to the size and diversity of the genome, it is alone insufficient in explaining the origination of the highly complex information pertinent to the essential functioning of living organisms. “

  20. Zachriel:

    “So then, the argument is of the form, given a complex functional structure, No known necessity mechanism implies Design.”

    No. It is of the form:

    “Given a complex functional structure, No known necessity mechanism makes design the best explanatory inference.”

  21. I suspect you will attempt to attack the Basque language question by asserting that language is a product of intelligent agents, but that is irrelevant to the question I’m asking.

    I’m asking about the sparseness of the landscape and whether the different languages of the world can be linked by small, incremental changes.

    I’m wondering how you determine, from current configurations, that no incremental pathways exist.

  22. gpuccio,

    c) the functional information (-log2 of the ratio of functional sequences to the search space) must be higher than a conventional threshold, appropriate for the system.

    Now, it is true that the purpose of c) is to eliminate those cases where a random result could acquire some possible function without the intervention of a designer.

    and finally:

    d) No known necessity mechanism must be able to explain that result, either alone or in association with reasonable random mechanisms.

    c) and d) are necessary to detect design: their role is to avoid false negatives.

    . . .

    dFSCI is defined as “anything which is digital and functional”, and furthermore is not explained by known necessity mechanisms or by reasonable random events, so that we can safely enough attribute the observed function to a conscious intent.

    This still makes it clear that you are defining dFSCI to be the product of “conscious intent”. Unless you can identify another alternative to “chance” and “necessity”, you are assuming your conclusion in your definition and my previous points about circularity stand.

    It’s worth repeating my note from the previous thread: If you do come up with an alternative, you will have destroyed the argument behind Dembski’s Explanatory Filter.

  23. A bit of peer review:

    +++++++++++++++++++

    The manuscript is not acceptable in its current form. Perspective articles are in-depth, balanced articles that critically evaluate alternative hypotheses. While they may advocate a certain position, they do so after carefully and critically weighing the evidence for and against that position. By contrast, your manuscript, in its current form, reads much more like an advocacy piece and in many places is glib rather than analytical. Consequently, I regret to inform you that I cannot accept your manuscript in its present form.

    However, I would be willing to entertain the submission of a more in-depth and nuanced analysis of the issues you currently address. Such a manuscript would need to address a number of points not currently addressed, including, but not limited to, the following:

    1. Avoid misleading statements such as claiming that a new consensus has emerged about the fate of duplicate genes. While a number of people may have recently argued that neofunctionalization is not as common as once believed, there is nothing like a consensus on this issue. There are still plenty of evolutionary biologists who believe neofunctionalization may be relatively common.

    2. Your evolutionary arguments will need to be much more carefully considered. As an example, consider this passage in your manuscript:

    “Since gene duplicates are functionally redundant they can and do serve as useful spare parts, backing up for any harm inflicted at paralogous sites because of deleterious mutations. As Magdalena Skipper writes in a review for Nature Genetics [2], duplicates tend to compensate rather than innovate. Surveys of the human [3], yeast [4] and arabidopsis [5] genomes have revealed an astonishingly pervasive level of genetic redundancy and functional compensation. Moreover, a team of Israeli researchers has even found evidence of response mechanisms whereby a redundant copy is up-regulated in the event of damage inflicted at a sister site. In this manner, the genome is deemed to be “robust” on account of this evolved protection and insurance policy against natural corruption and decay.

    Therefore, Ohno’s assumption that a gene duplicate would be initially free from selective pressure, until it chanced upon a new function, appears to be fundamentally flawed. Moreover, in the absence of any purifying selection whatsoever, degeneration and disintegration are almost guaranteed. Thus, a measure of purifying/stabilising selection seems necessary, and any evolutionary divergence would proceed under a “relaxed” regime rather than none at all.”

    While it may be true that initially duplicate genes are functionally redundant, this does not necessarily imply that both copies are evolutionarily constrained. If both copies can perform the same ancestral function, one may be free to diverge while the other retains the ancestral function. Demonstration of functional redundancy does not imply this is not the case. Of course, there may be situations in which both copies are needed to perform the ancestral function (e.g. as when a higher level of protein production is needed), but this needn’t always be the case. So, in no way does the kind of functional redundancy you cite necessarily invalidate Ohno’s assumption that gene duplicates may initially be free from selective pressure.

    There are a number of other examples of this type where your arguments are not as rigorous as they should be. I suggest that if you prepare a revised manuscript for submission, you get an evolutionary biologist to review your arguments before submission.

    3. You will need to be much more explicit in defining and explaining what you mean by new function. Consider the following passage, for example:

    “But it is still very unlikely that a gene duplicate would digress from its original functionality. Multigene families, such as hemoglobin [8] (HB) or fucotransferase [9] (FUT), attest to the limitations of any neofunctionalization and indicate that any variability is actually quite limited – in the case of the latter, fucose is transferred on a greater diversity of glycans among the nine paralogs.”

    It may often be the case that duplicate enzyme-coding genes retain the same ancestral catalytic function and simply apply that function to different substrates. However, it is incorrect to imply that this does not lead to important evolutionary novelty. From my own knowledge of plants, I can think of several cases where this has happened:

    i. Evolution of the flavonoid pathway. Most genes for enzymes in the flavonoid pathway have arisen from gene duplication and modification of one copy to utilize a different set of substrates. Substantial evidence indicates that these “minor” modifications led to the production of novel plant secondary compounds (various flavonoids) that were extremely important in providing protection from UV radiation and thus allowing plants to colonize terrestrial habitats. This is a “minor” modification that led to enormous evolutionary novelty.

    ii. Duplication and neofunctionalization of MADS box transcription factors led to the evolution of flowers, which have been a key trait allowing the diversification of the angiosperms and their dominance over other types of plants.

    iii. Duplication and modification of one copy to use novel substrates has been instrumental in the evolution of several novel defensive pathways (e.g. glucosinolates) that have allowed plants to escape, at least for a while, their major natural enemies.

    The point of these examples is to indicate that while the catalytic changes neofunctionalization may bring about may seem “minor”, they can have profound downstream consequences for the generation of evolutionary novelty. You do not discuss this issue at all, and in not doing so you leave the impression that duplication and neofunctionalization is seldom responsible for evolutionary novelty. A resubmission will need to address this issue.

    I realize that you may not be pleased with my decision. Please recognize, however, that I am rejecting your current manuscript because it does not meet the guidelines for a Perspective article. As I have said, we would welcome submission of a balanced, nuanced, and rigorous manuscript on this topic.

  24. Toronto:

    “You indicated at one time they could be aliens which is now no longer a valid option since aliens would be “alive” in the sense that gpuccio is.”

    It’s you who create confusion, not I who evade.

    Now, for instance, you are creating confusion between what I have clearly indicated as a possible model, compatible with ID, but which I personally don’t believe in (aliens), and what I have indicated as my personal tentative model (I thought I had included in my tentative model the idea that the designer or designers of life IMO is (are) probably not physical).

    Please, be more patient and try to understand what I say. If a possibility is not included in my personal tentative model, that does not mean that it “is now no longer a valid option”. My personal model is not the obligatory standard of truth.

    I had immediately agreed with you that the aliens scenario implies the problem of the aliens’ origin, specifying however that it remains a valid scenario, although not a final one. Personally, however, I have never sponsored it.

    Let’s go to your new question:

    “Is it possible that the designer could have been something other than the Christian god?”

    Sure. Some examples. First of all it could be some different form of God, not corresponding to the Christian view.

    But maybe you mean “just a god”? (See, I am trying to anticipate you, and rephrase your questions in advance, just to gain time.) Well, I am afraid you should be more specific about what you mean by “a god”.

    Do you mean that the designer needs to be transcendent to be able to design life? I don’t believe that. No, any immanent being, even a non-physical one, with the necessary properties, could do that.

    So, another possibility is: one or more non physical conscious intelligent beings, not divine in any common sense of the word.

    And please, note that I am not saying that the designer “cannot” be the Christian God. That is a valid possibility, too.

    You know, my cognitive education has been strongly (and very positively) influenced by a short Gyro Gearloose story by Carl Barks, where the final message is more or less that it isn’t the right answers which count, if one does not know the right questions.

  25. Mathgrrl:

    “This still makes it clear that you are defining dFSCI to be the product of “conscious intent”. Unless you can identify another alternative to “chance” and “necessity”, you are assuming your conclusion in your definition and my previous points about circularity stand.

    It’s worth repeating my note from the previous thread: If you do come up with an alternative, you will have destroyed the argument behind Dembski’s Explanatory Filter.”

    I fully disagree with these conclusions, which I have addressed in extreme detail in all my last posts. You are free to think so, but your statements are completely unwarranted.

    If you cannot see that I am not defining dFSCI as “the product of conscious intent” (I have never done anything like that), how can I discuss? Please, say explicitly when and how I have done what you say.

    And it is completely false that my argument would have “destroyed the argument behind Dembski’s Explanatory Filter”. Please, read again what I wrote to Mark:

    “Now, I know as well as yourself that such events are not observed. But that is an empirical consideration. There is no reason in principle for such a kind of events to be “impossible”.

    So, I don’t really think there is another option. Events, as we know them, are either the result of necessity, or of randomness, or of design, or of some mix of the three.

    That’s probably why Dembski or others may state that design is “the complement of chance and necessity”. I have never said that, but it is not an unreasonable thing to say, provided that it is clear that it is only a consideration based on empirical experience, and not a logical necessity or an “a priori” definition.”

    It’s really frustrating how all of you continue to mistake empirical considerations for logical implications.

    Dembski’s explanatory filter is obviously an empirical diagnostic tool, based on observations, and not a theorem.

  26. Petrushka:

    You are too quick for me. I have not even read the paper yet, I was just proposing it to your attention.

    As soon as I read it, I will comment on it and on the (expected) criticisms.

  27. Petrushka:

    “I suspect you will attempt to attack the Basque language question by asserting that language is a product of intelligent agents, but that is irrelevant to the question I’m asking.

    I’m asking about the sparseness of the landscape and whether the different languages of the world can be linked by small, incremental changes.

    I’m wondering how you determine, from current configurations, that no incremental pathways exist.”

    Frankly, I would agree with you: I would attack the question that way.

    I don’t know, I am not an expert in European languages and their history. I have no idea of how “incremental” the changes are, or of how many traces of them we have. The fact remains that it is an example of “evolution” of a language system used by conscious intelligent beings. I don’t believe it is pertinent to our discussion.

    I think we witness every day important changes in our language which are not incremental at all.

  28. http://www.pnas.org/content/106/37/15690.full.pdf+html

    The classical view of the space of protein structures is that it is populated by a discrete set of protein folds. For proteins up to 200 residues long, by using structural alignments and building upon ideas of the completeness and continuity of structure space, we show that nearly any structure is significantly related to any other using a transitive set of no more than 7 intermediate structurally related proteins.

  29. Gpuccio

    Interesting point. I think what I meant is that in principle some events could be so unpredictable that not even a probabilistic description can give any advantage vs the mere recording of the event.

    IOWs, even for daily maximum temperature anomalies I suppose that, given a large number of data, you can in some way reduce the uncertainty of a prediction. I was thinking that, in principle, some set of events could be so unpredictable as not to allow even that.

    It seems to me that what you are looking for is something that is truly random as opposed to pseudorandom. Setting aside whether this is logically possible – do you really want to say that something which is truly random has dFSCI?

  30. Evolution is a process characterized by change and differential amplification. The substrate is irrelevant. The only question you have posed is the size (and hence probability) of the individual changes.

    I’m asking how you determine the size of the changes in the absence of the history.

  31. Petrushka:

    Interesting paper, I will read it more carefully.

    As for the wrong implications you will probably draw, I must remark immediately that the paper does not deal in any way with protein function, least of all with naturally selectable intermediaries.

    Just in case…

  32. Petrushka:

    “I’m asking how you determine the size of the changes in the absence of the history.”

    I would be happy to determine the size of the changes in the presence of history.

    Could you provide some change history from your models, please?

  33. Mark:

    “It seems to me that what you are looking for is something that is truly random as opposed to pseudorandom. Setting aside whether this is logically possible – do you really want to say that something which is truly random has dFSCI?”

    No, that was not my point, as you can easily check if you re-read what I wrote.

    dFSCI must be functionally complex. Do you remember?

    Anyway, I am not sure that “random” means the same as you say. In a random system, you can attribute probabilities to events. I was wondering if a system could exist where no probability can be assigned, because the form of events is constantly changing and no mathematical law can describe the system. But again, this is probably a very abstract problem (but important, I believe: the nature of probability is after all, as far as I can understand, a very controversial philosophical issue).

  34. gpuccio,

    If you cannot see that I am not defining dFSCI as “the product of conscious intent” (I have never done anything like that), how can I discuss? Please, say explicitly when and how I have done what you say.

    It is very clear that you define dFSCI as only measurable for artifacts that are not the result of “chance” or “necessity”. It is equally clear that you have not identified any alternative to “chance” and “necessity” other than design. You even concede that explicitly:

    So, I don’t really think there is another option. Events, as we know them, are either the result of necessity, or of randomness, or of design, or of some mix of the three.

    Therefore, by defining dFSCI as only measurable for other than “chance” or “necessity” you are defining it as a measurement of “design” whether or not you use the term “design” explicitly.

    Now, I know as well as yourself that such events are not observed. But that is an empirical consideration. There is no reason in principle for such a kind of events to be “impossible”.

    “Empirical”. You keep using that word. I do not think it means what you think it means. You haven’t shown any observations that led to the deduction of dFSCI. You have defined dFSCI to be a metric that is only applicable when “chance” and “necessity” are eliminated. Again, that assumes your conclusion in your definition. It is not at all surprising that dFSCI is only found when agency is known to have been a cause — you have defined it to be so.

    And it is completely false that my argument would have “destroyed the argument behind Dembski’s Explanatory Filter”.

    Dembski’s Explanatory Filter relies on the fact that the only alternative to “chance” and “necessity” is “design”. If you come up with a fourth alternative to save your concept of dFSCI, you will eliminate that trichotomy and completely undermine Dembski’s argument.

  35. gpuccio,

    Petrushka:

    “I’m asking how you determine the size of the changes in the absence of the history.”

    I would be happy to determine the size of the changes in the presence of history.

    Could you provide some change history from your models, please?

    I’m sure that Petrushka will respond, but I want to piggyback on this post because it relates to an issue that got dropped during the circular definition discussion.

    What Petrushka is saying, I believe, is that you can’t calculate dFSCI unless you know the entire change history behind a particular artifact. You seem to agree with this based on your previous statements to me:

    “Whether or not the confusion arose from you changing definitions or my misunderstanding you, the fact remains that dFSCI as clarified by you over the course of this thread cannot be used to determine whether or not intelligent design is present for systems where its presence is unknown. In order to calculate dFSCI by your definition we must know that neither “chance” nor “necessity” resulted in the system we’re measuring.”

    We must know that the variation is beyond any realistic probabilistic resources, and that no necessity mechanism is known, or even credibly imaginable, that can, alone or in association with RV, explain that output.

    The problem is that you are claiming that dFSCI exists in biological systems without demonstrating that all possible “chance” and “necessity” mechanisms have been eliminated. It’s good that you recognize the need to do so, but you have set yourself a formidable task.

    Where is your evidence that neither “chance” nor “necessity” mechanisms could result in the protein domains you are discussing?

  36. Mathgrrl:

    Excuse me, but your epistemology is rather strange. You say:

    ““Empirical”. You keep using that word. I do not think it means what you think it means. You haven’t shown any observations that led to the deduction of dFSCI.”

    Well, empirical means exactly that: an inference derived from observed facts, and not a logical deduction.

    Observations do not lead to deductions. Observations lead to inferences. We build logical models in the hope of explaining observed facts, and our models can contain internal deductions, or not. But the model is not “deduced” from facts.

    Maybe it is because you are mathgrrl: mathematicians deal with deductions, and not with empirical facts. But empirical science is different.

    You say:

    “You have defined dFSCI to be a metric that is only applicable when “chance” and “necessity” are eliminated.”

    No. I have defined dFSCI to be a metric which allows us to empirically detect design, eliminating those cases with apparent function which could be originated by chance or necessity, and not by design. I have built the dFSCI tool exactly as I would build a diagnostic tool for a disease.

    “Again, that assumes your conclusion in your definition. It is not at all surprising that dFSCI is only found when agency is known to have been a cause — you have defined it to be so.”

    No. I have defined dFSCI so as to avoid false positives, so that if I find it I am relatively sure that the object is designed. The fact that I find it only when agency is implied (at least when we know if it is implied or not) is only the result of the fact that I have defined my diagnostic tool well: it works, and empirically detects design.

    I have the utmost respect for you, and you may be a mathematician without a good understanding of the empirical sciences, but believe me, it is really frustrating to have to repeat so many times concepts which should be very obvious.

  37. MathGrrl:

    “Dembski’s Explanatory Filter relies on the fact that the only alternative to “chance” and “necessity” is “design”. If you come up with a fourth alternative to save your concept of dFSCI, you will eliminate that trichotomy and completely undermine Dembski’s argument.”

    It relies on the observation that it is so. I have come up with logical alternative possibilities just to show that an empirical observation is not a logical necessity or an axiom. But I know, as you do, that those logical possibilities have never been observed.

    What has been empirically observed is a perfectly appropriate foundation for Dembski’s filter.

  38. Could you provide some change history from your models, please?

    +++++++++++++++++++++++++

    Ah, the old no transitional fossils argument. Same god of the gaps, new revival tent.

    Having folded the tents of no transitional fossils, no pathway to blood clotting, and no path to the flagellum.

    I assume that before inventing an invisible, ineffable entity to tweak proteins without stepping outside a nested hierarchy, you or someone in the ID movement actually tested all the possible pathways.

    But I notice when confronted with a real world question regarding the change history of a language, you seem to have no objective mathematical tools with which to determine the plausible size of changes.

    My point would be that having an observable process has advantages over imagined invisible entities that have no attributes or constraints.

    Evolution has constraints. There must be a nested hierarchy of coding sequences, even though synonyms are common, and even though human designers ignore and violate the nesting when designing organisms. (So we know the Designer is not *like us*. The Designer builds things that look like they are the result of descent with small changes.)

    So you are correct that gaps must be addressed and shown to be the plausible result of small, incremental change. A difficult problem, but one that has been accepted from the beginning of modern biology. It’s just a subset of the whole arena of evolutionary biology.

    What you ignore are history and consilience. There are reasons for pursuing one hypothesis over another. One reason is that many lines of evidence from geology through molecular biology point to fairly constant rates of mutation and to common descent with modification.

    Another reason is heuristics. A hypothesis having constraints and entailments suggests research. An imaginary entity having no constraints leads nowhere.

    In the meantime, it is amusing to watch creationist clowns like Bozorgmehr struggle to get flawed review articles past peer review.

    And watch Dembski struggle for decades to understand simple GA programs.

    And watch you attempt to divine the history of an object from its configuration.

  39. MathGrrl:

    “The problem is that you are claiming that dFSCI exists in biological systems without demonstrating that all possible “chance” and “necessity” mechanisms have been eliminated. It’s good that you recognize the need to do so, but you have set yourself a formidable task.

    Where is your evidence that neither “chance” nor “necessity” mechanisms could result in the protein domains you are discussing?”

    Again, darwinists have proposed a model which cannot work without naturally selectable functional intermediaries for everything. It is their burden to show those intermediaries.

    I have never stated that the neo darwinian model is “logically” impossible. Indeed, I have started this discussion by clearly affirming that, if those intermediaries existed, it would work.

    I am stating that the neo-darwinian model is completely unsupported by evidence, because darwinists have never shown those intermediaries, and we have no reason, neither logical nor empirical, to believe that they exist.

    So, I have no formidable task. They have.

    In the absence of any detail which can support the model, the model does not explain anything. The dFSCI we have to explain remains the total dFSCI of the observed molecules, because nobody has offered any necessity mechanism which explains it, in whole or in part.

  40. Petrushka:

    “In the meantime, it is amusing to watch creationist clowns like Bozorgmehr struggle to get flawed review articles past peer review.

    And watch Dembski struggle for decades to understand simple GA programs.

    And watch you attempt to divine the history of an object from its configuration”

    Please, enjoy yourself.

    After all, I do the same with you.

  41. gpuccio,

    “”Empirical”. You keep using that word. I do not think it means what you think it means. You haven’t shown any observations that led to the deduction of dFSCI.”

    Well, empirical means exactly that: an inference derived from observed facts, and not a logical deduction.

    Observations do not lead to deductions. Observations lead to inferences. We build logical models in the hope of explaining observed facts, and our models can contain internal deductions, or not. But the model is not “deduced” from facts.

    Maybe it is because you are mathgrrl: mathematicians deal with deductions, and not with empirical facts. But empirical science is different.

    I’m quite familiar with empirical science, thank you very much. Please do excuse my slightly sloppy phrasing and address the real issue: Where are your observations that led you to your construction of dFSCI?

    “You have defined dFSCI to be a metric that is only applicable when “chance” and “necessity” are eliminated.”

    No. I have defined dFSCI to be a metric which allows us to empirically detect design, eliminating those cases with apparent function which could be originated by chance or necessity, and not by design.

    This is blatantly false. You have made it very clear that you define dFSCI to explicitly exclude results from “necessity” and “chance” and you have utterly failed to provide any alternative other than “design”. By claiming to measure dFSCI in an artifact you are asserting that it is the product of “design” by your very definition.

    You cannot use dFSCI as a means of detecting “design” unless you know beforehand that “design” is present. It is therefore useless as a metric.

  42. gpuccio,

    “The problem is that you are claiming that dFSCI exists in biological systems without demonstrating that all possible “chance” and “necessity” mechanisms have been eliminated. It’s good that you recognize the need to do so, but you have set yourself a formidable task.

    Where is your evidence that neither “chance” nor “necessity” mechanisms could result in the protein domains you are discussing?”

    Again, darwinists have proposed a model which cannot work without naturally selectable functional intermediaries for everything. It is their burden to show those intermediaries.

    I was afraid you were going to respond with something like this. Frankly, I’m disappointed. This is a typical attempt to shift the burden of proof combined with an argument from incredulity. I expected better of you.

    You are making the positive assertion that dFSCI can be measured. You have defined it in such a way that you must eliminate all possible “chance” and “necessity” mechanisms in order to do so. The burden of proof is therefore clearly on you to demonstrate that you have eliminated those mechanisms before you can claim that a particular artifact exhibits dFSCI.

    I gather from your response that you have not yet done so for any real world biological system. If I am incorrect, I would be very interested in seeing the documentation of your method and results.

  43. After all, I do the same with you.

    +++++++++++++++++++++++

    The difference being that your admirers are at UD, whereas I merely attempt to echo the accumulated knowledge and judgement of mainstream science.

    I suppose it’s possible that people like Szostak are ninnies and that you are right.

    Whatever.

    You have planted your feet in a new arena of the gaps argument. If protein evolution were the only line of evidence for incremental evolution, you might have raised serious issues, but I’m afraid you are about 50 years too late for anyone to take gaps arguments seriously.

    Incidentally, in the arena of medical diagnostic tests, do you know of any that do not have false positives?

    Since the Designer does not design like a human being, and you have provided no alternative model for the Designer, I’m wondering how you can be certain you have eliminated false positives.

  44. gpuccio,

    Petrushka: Since the Designer does not design like a human being, and you have provided no alternative model for the Designer, I’m wondering how you can be certain you have eliminated false positives.

    I think the above clearly shows a major hurdle for the ID side to get over, which is, where are you getting your “non-human” viewpoint from?

    How do you determine what non-human design looks like?

    Does it look exactly like human design?

  45. I’m curious how dFSCI differs from the following formulation of the problem:

    If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down.

  46. Zachriel: So then, the argument is of the form, given a complex functional structure, No known necessity mechanism implies Design.

    gpuccio: No. It is of the form:

    “Given a complex functional structure, No known necessity mechanism makes design the best explanatory inference.”

    Infer and imply have the same meaning in this context, though infer is usually the stronger of the two as imply might mean suggest in an everyday context.

    But okay.

    gpuccio: Given a complex functional structure, No known necessity mechanism makes design the best explanatory inference.

    So your conclusion depends on the extent of your ignorance. If we observe a highly complex phenomenon such as the movements of the planets, which have a function in terms of the cycles of nature (day and night, seasons, weather, tides, flooding, planting of crops, etc.), then lacking an explanation (or stubbornly rejecting one accepted by the vast majority of scientists), you will conclude, er, infer design. When you become aware of an underlying mechanism, you will change your mind. It’s just God of the Gaps.

    The proper ‘inference’ when lacking an explanation, is to admit to ignorance. To make a scientific determination, you have to test for entailments of the claim, not merely point to gaps in scientific or technical knowledge.

  47. Gpuccio

     

    I am now a little confused. In your attempts to show that it is not circular to say that dFSCI is designed, you wrote:

     

    2) The second possibility is that dFSCI can emerge independently from design. It is perfectly possible, in principle, that we may observe events that are:

    a) Not explained by any laws of the universe, nor likely to be explained by new laws in the future, because apparently lacking any regularity

    b) Not describable by any known random distribution

    c) Not connected to design (for instance, completely non functional, and generated in environments where there is no reason to expect the intervention of a designer).

    Now, I know as well as yourself that such events are not observed. But that is an empirical consideration. There is no reason in principle for such a kind of events to be “impossible”.

    Later on you write:

    No, that was not my point, as you can easily check if you re-read what I wrote.

    dFSCI must be functionally complex. Do you remember?

    Anyway, I am not sure that random is the same as you say. In a random system, you can attribute probabilities to events. I was wondering if a system could exist where no probability can be assigned, because the form of events is constantly changing and no mathematical law can describe the system. But again, this is probably a very abstract problem (but important, I believe: the nature of probability is after all, as far as I can understand, a very controversial philosophical issue).

    (my emphasis in both cases)

    If you want to show that it is an empirical fact that dFSCI is designed (rather than a matter of definition) then you have to somehow describe what it would be like to have dFSCI that was not designed – even though there happen not to be any examples.

    Right now I am not sure whether it would be functional or not!

  48. Mark:

    You are right. I made a mistake. That phrase should be:

    “c) Not connected to design (for instance, generated in environments where there is no reason to expect the intervention of a designer).

    Sometimes I get confused, and I am ready to admit it. As I was speaking of dFSCI, the functionality was implied.

    So, as I believe we have discussed earlier, the absence of connection to design would derive only from a reasonable assumption that no designer had access to the system. (You will remember your objection that in principle an omnipotent designer could, but please, let’s not go that way again 🙂 )

  49. Petrushka:

    “I’m curious how dFSCI differs from the following formulation of the problem:”

    It refers to digital strings, and is therefore applicable to molecular data. And it specifies a metric, and a specific methodology of judgement.

  50. Petrushka:

    “I suppose it’s possible that people like Szostak are ninnies and that you are right.”

    Everything is possible. Being right is not an academic endowment.

    “Incidentally, in the arena of medical diagnostic tests, do you know of any that do not have false positives?”

    You can virtually eliminate false positives in many cases, if you accept a lot of false negatives. If we used a much lower threshold in dFSCI evaluation, we would have a few false positives, and fewer false negatives. The threshold in dFSCI is fixed at extreme values, exactly to get that kind of result.
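
    A rough way to see the trade-off being described here, under the simplifying assumption that a functional target occupies a fraction 2^-T of the search space: the expected number of purely random hits in N trials is then N·2^-T, while any designed string whose functional complexity happens to fall below T bits is simply rejected (a false negative). The trial count and thresholds in the sketch below are illustrative assumptions, not figures taken from the discussion.

```python
# Illustrative sketch only: how a complexity threshold trades false
# positives for false negatives, assuming a "functional" target that
# occupies a fraction 2**-T of the search space.

def expected_chance_hits(trials: float, threshold_bits: float) -> float:
    """Expected number of random (undesigned) strings that clear the threshold."""
    return trials * 2.0 ** (-threshold_bits)

TRIALS = 1e40  # hypothetical number of random attempts (an assumed figure)

for t in (30, 70, 150):
    print(f"threshold = {t:>3} bits -> expected chance hits ~ {expected_chance_hits(TRIALS, t):.3e}")

# With a low threshold some chance hits get through (false positives);
# with a very high threshold chance hits become negligible, but every
# designed string below the threshold is now missed (a false negative).
```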

    “Since the Designer does not design like a human being, and you have provides no alternative model for the Designer, I’m wondering how you can be certain you have eliminated false positives.”

    In applications to human artifacts, I am not aware of any false positive. I believe the same can reasonably be assumed to hold for the controversial set of biological information, but anyway my statement about no false positives referred to the cases where you can verify the results.

  51. Toronto:

    “How do you determine what non-human design looks like?

    Does it look exactly like human design?”

    If a designed object exhibits dFSCI, that property is common to human and non-human artifacts.

    You are right that a non human artifact could have a function which we, human observers, are not able to recognize. In that case, it will be one of the many false negatives.

    But the function of an enzyme is easily recognized by humans, whoever engineered it.

  52. Mathgrrl:

    “I’m quite familiar with empirical science, thank you very much. ”

    Then, please prove it.

    “Please do excuse my slightly sloppy phrasing ”

    You are obviously excused.

    “and address the real issue: Where are your observations that led you to your construction of dFSCI?”

    It’s easy. We observe in human artifacts, and especially in digital ones, language and programs, the amazing complexity which allows us to express and implement our personal, purposeful representations: ideas, goals, functions, even feelings.

    We reflect that such an amazing flexibility of configuration in digital strings, aimed at a definite, original purpose, cannot be generated by so called “natural” settings, and not even by our computing machines (at least, not in original form).

    So we wonder if some objective formal property is associated with these very specific outputs.

    While the distinguishing mark of designed things remains that they are designed by a conscious intelligent being, we observe that many (not all) designed things use a specific configuration of information, reaching a very high complexity in terms of how many bits of the string must necessarily have some specific value so that the general function may be expressed.

    So, we try to define that objectively as “functional complexity”, and express it in bits.

    Then we verify that, as our initial intuition suggested, functional complexity is really associated with designed things. We realize that it apparently never arises out of a random system, except for very simple cases where the number of functional bits is so low that random variation can sometimes produce a functional, but not designed string. So, we fix a threshold of complexity, to avoid confusion with such non designed outputs.

    In the same way, we observe that some apparently complex outputs, which bear some resemblance to human artifacts, can arise from particular algorithms operating in nature, without any intervention of a designer.

    But, on closer analysis, we realize that in all such cases the Kolmogorov complexity of the output is nevertheless low.

    So, we choose to exclude all cases where such a necessary compressibility can be proved, to avoid confusion with such apparently complex, but not designed output.

    What remains seems to be a good procedure to detect true designed things, at least those that are complex.

    We apply the procedure blindly to human artifacts, and we verify that it works. Many false negatives, obviously, but apparently no false positives.

    So we are satisfied that our procedure is good, and can be applied with reasonable confidence also to truly controversial cases.

    Such as biological information.
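
    As an aside for readers following the “compressibility” step: Kolmogorov complexity is not computable, so the exclusion described above can only be illustrated crudely, for example with an off-the-shelf compressor as a rough proxy. The snippet below is such an illustration and nothing more; it is not part of anyone’s stated method.

```python
# Crude illustration of the "low Kolmogorov complexity" exclusion.
# Kolmogorov complexity is uncomputable, so a general-purpose
# compressor (zlib) is used here only as a rough stand-in.
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size (lower = more compressible)."""
    return len(zlib.compress(data, 9)) / len(data)

repetitive = b"ABCD" * 1000      # highly ordered output, the "necessity-like" case
random_bytes = os.urandom(4000)  # incompressible noise, the "chance-like" case

print(f"repetitive : ratio = {compression_ratio(repetitive):.2f}")
print(f"random     : ratio = {compression_ratio(random_bytes):.2f}")

# A highly regular output compresses to a tiny fraction of its length,
# which is roughly what the exclusion of compressible cases is meant to
# capture; random noise does not compress at all.
```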

  53. Mathgrrl:

    ““You have defined dFSCI to be a metric that is only applicable when “chance” and “necessity” are eliminated.”

    No. I have defined dFSCI to be a metric which allows us to empirically detect design, eliminating those cases with apparent function which could have originated by chance or necessity, and not by design.

    This is blatantly false. You have made it very clear that you define dFSCI to explicitly exclude results from “necessity” and “chance” and you have utterly failed to provide any alternative other than “design”. By claiming to measure dFSCI in an artifact you are asserting that it is the product of “design” by your very definition.”

    I repeat it, and you will probably not understand again:

    “I have defined dFSCI to be a metric which allows us to empirically detect design, eliminating those cases with apparent function which could have originated by chance or necessity, and not by design.”

    You answer:

    “This is blatantly false.”

    Why? It is true.

    You add, as though it were a clarification:

    “You have made it very clear that you define dFSCI to explicitly exclude results from “necessity” and “chance” and you have utterly failed to provide any alternative other than “design”.”

    And so? Where is the problem? That’s exactly what I said: “I have defined dFSCI to be a metric which allows us to empirically detect design”. Do you understand simple English? Or is the problem that you have intentionally omitted the “empirically detect design” part?

    But you say that you understand empirical sciences. Then again, where is the problem?

    I “define” a property so that it can be a valid “empirical” tool to detect something. The association is empirically observed. The tool is tailored to detect the property which has been observed to be associated with what we want to detect, and to eliminate confounding cases. If we work well, we detect what we wanted to detect. QED.

    You say:

    “explicitly exclude results from “necessity” and “chance” and you have utterly failed to provide any alternative other than “design”.

    And why should I do such a thing? If an alternative exists, we will find it, and I will have to distinguish it from design. If no alternative is observed, then I will detect design in all functional strings which cannot originate by chance and necessity.

    Don’t be confused. I have stated, and state again, that the fact that design is the only observed alternative to chance and necessity is an empirical fact, and not a logical axiom.

    But it is an empirical fact. Until other kinds of causation are shown to generate such outputs, Dembski is right in empirically stating that design is the “complement” (but not in a logical sense) to chance and necessity. And so? If that is what we observe, that is what we observe. Is that my fault? Or do you believe differently?

    “You cannot use dFSCI as a means of detecting “design” unless you know beforehand that “design” is present. It is therefore useless as a metric.”

    That is really blatantly false.

    I use dFSCI very well, and in no part of the procedure do I have to know “beforehand” that design is present.

    I observe a string in digital form.

    I recognize a function for it and define it objectively.

    I try to compute the functional complexity of the string for that function.

    I verify that no known algorithm can generate that string.

    I check the functional complexity against an appropriate threshold.

    If all of these steps are satisfied according to the procedure, then, and only then, I infer design.

    Which, anyway, being an inference, is not the same thing as “to know, (beforehand or not) that design is present”.

    You are epistemologically confused again. An inference is a “best explanation”. It is not “knowing” that something is true.
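
    For readers keeping score, the step-by-step procedure listed above can be condensed into a short sketch. Every quantity in it (the function test, the “known algorithm” flag, the target-space estimate, the 150-bit threshold) is an assumption supplied purely for illustration; this is not a working design detector.

```python
import math

# A condensed sketch of the procedure described above, with all inputs
# treated as given assumptions rather than real measurements.

THRESHOLD_BITS = 150  # the conventional cutoff mentioned in the thread

def functional_complexity_bits(log2_target_space: float, log2_search_space: float) -> float:
    """-log2(target/search), computed in log space to avoid huge numbers."""
    return log2_search_space - log2_target_space

def infer_design(has_recognizable_function: bool,
                 explained_by_known_algorithm: bool,
                 log2_target_space: float,
                 log2_search_space: float) -> bool:
    if not has_recognizable_function:
        return False  # no definable function, no inference
    if explained_by_known_algorithm:
        return False  # a known necessity mechanism accounts for the string
    fc = functional_complexity_bits(log2_target_space, log2_search_space)
    return fc > THRESHOLD_BITS  # chance is rejected only above the threshold

# Illustrative numbers only: a 300-residue protein (20**300 possible sequences)
# with an assumed target space of 10**50 functional sequences.
log2_search = 300 * math.log2(20)  # about 1297 bits
log2_target = 50 * math.log2(10)   # about 166 bits
print(infer_design(True, False, log2_target, log2_search))  # True under these assumptions
```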

  54. Mathgrrl:

    “Frankly, I’m disappointed. This is a typical attempt to shift the burden of proof combined with an argument from incredulity. I expected better of you.”

    Sorry for that. I try not to be disappointed because of my darwinist interlocutors (my life would be very sad indeed).

    Unfortunately, I always write trying to say what I believe to be right, and not to gain admirers.

  55. Mathgrrl:

    “The burden of proof is therefore clearly on you to demonstrate that you have eliminated those mechanisms before you can claim that a particular artifact exhibits dFSCI.”

    Sorry, not so.

    “I gather from your response that you have not yet done so for any real world biological system.”

    Nor will I ever try such a silly, and impossible, task. I will go on doing what I have said I do, and certainly not what you, for your inscrutable reasons, think I should do.

  56. Zachriel:

    “Infer and imply have the same meaning in this context, though infer is usually the stronger of the two as imply might mean suggest in an everyday context.”

    Well, you being who you are, I supposed you were using implication in a logico-mathematical sense. Sorry for the misunderstanding.

    “The proper ‘inference’ when lacking an explanation, is to admit to ignorance. To make a scientific determination, you have to test for entailments of the claim, not merely point to gaps in scientific or technical knowledge.”

    I don’t agree. First of all, our understanding of biochemical scenarios is more than enough to exclude reasonable necessity mechanisms, and not only a “gap” argument. The theoretical possibility of anything which cannot be proven logically impossible is not, as I have argued, a valid empirical attitude.

    And I don’t agree on the absolute value you give to entailments, although I do believe that ID has a lot of entailments. But I really disagree with your ideology of science on that point.

    Anyway, I could be happy with a scenario where both darwinists and IDists confess their ignorance about the origin of biological information, and they both admit that their theories are only tentative explanations, and that they can only be compared according to how well they partially explain data.

    A similar statement could be made with strong AI supporters about the origin of consciousness.

    That’s fine for me. I am not for absolute truths in science.

    gpuccio: But the function of an enzyme is easily recognized by humans, whoever engineered it.

    Your sentence above is right at the heart of this debate.

    We are debating whether or not the function of an enzyme requires an “engineer”!

    You can’t simply assert it was “engineered”!

    This is what your side is supposed to be proving with evidence.

  58. I verify that no known algorithm can generate that string.

    +++++++++++++++++

    Do try to provide an example of a string that cannot be produced by an evolutionary algorithm.

  59. gpuccio:

    Until other kinds of causation are shown to generate such outputs, Dembski is right in empirically stating that design is the “complement” (but not in a logical sense) to chance and necessity.

    Minor note: Dembski doesn’t state this as an empirical fact — he defines it as such. From The Design Inference, p. 36:

    “To attribute an event to design is to say that it cannot reasonably be referred to either regularity or chance. Defining design as the set-theoretic complement of the disjunction regularity-or-chance guarantees that the three modes of explanation are mutually exclusive and exhaustive.”

  60. gpuccio,

    “Where are your observations that led you to your construction of dFSCI?”

    It’s easy. We observe in human artifacts, and especially in digital ones, language and programs, the amazing complexity which allows us to express and implement our personal, purposeful representations: ideas, goals, functions, even feelings.

    We reflect that such an amazing flexibility of configuration in digital strings, aimed at a definite, original purpose, cannot be generated by so called “natural” settings, and not even by our computing machines (at least, not in original form).

    That is not an observation, that is assuming your conclusion.

    Then we verify that, as our initial intuition suggested, functional complexity is really associated with designed things. We realize that it apparently never arises out of a random system, except for very simple cases where the number of functional bits is so low that random variation can sometimes produce a functional, but not designed string. So, we fix a threshold of complexity, to avoid confusion with such non designed outputs.

    In the same way, we observe that some apparently complex outputs, which bear some resemblance to human artifacts, can arise from particular algorithms operating in nature, without any intervention of a designer.

    But, on closer analysis, we realize that in all such cases the Kolmogorov complexity of the output is nevertheless low.

    You don’t observe that, you assume it.

    So, we choose to exclude all cases where such a necessary compressibility can be proved, to avoid confusion with such apparently complex, but not designed output.

    What remains seems to be a good procedure to detect true designed things, at least those that are complex.

    We apply the procedure blindly to human artifacts, and we verify that it works.

    You can’t apply it blindly to human artifacts. You know they are human designed artifacts and you’ve defined your metric such that it can only measure human design, then you surprise yourself by detecting design where you already know it exists. That is not empirical science.

    So we are satisfied that our procedure is good, and can be applied with reasonable confidence also to truly controversial cases.

    Such as biological information.

    Your “procedure” is nothing more than some simple math wrapped around an argument from incredulity tied to an assumption of your conclusion. Empirical observations do not enter into it.

  61. gpuccio,

    “You have made it very clear that you define dFSCI to explicitly exclude results from “necessity” and “chance” and you have utterly failed to provide any alternative other than “design”.”

    And so? Where is the problem?

    The problem, as I have repeatedly pointed out, is that you cannot claim that your metric can detect design where it isn’t known to be present and simultaneously define your metric to be only measurable when design is present. The very act of applying your metric with such a definition assumes the presence of design.

    As I have noted, dFSCI is a measure of ignorance, not intelligence. It is not objective. When new knowledge is gained, dFSCI can disappear. It is useless as a metric.

    I use dFSCI very well, and in no part of the procedure do I have to know “beforehand” that design is present.

    I observe a string in digital form.

    I recognize a function for it and define it objectively.

    I try to compute the functional complexity of the string for that function.

    I verify that no known algorithm can generate that string.

    I check the functional complexity against an appropriate threshold.

    If all of these steps are satisfied according to the procedure, then, and only then, I infer design.

    By eliminating “chance” and “necessity” you are identifying “design” as the only other logical option. You have admitted that you cannot conceive of another alternative. Therefore, dFSCI provides no additional information. You know before calculating it that the artifact you are considering is designed. If you weren’t able to eliminate “chance” and “necessity” you wouldn’t calculate dFSCI in the first place because of its very definition.

  62. gpuccio,

    “The burden of proof is therefore clearly on you to demonstrate that you have eliminated those mechanisms before you can claim that a particular artifact exhibits dFSCI.”

    Sorry, not so.

    Denying the burden of proof will not make it go away. You clearly stated that:

    We must know that the variation is beyond any realistic probabilistic resources, and that no necessity mechanism is known, or even credibly imaginable, that can, alone or in association with RV, explain that output.

    Here you are correct. You must know those things before calculating dFSCI. You cannot simply assume them.

    “I gather from your response that you have not yet done so for any real world biological system.”

    Nor will I ever try such a silly, and impossible, task. I will go on doing what I have said I do, and certainly not what you, for your inscrutable reasons, think I should do.

    Unless and until you meet the burden of proof that you yourself recognized, you cannot calculate dFSCI for a real world biological system and therefore you cannot make claims about what your dFSCI calculations mean.

  63. you cannot calculate dFSCI for a real world biological system

    ++++++++++++++++++++++

    Once it is admitted that no actual calculation has been made for a specific biological object, it becomes obvious that all the claims are bullshit.

  64. Toronto:

    “You can’t simply assert it was “engineered”!”

    But I am not asserting anything. I am just saying that it has a recognizable function. If it was engineered, whoever the designer was, and however similar or non similar to us he may be, the function in this case is recognizable. This was the simple point.

    It is difficult to discuss with you darwinists, even the best of the group (and, for what it’s worth, I do believe that the people here are among the best), because you seem to have a compulsive need to find fault with anything I (or anybody in our field) say. Can’t you even conceive that an IDist can “occasionally” say reasonable things?

    To sum up: you had complained that, if the designer of biological information is somewhat different from us, we might not be able to detect the design. I answered that that would be true only if we were not able to recognize the function intended by the designer (which is a possibility). But, in the case of proteins, the biochemical function is usually obvious, so that’s not a problem.

    As you can see, I was not “asserting” anything, just concluding a hypothetical reasoning started by you.

    The advantage in biochemical single proteins is that we can refer to their immediate biochemical function, which is usually well known. Let’s call it “the local function”.

    Now, it is true that the local function is also part of more complex networks, nested in even more complex networks. Let’s call them the “higher level functions”.

    Now, it can be true, according to your reasoning, that the final purposes of the designer can be difficult to understand. I would agree, for instance, that it is not easy to say why the designer wants to implement life at all: if an answer can be given, it is certainly more a philosophical one than a scientific hypothesis.

    But, if we stick to lower level functions, all becomes easier. And if we really deal with the local function of each single protein, there is indeed no problem at all in defining the function.

  65. Petrushka:

    “Do try to provide an example of a string that cannot be produced by an evolutionary algorithm.”

    Hamlet?

  66. R0b:

    Welcome. Long time no see.

    Well, as you may know, I don’t consider what Dembski says as a sacred scripture. I remain convinced that he says what he says as a consequence of an implicit reasoning about what is observed. If he means it in a logical sense, and not as an empirical fact, then I disagree. I don’t believe there is any general theory of agency, universally accepted, from which we can deduce such a statement.

    But we can certainly accept it as an empirical statement.

  67. Mathgrrl:

    “That is not an observation, that is assuming your conclusion.”

    Why? Well, maybe the fault is in my phrasing. I must be really careful with all of you; you never try to understand what I am trying to say in context. So, let’s be more precise:

    “We reflect that such an amazing flexibility of configuration in digital strings, aimed at a definite, original purpose, is not generated by so called “natural” settings in any of the cases we can observe, and not even by our computing machines we can observe (at least, not in original form).”

    Are you OK with that?

    “In the same way, we observe that some apparently complex outputs, which bear some resemblance to human artifacts, can arise from particular algorithms operating in nature, without any intervention of a designer.

    But, at a better analysis, we realize that in all such cases the kolmogorov complexity of the output is however low.”

    Again, why? That is not an assumption. Our “analysis” shows that the Kolmogorov complexity is lower, if we take into account the generating algorithm. Where is the assumption?

    “You can’t apply it blindly to human artifacts. You know they are human designed artifacts and you’ve defined your metric such that it can only measure human design, then you surprise yourself by detecting design where you already know it exists. That is not empirical science.”

    Wrong. I can apply anything blindly. I can receive a list of strings, and be asked to apply the dFSCI calculation to them. The list can be made of true human artifacts and of randomly generated strings, but I am blind to that. My point is that, if I can recognize a function, and if the functional string exhibits dFSCI, it will be a human artifact. IOWs, I will not have a false positive.

    “Your “procedure” is nothing more than some simple math wrapped around an argument from incredulity tied to an assumption of your conclusion. Empirical observations do not enter into it.”

    Frankly, I cannot imagine a more gratuitous and wrong statement. But you are free to go on like that. I will try not to be disappointed.

  68. Mathgrrl:

    “The problem, as I have repeatedly pointed out, is that you cannot claim that your metric can detect design where it isn’t known to be present and simultaneously define your metric to be only measurable when design is present. The very act of applying your metric with such a definition assumes the presence of design.”

    Well, here maybe we can point to some more real misunderstanding.

    You should have understood, now, that the final judgement of affirming the presence of dFSCI is a categorical binary judgement: we say it is there, or not.

    The categorization, however, is made on a quantitative metric: the bits of functional complexity. And provided that the other conditions (definable function and non known generating algorithm) are satisfied.

    Now, here is where you are confused: the metric is indeed the measurement of functional complexity.

    Now, functional complexity can be measured on any string for which a function has been defined.

    Then, we categorize the result as dFSCI being present (that is, FC is higher than our conventional threshold) or absent.

    So, I can apply my metric to all cases, be they designed objects or not. According to the value of functional complexity, I classify the results as exhibiting dFSCI or not. And, on empirical premises, I infer design for the cases with high functional complexity (and the other requirements).

    Is that clear?

  69. Mathgrrl:

    “As I have noted, dFSCI is a measure of ignorance, not intelligence. It is not objective. When new knowledge is gained, dFSCI can disappear. It is useless as a metric.”

    Nonsense. Any metric can be made more accurate by new data. dFSCI is very objective for the context where it is measured. If the context changes, because a new understanding is gained (for instance, a path to the result which is NSable is demonstrated), the metric will be applied to the new, more accurate context.

    That’s a general problem of methodology, and has nothing to do with the metric itself.

  70. Mathgrrl:

    “By eliminating “chance” and “necessity” you are identifying “design” as the only other logical option. You have admitted that you cannot conceive of another alternative. Therefore, dFSCI provides no additional information.”

    You must be kidding. It provides the additional information of identifying for which objects it is reasonable to infer design, and for which there is no reason to do that.

    “You know before calculating it that the artifact you are considering is designed.”

    No.

    “If you weren’t able to eliminate “chance” and “necessity” you wouldn’t calculate dFSCI in the first place because of its very definition.”

    No. I eliminate necessity by a careful analysis of possible algorithms which can give that kind of result. But I eliminate chance by the calculation of functional complexity. So, it is senseless to say that I “wouldn’t calculate dFSCI if I weren’t able to eliminate chance”.

    I eliminate chance by calculating functional complexity, and by affirming the presence of dFSCI if the value of functional complexity is high enough.

    But I thought that all of that should be obvious and explicit in my definition. Why is it that I have to repeat the same simple things so many times?

  71. Mathgrrl:

    “Here you are correct. You must know those things in before calculating dFSCI. You cannot simply assume them.”

    I must know that:

    a) “the variation is beyond any realistic probabilistic resources”. That is ensured by a correct calculation of functional complexity, and an appropriate threshold.

    b) “no necessity mechanism is known, or even credibly imaginable, that can, alone or in association with RV, explain that output.” That’s easy. If a mechanism is known, it is known. If mechanisms are proposed, like the neo-darwinian mechanism, I can evaluate if it really explains what it says it explains. That analysis is a fundamental part of the ID theory. If a mechanism has been credibly imagined, it can be analyzed. If you want to be tedious, I can say that the mechanism must have been “credibly imagined”, because I see that if I say “imaginable” you will come again with the objection of any possible future mechanism.

    Well, I will be clear once and for all. Any necessity mechanism must be explicitly shown, or at least proposed in enough detail that it can be objectively evaluated. The mere hope that some future mechanism can explain what is not explained now is not a scientific attitude for me. I don’t share Petrushka’s blind faith that the present scientific principles and methods can explain everything.

  72. Mathgrrl:

    “Unless and until you meet the burden of proof that you yourself recognized, you cannot calculate dFSCI for a real world biological system and therefore you cannot make claims about what your dFSCI calculations mean.”

    I make the claims that I make. You have strange and unreliable interpretations of my claims, IMO. Again, that’s your choice. But my claims remain unchanged.

  73. Petrushka:

    “Once it is admitted that no actual calculation has been made for a specific biological object, it becomes obvious that all the claims are bullshit.”

    As I have shown a definite calculation for almost 30 different protein families, your statement is bullshit.

    gpuccio: Can’t you even conceive that an IDist can “occasionally” say reasonable things?

    I do believe that ID’ists can say reasonable things, but I think the frustration I feel is shared by others here, that the conclusions reached by ID’ists aren’t warranted by the evidence the ID’ists themselves have presented.

    Your side embraces what we DON’T know about evolution as evidence against what we DO know which is pointless. If you discover Z it may be a good argument for or against X, but NOT knowing something is NOT a positive or negative to X at all.

    The ID side asks valid questions about transitionals and intermediates, missing links and other solid evidence for evolution. This is good science.

    Your side gives no positive evidence for ID, only negative arguments against evolution. This is bad science.

    If the designers were aliens or in any way constrained beings, there would be artifacts left by them such as partial designs, obsolete designs, schematics, specs and other documentation.

    Why don’t we see that?

    Why don’t you look for these things?

    Why is ID the only field that draws its own line in the sand that it won’t cross?

    No researcher in any other field has managed to temper their curiosity as ID’ists do.

    Why are you the only discipline that refuses to follow the path you are on to its prime component, which in your case is the designer?

  75. Hamlet?

    ++++++++++++++++++++++

    Cute, but disingenuous. Limit yourself to 150 bits, or quit using the 150 bit criterion.

    Refresh our memories of a specific calculation, and remind us of how you demonstrated that the coding sequence could not be reached by incremental change.

    Remember, this determination must be made before you begin the calculation.

  76. gpuccio,

    “That is not an observation, that is assuming your conclusion.”

    Why? Well, maybe the fault is in my phrasing. I must be really careful with all of you; you never try to understand what I am trying to say in context. So, let’s be more precise:

    “We reflect that such an amazing flexibility of configuration in digital strings, aimed at a definite, original purpose, is not generated by so called “natural” settings in any of the cases we can observe, and not even by our computing machines we can observe (at least, not in original form).”

    Are you OK with that?

    You are still assuming your conclusion that “natural” mechanisms are insufficient to generate the configurations we see in real world biological systems. We observe a number of mechanisms identified by modern evolutionary theory and we observe that those mechanisms can generate functionality (e.g. Lenski’s citrate experiment, synthesis of nylonase, evolution of the mammalian middle ear, etc.). We find that we can model those mechanisms and generate functionality such as parasitism and hyper-parasitism in simulators like Tierra.

    Nowhere do we observe non-human intelligent agents influencing the development of real world biological systems nor do we find any artifacts that even hint at such entities.

    You are assuming your conclusion based on an argument from incredulity.

  77. gpuccio,

    “You can’t apply it blindly to human artifacts. You know they are human designed artifacts and you’ve defined your metric such that it can only measure human design, then you surprise yourself by detecting design where you already know it exists. That is not empirical science.”

    Wrong. I can apply anything blindly.

    Willfully blinding yourself doesn’t count.

    You know for a fact that human artifacts are a product of human design. It is therefore logically impossible to apply your metric “blindly”. In a true blinded study you would not know whether or not a particular artifact was the product of intelligent agency. Your very definitions prevent you from conducting such a study.

  78. gpuccio,

    “The problem, as I have repeatedly pointed out, is that you cannot claim that your metric can detect design where it isn’t known to be present and simultaneously define your metric to be only measurable when design is present. The very act of applying your metric with such a definition assumes the presence of design.”

    Well, here maybe we can point to some more real misunderstanding.

    You should have understood, now, that the final judgement of affirming the presence of dFSCI is a categorical binary judgement: we say it is there, or not.

    Then it makes no sense for you to talk of “bits” of dFSCI.

    The categorization, however, is made on a quantitative metric: the bits of functional complexity. And provided that the other conditions (definable function and non known generating algorithm) are satisfied.

    Now, here is where you are confused: the metric is indeed the measurement of functional complexity.

    Now, functional complexity can be measured on any string for which a function has been defined.

    Then, we categorize the result as dFSCI being present (that is, FC is higher than our conventional threshold) or absent.

    So, I can apply my metric to all cases, be they designed objects or not. According to the value of functional complexity, I classify the results as exhibiting dFSCI or not.

    You appear to be changing your definition again. You have not previously distinguished between dFSCI and “functional complexity”. In fact, you have suggested that you can measure dFSCI in bits.

    And, on empirical premises, I infer design for the cases with high functional complexity (and the other requirements).

    You keep using that word “empirical” incorrectly. You clearly do not use empirical observations to determine if a particular value of “functional complexity” demonstrates dFSCI. You have explicitly said that dFSCI is defined to be the product of intelligent agency (or, rather, the complement of “chance” and “necessity” which is logically equivalent).

    dFSCI remains useless as a metric because it can only exist for designed artifacts by your definition.

    Now, if your contention is that certain levels of “functional complexity”, however you define that, are indicative of design and that “functional complexity” can be measured in systems such as Tierra, I would be most interested in performing that calculation. Is that your contention?

  79. gpuccio,

    “As I have noted, dFSCI is a measure of ignorance, not intelligence. It is not objective. When new knowledge is gained, dFSCI can disappear. It is useless as a metric.”

    Nonsense. Any metric can be made more accurate by new data. dFSCI is very objective for the context where it is measured. If the context changes, because a new understanding is gained (for instance, a path to the result which is NSable is demonstrated), the metric will be applied to the new, more accurate context.

    You are admitting that dFSCI is a measure of ignorance about the history of the system under consideration. This goes back to my point that, unless you can eliminate all possible “chance” and “necessity” mechanisms, you cannot calculate a value for dFSCI.

  80. gpuccio,

    “By eliminating “chance” and “necessity” you are identifying “design” as the only other logical option. You have admitted that you cannot conceive of another alternative. Therefore, dFSCI provides no additional information.”

    You must be kidding. It provides the additional information of identifying for which objects it is reasonable to infer design, and for which there is no reason to do that.

    It doesn’t do that at all. By claiming that dFSCI exists, you are implicitly asserting that “design” exists by the very definition of dFSCI. This is even more apparent now that you’ve said that dFSCI is a binary indicator. It’s just another way of saying “This was designed.” which you’ve already said when you claim “This is not the product of ‘chance’ or ‘necessity’.”

    Whether you can see it yourself or not, your definition is circular and your conclusions are assumed in your premises.

  81. gpuccio,

    “Unless and until you meet the burden of proof that you yourself recognized, you cannot calculate dFSCI for a real world biological system and therefore you cannot make claims about what your dFSCI calculations mean.”

    I make the claims that I make. You have strange and unreliable interpretations of my claims, IMO. Again, that’s your choice. But my claims remain unchanged.

    Unchanged and unsupported. You have the burden of proof and you are refusing to bear it. There is therefore no reason to take your claims seriously.

    You want to be able to claim that dFSCI exists if no one else can prove it doesn’t. This is clearly a Designer of the Gaps argument.

  82. gpuccio: And I don’t agree on the absolute value you give to entailments, although I do believe that ID has a lot of entailments. But I really disagree with your ideology of science on that point.

    While your claims depend on a negative argument (complete knowledge of the relevant domain), hypothetico-deduction can peer into the great abyss of ignorance and reach some reasonable, albeit tentative, conclusions. Halley’s prediction of the Comet wasn’t just a guess, you know, but a consequence, i.e. an entailment, of the Theory of Gravity. It doesn’t ‘prove’ the Universal Theory of Gravity, but it does lend dramatic support.

    Mathgrrl: As I have noted, dFSCI is a measure of ignorance, not intelligence. It is not objective. When new knowledge is gained, dFSCI can disappear. It is useless as a metric.

    gpuccio: Nonsense. Any metric can be made more accurate by new data.

    We may use a better ruler to more accurately measure the length of an ordinary object, but the length doesn’t usually just up and disappear because we learn something new about the object.

    gpuccio: I eliminate necessity by a careful analysis of possible algorithms which can give that kind of result. But I eliminate chance by the calculation of functional complexity.

    So this:

    1. The sequence must have a recognizable function.
    2. The sequence must not be due to known necessity mechanisms.
    3. We take the -log2 of the ratio of sequences that exhibit the function to the number of possible sequences.

    Which is equivalent to this:

    Given a complex (not chance) functional structure, No known necessity mechanism (not necessity) makes design the best explanatory inference.

    Or:

    1. Functional.
    2. Not necessity.
    3. Not chance.
    Therefore design.

    As functional is just a type of specification.

    1. Specification.
    2. Not necessity.
    3. Not chance.
    Therefore Dembski.

    In any case, it’s a typical Gap argument. It requires being able to reasonably exclude all necessity mechanisms.

    On the necessity front:

    gpuccio: First of all, our understanding of biochemical scenarios is more than enough to exclude reasonable necessity mechanisms, and not only a “gap” argument.

    Yes, our understanding of biochemistry is now so complete that biochemical journals struggle to fill their pages with new findings. Meanwhile, ID journals can’t publish their new discoveries fast enough to keep up with the vast quantity of ID output.

    In any case, you claim that evolution is not capable of generating this dFSCI, so you really haven’t resolved the issue. It still hinges on that claim and has little, if anything, to do with the fancy calculations.

  83. Toronto:

    “I do believe that ID’ists can say reasonable things, but I think the frustration I feel is shared by others here, that the conclusions reached by ID’ists aren’t warranted by the evidence the ID’ists themselves have presented.”

    I think you must accept that we have different views of things. I don’t think I will be convinced by your arguments, and not because I am closed to them, but because I really find them wrong. You can obviously think the same of me. It’s called “agree to disagree”, and I believe it’s a mark of a civil confrontation.

    “The ID side asks valid questions about transitionals and intermediates, missing links and other solid evidence for evolution. This is good science.

    Your side gives no positive evidence for ID, only negative arguments against evolution. This is bad science.”

    Just your interpretation of things. As you know, I disagree.

    “Why are you the only discipline that refuses to follow the path you are on to its prime component, which in your case is the designer?”

    Unwarranted, wrong statement. Like all the others. But I cannot go on forever always saying the same things. Again, let’s agree to disagree.

  84. Petrushka:

    “Cute, but disingenuous. Limit yourself to 150 bits, or quit using the 150 bit criterion.”

    Why? You asked for an example, and I gave it. Why do you evade?

    The rule is that functional information must be higher than 150 bits. Functional information in Hamlet certainly is.

    You know, higher than 150 bits means higher than 150 bits.

  85. Mathgrrl:

    “You are still assuming your conclusion that “natural” mechanisms are insufficient to generate the configurations we see in real world biological systems.”

    No, I observe that no known “natural” mechanism can do that. I assume nothing.

    “We observe a number of mechanisms identified by modern evolutionary theory and we observe that those mechanisms can generate functionality (e.g. Lenski’s citrate experiment, synthesis of nylonase, evolution of the mammalian middle ear, etc.). We find that we can model those mechanisms and generate functionality such as parasitism and hyper-parasitism in simulators like Tierra.”

    I have already criticized those points. Analyzing the false statements of darwinism is one of the tasks of ID. None of your arguments denies the validity of dFSCI as an empirical marker of design, and none of them shows that biological information can be explained, and that it is not dFSCI.

    “Nowhere do we observe non-human intelligent agents influencing the development of real world biological systems nor do we find any artifacts that even hint at such entities.”

    We infer that. We infer many things that we cannot observe, in science. But the artifacts are there: biological information.

    “You are assuming your conclusion based on an argument from incredulity.”

    False, but you are free to believe it. As I have said to Toronto, I cannot spend my time denying your false statements forever.

  86. Mathgrrl:

    “Willfully blinding yourself doesn’t count.”

    Strange, I thought that was an integral part of scientific methodology.

    “You know for a fact that human artifacts are a product of human design. It is therefore logically impossible to apply your metric “blindly”. In a true blinded study you would not know whether or not a particular artifact was the product of intelligent agency. Your very definitions prevent you from conducting such a study.”

    I really think you have lost your reason. I have said that I can apply the procedure to a series of objects about which I don’t know whether they are human artifacts or not. Do you still understand simple English?

  87. Mathgrrl:

    “Then it makes no sense for you to talk of “bits” of dFSCI.”

    I am really tired of these arguments. It’s frustrating. You stick to any possible false “argument”. The threshold for functional information is expressed in bits. I have suggested a 150-bit “threshold”. Can you understand the word “threshold”? A threshold is used to get a binary category from a continuous measurement.

    Functional information is what is measured. “Complex” functional information is affirmed if functional information is above the threshold. All this is very clear in my definition. Why do I have to explain all the details again and again?

  88. Mathgrrl:

    “You appear to be changing your definition again. You have not previously distinguished between dFSCI and “functional complexity”. In fact, you have suggested that you can measure dFSCI in bits.”

    No. Absolutely not. I have changed nothing!

    Please look at the definition again.

    I am not responsible if you don’t understand things, even if they are clearly stated.

  89. Mathgrrl:

    “You keep using that word “empirical” incorrectly. You clearly do not use empirical observations to determine if a particular value of “functional complexity” demonstrates dFSCI. You have explicitly said that dFSCI is defined to be the product of intelligent agency (or, rather, the complement of “chance” and “necessity” which is logically equivalent).”

    Go on with this nonsense. This is no more a discussion.

  90. Mathgrrl:

    “dFSCI remains useless as a metric because it can only exist for designed artifacts by your definition.”

    As explained, the metric is of functional complexity. Functional complexity above the threshold exists only in designed artifacts, but according to empirical observations, and not by definition.

  91. Mathgrrl:

    “Now, if your contention is that certain levels of “functional complexity”, however you define that, are indicative of design and that “functional complexity” can be measured in systems such as Tierra, I would be most interested in performing that calculation. Is that your contention?”

    Functional complexity can be calculated in outputs. It can certainly be calculated in the output of Tierra, but only for the part which originated from RV. You have certainly not helped me to understand how to do that in reality. I have asked many specific things about Tierra, and you have not answered.

  92. Mathgrrl:

    “You are admitting that dFSCI is a measure of ignorance about the history of the system under consideration. This goes back to my point that, unless you can eliminate all possible “chance” and “necessity” mechanisms, you cannot calculate a value for dFSCI.”

    This is nonsense again. dFSCI is measured according to what we know. You would probably argue that we cannot have a theory of anything, unless we can eliminate all possible alternative theories.

    Nonsense. Mere bad intellectual propaganda. Science is empirical, and works with what is known, to provide a “best explanation”.

  93. Mathgrrl:

    “It doesn’t do that at all. By claiming that dFSCI exists, you are implicitly asserting that “design” exists by the very definition of dFSCI. This is even more apparent now that you’ve said that dFSCI is a binary indicator. It’s just another way of saying “This was designed.” which you’ve already said when you claim “This is not the product of ‘chance’ or ‘necessity’.”

    Whether you can see it yourself or not, your definition is circular and your conclusions are assumed in your premises.”

    Blah, blah, blah…

  94. “Unchanged and unsupported. You have the burden of proof and you are refusing to bear it. There is therefore no reason to take your claims seriously.

    You want to be able to claim that dFSCI exists if no one else can prove it doesn’t. This is clearly a Designer of the Gaps argument.”

    You are starting to sound like Maya. No intention to offend either of you.

  95. Zachriel:

    “While your claims depend on a negative argument (complete knowledge of the relevant domain), hypothetico-deduction can peer into the great abyss of ignorance and reach some reasonable, albeit tentative, conclusions. Halley’s prediction of the Comet wasn’t just a guess, you know, but a consequence, i.e. an entailment, of the Theory of Gravity. It doesn’t ‘prove’ the Universal Theory of Gravity, but it does lend dramatic support.”

    I agree. I am not saying that entailments are useless. I am saying that you cannot build a general theory of science on them. It’s not the same thing, as you will certainly understand.

  96. Zachriel:

    “We may use a better ruler to more accurately measure the length of an ordinary object, but the length doesn’t usually just up and disappear because we learn something new about the object.”

    Different context. Information and meaning require a different approach. Again, the reductionists just deny what exists (consciousness, information and meaning), or state that it cannot be known by their self-made rules.

    Well, that’s true. Those things cannot be understood by their self-made rules. Their self-made rules are wrong in that context.

  97. Zachriel:

    “Yes, our understanding of biochemistry is now so complete that biochemical journals struggle to fill their pages with new findings. Meanwhile, ID journals can’t publish their new discoveries fast enough to keep up with the vast quantity of ID output.”

    Silly. You can do better.

  98. Zachriel:

    “In any case, it’s a typical Gap argument. It requires being able to reasonably exclude all necessity mechanisms.”

    Only in your imagination.

  99. Zachriel:

    “In any case, you claim that evolution is not capable of generating this dFSCI, so you really haven’t resolved the issue. It still hinges on that claim and has little, if anything, to do with the fancy calculations.”

    a) Strict, rigorous, effective falsification of the wrong neo darwinian model

    AND

    b) inference of design by what you call “fancy calculations”

    = ID theory.

    Both parts are necessary.

  100. Gpuccio:

    I have said that I can apply the procedure to a series of objects about which I don’t know whether they are human artifacts or not.

    Excuse me for butting in, but if you can indeed do this, why not give us a demonstration? Is there one procedure for any object? Surely one demonstration, rather than just claiming the ability, will silence all critics!

  101. gpuccio: Functional complexity above the threshold exists only in designed artifacts, but according to empirical observations, and not by definition.

    What you are asserting is what we are debating.

    Our position is that the “complexity we see in biology arose without design”.

    You then try and present as EVIDENCE to support your position that the “complexity we see in biology is due to design”, the FACT that the “complexity we see in biology is due to design”.

    You have simply stated that you are right.

  102. gpuccio: Functional complexity above the threshold exists only in designed artifacts, but according to empirical observations, and not by definition.

    Toronto: You have simply stated that you are right.

    It certainly reads as circular. What he’s really attempting is an extrapolation, an argument by analogy. We have two classes of functional complexity; human designed, and biological. As he rejects evolution as an explanation of the latter, he extrapolates the former to cover both cases.

    Whether gpuccio recognizes it or not, this is a classic Gap argument. Lacking a satisfactory explanation, he inserts his preferred metaphysical paste to fill the gap.

    gpuccio: First of all, our understanding of biochemical scenarios is more than enough to exclude reasonable necessity mechanisms, and not only a “gap” argument.

    Zachriel: Yes, our understanding of biochemistry is now so complete that biochemical journals struggle to fill their pages with new findings. Meanwhile, ID journals can’t publish their new discoveries fast enough to keep up with the vast quantity of ID output.

    gpuccio: Silly. You can do better.

    It’s apropos. You claimed that our depth of understanding of biochemistry is such that we can exclude necessity. Yet, biochemistry is a very active field. In any case, the way to peer into the depths of human ignorance is by proposing and testing hypotheses. ID is notably sterile in this regard.

    gpuccio: a) Strict, rigorous, effective falsification of the wrong neo darwinian model AND b) inference of design by what you call “fancy calculations” = ID theory.

    Clause a) means all known necessity mechanisms. That’s why it’s a Gap argument. We can recognize it because the more ignorant we are, the more likely we are to conclude design. It also hinges on rejecting evolutionary theory.

    We mentioned this before, but you never responded; the complex and functional movement of the classical planets. Lacking a satisfactory mechanism, and by analogy with human-designed devices, such as astrolabes, we conclude design. Discover gravity, and suddenly the FSCI evaporates along with the design conclusion.

  103. Zachriel: We have two classes of functional complexity; human designed, and biological. As he rejects evolution as an explanation of the latter, he extrapolates the former to cover both cases.

    That’s an accurate description.

    What I don’t understand is that after someone writes a sentence like that and then reads it themselves, how can they not see what they’ve done?

  104. ..and by “a sentence like that”, of course I mean this one:

    gpuccio: Functional complexity above the threshold exists only in designed artifacts, but according to empirical observations, and not by definition.

  105. gpuccio: Functional complexity above the threshold exists only in designed artifacts, but according to empirical observations, and not by definition.
    ————————
    Of course there are lots of empirical observations of the Designer at work on proteins.

  106. Nonsense. Any metric can be made more accurate by new data. dFSCI is very objective for the context where it is measured. If the context changes, because a new understanding is gained (for instance, a path to the result which is NSable is demonstrated), the metric will be applied to the new, more accurate context.

    That’s a general problem of methodology, and has nothing to do with the metric itself.
    ———————————-

    Need I point out that the supposedly objective metric will produce false positives in every instance where historical knowledge is lacking?

    The consequences for science can be seen in the behavior of ID/creationists, who invariably look for gaps rather than look for regular phenomena.

    And write “There be dragons” on maps, rather than explore the regions.

  107. a new understanding is gained

    —————————-

    This is called moving the goalposts. No new understanding is gained, nor is there a new context. The calculation of dFSCI was wrong from the beginning. There is no way for a metric to be objective when it depends on ignorance.

    Now if dFSCI actually predicted a finding, that would be interesting, but historically, the critics of evolution have predicted that intermediates are impossible or will not be found, and historically they have been wrong.

  108. More confirmation of Keefe and Szostak’s work.

    Sort of undermines Gpuccio’s speculation:

    It is true, however, that nobody, at present, can exactly calculate the size of the target set in any specific case. We simply don’t know enough about proteins.

    So, we are left with a difficulty: to calculate the probability of our functional event, we have the denominator, the search space, which is extremely huge, but we don’t have the numerator, the target space. Should we be discouraged?

    Not too much. It is true that we don’t know the numerator exactly, but we can have perfectly reasonable ideas about its order of magnitude. In particular we can be reasonably certain that the size of the target space will never be so big as to give a final probability which is within the boundaries, just to give an example, of Dembski’s UPB. Not for a 300 aa protein. And a 300 aa protein is not a very long protein. (I will not enter into details here for brevity, but here the search space is 20^300; even if it were 10^300, we still would need a target space of at least 10^150 functional proteins to ensure a probability for the event of 1:10^150, and such a huge functional space is really inconceivable, in the light of all that we know about the constraints for protein function.)

    Link

    Seems functionality is widespread in random protein sequences, gpuccio!
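    (As an aside, the order-of-magnitude arithmetic in the quoted passage is easy to check. Below is a minimal Python sketch, assuming the -log2 ratio form of the metric used throughout the thread; the figures (a 300 aa protein, a 10^300 search space, a 10^150 target space) are the ones quoted above, and the script simply reproduces the claimed magnitudes.)

        import math

        # Order-of-magnitude check of the figures quoted above.
        log10_search = 300 * math.log10(20)   # 20^300 is roughly 10^390
        print(f"search space for a 300 aa protein ~ 10^{log10_search:.0f}")

        # Even granting the smaller 10^300 search space and a target space
        # of 10^150 functional sequences, as in the quote:
        prob = 10**150 / 10**300              # = 10^-150
        bits = -math.log2(prob)               # about 498 bits
        print(f"probability ~ 10^-150, i.e. about {bits:.0f} bits of functional information")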

  109. Sorry, just realised that my previous comment does not make it clear that I am referring to Petrushka’s comment and the linked paper by Fisher et al.

  110. Seems functionality is widespread in random protein sequences, gpuccio!
    ———————

    Is that in reference to this comment by GP:
    ————————

    A simple question: if the natural search space of proteins is so rich in function as you and Zachriel seem to believe, how is it that intelligent protein engineering, in many years, has not yet found one single new protein fold which may be said to be really useful for anything?

    Why no intermediaries?

    Or this:
    ———————-

    And anyway, even more in general, the darwinian hypothesis is that minimal function must be selectable by NS (IOWs, must be able to confer a positive differential reproduction) to be optimized by evolution. The Szostak protein does not meet this requirement, by far it does not meet it, not even in its “refined” form, least of all in its original form.

    And the refinement itself was accomplished through artificial intelligent selection in the lab. It could never have happened in a natural biological system. The ability to bind ATP loosely is barely enough to select the molecules in a very sensitive experimental device. That has nothing to do with conferring positive differential reproduction on a living being.

    But I am afraid you will again deny even these elementary things.

    Why no intermediaries?

  111. I seem to hear the shuffle of far off goalposts being moved.

    Now the discussion will shift from the sparseness of function space to the fact that the artificial genes were designed.

  112. Alan:

    Welcome to the discussion.

    Excuse me 🙂, but in the course of the three threads I have given many examples.

    For proteins, I have given the computation made in the Durston paper for 35 protein families, 28 of which (if I remember correctly) comply with my threshold of 150 bits.

    For human artifacts, I have given the example of Hamlet, attempting an approximate calculation of its functional complexity.
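    (For readers following the mechanics, here is a toy Python sketch of what such a threshold check amounts to, assuming the -log2 ratio form of the metric and the 150-bit cut-off mentioned above. The function names and the example figures, a hypothetical 100 aa domain with 10^40 functional sequences, are purely illustrative; the size of the target space is, of course, the very quantity in dispute.)

        import math

        THRESHOLD_BITS = 150  # the cut-off cited above for protein families

        def functional_bits(target_size: float, search_size: float) -> float:
            """-log2 of the fraction of sequences exhibiting the function."""
            return -math.log2(target_size / search_size)

        def infers_design(target_size, search_size, necessity_mechanism_known):
            """Toy decision rule: over the threshold, and no known necessity path."""
            if necessity_mechanism_known:
                return False
            return functional_bits(target_size, search_size) > THRESHOLD_BITS

        # Hypothetical example: a 100 aa domain, 10^40 functional sequences
        # out of 20^100 possible ones (roughly 10^130).
        print(functional_bits(1e40, 20.0**100))   # roughly 299 bits
        print(infers_design(1e40, 20.0**100, necessity_mechanism_known=False))  # True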

  113. Toronto:

    “Our position is that the “complexity we see in biology arose without design”.

    You then try and present as EVIDENCE to support your position that the “complexity we see in biology is due to design”, the FACT that the “complexity we see in biology is due to design”.”

    As usual, I have never said such a thing.

    It is rather tiring to keep being called on to defend what I have not said.

    What I have said is that biological information is the set about which there is controversy, and no final knowledge about the cause.

    dFSCI can be proven empirically to be a reliable indicator of design in the set of human artifacts (including objects which could appear to be human artifacts, but are not).

    Therefore I apply dFSCI in favor of a design inference for biological information. As Zachriel (and only he) seems to have understood (after I have said it n times), it is an inference by analogy.

    Maybe some time you will understand too 🙂 .

  114. Zachriel:

    “It certainly reads as circular.”

    No.

    “What he’s really attempting is an extrapolation, an argument by analogy”

    Yes. But the correct term is “inference”.

    “As he rejects evolution as an explanation of the latter, he extrapolates the former to cover both cases.”

    I don’t “reject evolution”. I find, for very reasonable motives, that the neo darwinian model is not a valid explanation.

    “Whether gpuccio recognizes it or not, this is a classic Gap argument. Lacking a satisfactory explanation, he inserts his preferred metaphysical paste to fill the gap.”

    And this is a classic wrong epistemological argument. And, if you don’t take it as an offence, trivial anti-ID propaganda.

    “the way to peer into the depths of human ignorance is by proposing and testing hypotheses. ID is notably sterile in this regard.”

    I have already commented on both these points. Repeating our positions forever will not help.

    “Clause a) means all known necessity mechanisms.”

    No. Already commented on that.

    “That’s why it’s a Gap argument.”

    No.

    “We can recognize it because the more ignorant we are, the more likely to conclude design.”

    Wrong. It’s exactly the overwhelming growth of knowledge about biological complexity which has made the design hypothesis imperative.

    “It also hinges on rejecting evolutionary theory.”

    Yes. True. Sure. Absolutely so. (meaning the neo darwinian model, obviously)

    “We mentioned this before, but you never responded; the complex and functional movement of the classical planets. Lacking a satisfactory mechanism, and by analogy with human-designed devices, such as astrolabes, we conclude design. Discover gravity, and suddenly the FSCI evaporates along with the design conclusion.”

    There is nothing to respond. This is a bad analogy, and nothing else. The scenarios are completely different, and the arguments are completely different.

  115. Toronto:

    “That’s an accurate description.”

    No.

    “What I don’t understand is that after someone writes a sentence like that and then reads it themselves, how can they not see what they’ve done?”

    I restate:

    Functional complexity above the threshold exists only in designed artifacts, but according to empirical observations, and not by definition.

    What have I done? I have stated a simple truth.

  116. Petrushka:

    “Of course there are lots of empirical observations of the Designer at work on proteins.”

    Complete non sequitur, as usual. You are a real specialist in diversions…

  117. Alan:

    I have difficulty understanding which paper you are referring to.

    Could you please provide the link? Thank you.

  118. gpuccio: Functional complexity above the threshold exists only in designed artifacts, but according to empirical observations, and not by definition.

    What have I done? I have stated a simple truth.

    But you haven’t finished that “simple truth”.

    If you try to equate human engineering to the complexity we see in biology, you have to compare on a point-to-point basis.

    For instance, when humans design, they specify first. We don’t see specs from the designer related to biology.

    Humans make mistakes which we see all the time. We don’t see that in biology.

    ID accepts micro-evolution but human designs have never been able to do that.

    ID determines the “S” in dFSCI after the fact. From you to Dembski and all ID’ers in-between, the “S” is never considered in the design until after the design has been completed.

    You actually have no way of determining “S” at all. You can only determine “F” after the design is complete and you actually see it “functioning”.

    Show me how to determine “S” without regard to “F”.

  119. Functional complexity above the threshold exists only in designed artifacts, but according to empirical observations, and not by definition.

    What have I done? I have stated a simple truth.

    —————————

    Where are the empirical observations of biological objects being designed?

  120. Zachriel: What he’s really attempting is an extrapolation, an argument by analogy

    gpuccio: Yes. But the correct term is “inference”.

    Can you provide a clear reason why extrapolation doesn’t apply, but inference does?

    gpuccio: I don’t “reject evolution”. I find, for very reasonable motives, that the neo darwinian model is not a valid explanation.

    Of course you reject evolution. You say you have reasons to do so.

    Zachriel: the way to peer into the depths of human ignorance is by proposing and testing hypotheses. ID is notably sterile in this regard.

    gpuccio: I have already commented on both these points. Repeating our positions forever will not help.

    First you said you weren’t beholden to some particular view of the scientific method. Then, you said there were entailments to ID, after all. Then, instead of providing an entailment of ID, you provided a supposed falsified entailment of evolutionary theory.

    gpuccio: a) Strict, rigorous, effective falsification of the wrong neo darwinian model AND b) inference of design by what you call “fancy calculations” = ID theory.

    Zachriel: Clause a) means all known necessity mechanisms. That’s why it’s a Gap argument.

    gpuccio: No. Already commented on that.

    You mean that even if you falsify the “neodarwinian model” but haven’t falsified other proposed models, it doesn’t matter?

    Zachriel: We mentioned this before, but you never responded; the complex and functional movement of the classical planets. Lacking a satisfactory mechanism, and by analogy with human-designed devices, such as astrolabes, we conclude design. Discover gravity, and suddenly the FSCI evaporates along with the design conclusion.

    gpuccio: There is nothing to respond. This is a bad analogy, and nothing else. The scenarios are completely different, and the arguments are completely different.

    It’s not an analogy, but an argument. We have a complex functional phenomenon without an explanation. We conclude design.
