Assessing dFSCI (3) high complexity

This third post about dFSCI deals with the third part of the definition:

c) Whose complexity is higher than 150 bits

The meaning and value of the intelligent design definition of complexity have been debated thousands of times in papers, blogs, live debates etc. all over the world. I have nothing original to add, so this post is really just a gathering of well-known arguments. The general form of the ID position is:

There are cases where, given some assumptions, the probability of producing an outcome through known processes based on natural laws and chance is astronomically low. This probability is re-expressed as bits of information, and the outcome is called complex when the probability is low enough.

In the case of dFSCI this calculation can take a more precise form.  Gpuccio has explained it many times, for example here (I have corrected a few typos):

Let’s take an example. We want to measure dFSCI in a protein, an enzyme. We recognize that the enzyme accelerates a specific reaction. So, we define that as its function. Then we define an arbitrary, but reasonable, threshold for that function (for example, that the reaction must take place at least at a certain rate in standard conditions), and if that condition is verified we give a value of 1 to the specification coefficient, otherwise we give it a value of 0. In that way, for any molecule tested for that function, the function will be present or absent.

3) Then comes the measurement of the complexity of the protein. That’s the most difficult part. There are at least two ways to do that. One is valid in principle, but can be applied only with some approximation to proteins, at least until we have a better understanding of them.

The general principle is that the complexity is the ratio between the functional space and the search space. The search space for a protein is easy to calculate: it is 20^(length of the protein in AAs).
The functional space, or target space, is the difficult part: it can be defined as the number of sequences of the same length which, if tested, would exhibit the function according to our definition.

Obviously, the measurement of the target space cannot empirically be made that way. So, we have to make reasonable inferences based on what we know of proteins and of the relation between structure and function. This is a subject of research and debate, and we are certainly making progress towards a better understanding.

If we have a reasonable assumption about the size of the functional space, the complexity of that protein can be easily calculated and expressed in bits, exactly like any other complexity (Kolmogorov complexity, Shannon’s entropy).

I think it is fairly clear how to do this calculation, even if it is practically rather difficult.  But what does it mean?  Effectively what has been estimated is the answer to this question:

If you have a protein of length N and you allocate an amino acid at random to each position (i.e. each of the 20 possible amino acids is equally likely to be allocated to a given position, and is allocated independently of the amino acids at the other positions), what is the probability that you will end up with a protein that performs the defined function? Not surprisingly, the answer is astronomically low.
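To make the arithmetic concrete, here is a minimal sketch of that calculation in Python. The target-space size is a purely hypothetical assumption for illustration; in practice it would have to be estimated from the kind of structure-function research gpuccio describes.

```python
import math

# Minimal sketch of the dFSCI complexity calculation described above.
# The target-space size passed in is a hypothetical assumption; estimating
# it for real proteins is the hard, debated part.

def dfsci_bits(length, target_space_size):
    """Complexity in bits: -log2(target space / search space)."""
    search_space = 20 ** length          # 20 possible amino acids per position
    p_function = target_space_size / search_space
    return -math.log2(p_function)

# Hypothetical example: a 150-residue protein with 10^30 functional sequences.
print(f"{dfsci_bits(150, 10 ** 30):.0f} bits")  # ~549 bits, above the 150-bit threshold
```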

A common response is that new proteins do not in fact evolve like this. They arise through small changes to already existing (and functioning) proteins, so we are only concerned with the probability of a few positions changing to create the new function. The ID proponent will typically respond that proteins actually occupy islands of function in sequence space. While the proteins within an island are very similar, there are enormous gaps between islands, with many, many differences between the proteins on either side. So a better measure of complexity might be the probability of getting from one island to its “nearest neighbour” by “random” changes in amino acids. Again, I have no reason to dispute that in some cases this probability is incredibly low.
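For the island-hopping version, a toy calculation along the same lines, assuming (hypothetically) that the nearest island differs at k specific positions and that each position must change, independently, to one particular amino acid:

```python
import math

# Toy "island hopping" probability. k = 20 is a hypothetical figure; each of
# the k positions must hit one specific residue (probability 1/20), and the
# changes are assumed independent.
k = 20
p_bridge = (1 / 20) ** k
bits = -math.log2(p_bridge)                    # equals k * log2(20)
print(f"p = {p_bridge:.1e}, {bits:.1f} bits")  # ~9.5e-27, ~86.4 bits
```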

The argument then finishes along these lines: it is massively improbable that the gap was bridged by random change alone, and we have no credible account of how evolution bridged it (e.g. a series of steps, each with a selective advantage); therefore the gap must have been facilitated by a designer.

I have three problems with this (none of them at all original).

1) I am not an evolutionary biologist, but I suspect that this model does not do justice to the range of ways a protein may change. The underlying DNA may undergo insertions, deletions, transpositions, duplications etc., which can make massive changes to a protein, but in such a way that the probabilities of particular amino acids appearing at particular positions are far from independent of their neighbours. If one protein family differs from another in 20 relevant positions, this may not require 20 separate changes. It may be accomplished in just two or three mutation events, and this would not be apparent from looking only at the end result.

2) Even if the probability of evolving a function is very low, this is not necessarily a problem. Evolution is not looking for a target; it just stumbles on useful changes (very occasionally). As many philosophers of science have pointed out, modus tollens does not carry over to low probabilities. Just because observation O is highly improbable given hypothesis H, it does not follow that H is improbable (more on this in the next post on functional specification).
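A toy Bayesian calculation makes the point concrete. All the numbers here are invented purely for illustration: the observation O is very improbable under H, yet H remains nearly certain given O, because O is even more improbable under the alternative.

```python
# Toy Bayes calculation: P(O|H) can be tiny while P(H|O) stays high.
# All numbers are invented purely for illustration.
p_O_given_H = 1e-9        # O is very improbable if H is true...
p_O_given_notH = 1e-12    # ...but far more improbable if H is false
p_H = 0.5                 # neutral prior on H

p_O = p_O_given_H * p_H + p_O_given_notH * (1 - p_H)
p_H_given_O = p_O_given_H * p_H / p_O
print(f"P(H|O) = {p_H_given_O:.3f}")  # ~0.999: H survives the improbable observation
```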

3) Even if we were to conclude that current evolutionary models cannot account for bridging the gap, it does not follow that a designer was involved. All that follows is that we do not know how the gap was bridged!

The ID community will respond that we also have positive evidence for design: there are situations outside biology which also have the characteristics of dFSCI, and they are all cases of human design. I have discussed this elsewhere.


1 Response to “Assessing dFSCI (3) high complexity”


  1. Toronto October 5, 2010 at 10:24 pm

    I consider dFSCI, or any of its derivatives, to be a case of sleight-of-hand, so that the actual evolutionary argument can be misrepresented.

    CSI is “kairosfocus”’s strawman.

    ID is like ordering a meal from a menu.

    Evolution is like going to a buffet where you don’t know what you want until you see it in front of you.

