William Dembski on Bayes

There is a long-running dispute in the world of statistics between those who think that Bayesian inference (or the closely related comparison of likelihoods) is a better way to assess hypotheses and those who favour classical hypothesis testing, which derives from the work of R. A. Fisher. This dispute is relevant to Intelligent Design. William Dembski's method for determining whether something has been designed is a rather idiosyncratic variation on classical hypothesis testing. Bayesian inference is an alternative method of detecting design, and if it is superior then it makes the ID methodology redundant and may even prove it fallacious. The ID method relies on disproving one hypothesis (chance) and concluding that the alternative (design) must be correct. Bayesian inference compares two or more hypotheses. As such it requires a clear definition of the design hypothesis, and this is something the ID world is keen to avoid.

Dembski is well aware of the likelihood approach and has tried to refute it by raising a number of objections elsewhere, notably in chapter 33 of his book “The Design Revolution” which is reproduced on his web site. But there is one objection that he raises which he considers the most damning of all and which he repeats virtually word for word in the more recent paper. He believes that the approach of comparing likelihoods presupposes his own account of specification.

He illustrates his objection with a well-worn example in the ID debate: the case of the New Jersey election commissioner Nicholas Caputo, who was accused of rigging ballot lines. It was Caputo's task to decide which party's candidate came first on the ballot paper in each election, and he was meant to do this without bias towards either party. Dembski does not have the actual data but assumes a hypothetical example where the party of the first candidate on the ballot paper follows this pattern for 41 consecutive elections (where D is Democrat and R is Republican):

DDDDDDDDDDDDDDDDDDDDDDRDDDDDDDDDDDDDDDDDD

This clearly conforms to a pattern that is very improbable under the hypothesis that Caputo was equally likely to put a Republican or a Democrat first. In fact it conforms to a number of such patterns for 41 elections, for example:

There is only one Republican as first candidate.
One party is represented only once.
There are two or fewer Republicans.
There is just one Republican and it falls between the 15th and 30th elections.
There are 40 or more Democrats.
And so on.

Dembski has decided that the relevant pattern is the last one. (This is interesting in itself, as it amounts to a single-tailed test and assumes the hypothesis that Caputo was biased towards Democrats. An alternative might simply have been that Caputo was biased, direction unknown, in which case the pattern should have been "one party is represented at least 40 times".) His argument is that when comparing the likelihoods of two hypotheses (Caputo was biased towards Democrats, or Caputo was unbiased) generating this sequence, we would not compare the probability of the two hypotheses generating this specific event but the probability of the two hypotheses generating an event which conforms to the pattern, and that we have to use his concept of a specification to know what the pattern is. But this just isn't true. Using Bayesian inference it is not necessary to consider which of the many possible patterns the result conforms to. We need only consider the likelihood of the observed result under each of the candidate hypotheses.

 

For example, under the hypothesis that Caputo was equally likely to put either party first, the probability of

DDDDDDDDDDDDDDDDDDDDDDRDDDDDDDDDDDDDDDDDD

is very low (but actually the same as for any other string of results). It is (0.5)^41, roughly 4.5 × 10^-13.

However, under the hypothesis that he was biased towards choosing Democrats (say a 9/10 chance of a Democrat each time) the probability of

DDDDDDDDDDDDDDDDDDDDDDRDDDDDDDDDDDDDDDDDD

is still very low, (0.9)^40 × 0.1 or roughly 1.5 × 10^-3, but several billion times higher than under the previous hypothesis. The question of what pattern the data conforms to does not arise. In fact this is a major selling point of Bayesian inference.
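To make the comparison concrete, here is a minimal Python sketch of the calculation. The 9/10 bias figure is the illustrative assumption used above, and the function and variable names are mine; the point is that the likelihoods are computed directly from the observed sequence, with no reference to any pattern or specification.

```python
# Likelihoods of the observed Caputo sequence under two hypotheses,
# computed directly from the sequence itself. No pattern or
# "specification" enters the calculation.

sequence = "D" * 22 + "R" + "D" * 18   # the 41-election sequence above

def likelihood(seq, p_democrat):
    """Probability of this exact sequence if each election is an
    independent draw with probability p_democrat of a Democrat first."""
    prob = 1.0
    for outcome in seq:
        prob *= p_democrat if outcome == "D" else (1 - p_democrat)
    return prob

fair = likelihood(sequence, 0.5)     # unbiased: (0.5)**41, about 4.5e-13
biased = likelihood(sequence, 0.9)   # biased:   (0.9)**40 * 0.1, about 1.5e-3

print(f"P(sequence | unbiased) = {fair:.3e}")
print(f"P(sequence | biased)   = {biased:.3e}")
print(f"likelihood ratio       = {biased / fair:.3e}")   # about 3e9 in favour of bias

# With equal prior odds, the posterior odds equal the likelihood ratio,
# so the bias hypothesis comes out billions of times more probable.
```

Change the candidate hypotheses and only the likelihood function changes; at no point do we have to decide which of the many patterns the sequence happens to match.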

 

In fact the boot is on the other foot. The whole concept of choosing a pattern as a rejection area for the chance hypothesis depends on what alternative hypothesis is being considered. Suppose that instead of suspecting Caputo of favouring one party or another we suspect him of being lazy and simply not changing the order from one election to the next, with the occasional exception. The "random" hypothesis remains the same: he selects the party at random each time. The same outcome:

DDDDDDDDDDDDDDDDDDDDDDRDDDDDDDDDDDDDDDDDD

counts against the random hypothesis but for a different reason: it has only two changes of party. It falls into the rejection area "two or fewer changes of party". The string:

DDDDDDDDDDDDDDDDDDDDDDRRRRRRRRRRRRRRRRRRRR

would now count even more heavily against the random hypothesis (it contains only one change of party), whereas it would have been no evidence at all that Caputo was biased towards one party.

So now we have two potential patterns that the outcome matches, either of which could be used against the random hypothesis. How do we decide which one to use? On the basis of which alternative hypothesis better explains the outcomes that conform to the pattern.
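Here is a sketch of the same idea, using one purely hypothetical way to formalise the "lazy Caputo" alternative: a sticky model in which he keeps the previous election's order with some high probability (0.95 is just an illustrative number, as are the names below). Again the likelihoods are computed from the raw sequence; the "number of changes of party" pattern is simply the feature that this alternative hypothesis happens to reward.

```python
# A hypothetical "lazy" model: Caputo keeps the previous election's order
# with probability `stay`, otherwise switches. The first election is
# treated as a 50/50 draw. This is an illustrative formalisation only.

def lazy_likelihood(seq, stay=0.95):
    prob = 0.5                      # first election: either party equally likely
    for prev, curr in zip(seq, seq[1:]):
        prob *= stay if curr == prev else (1 - stay)
    return prob

def random_likelihood(seq):
    return 0.5 ** len(seq)          # under the random hypothesis every sequence is equally likely

seq_one_party = "D" * 22 + "R" + "D" * 18   # the original sequence (two changes of party)
seq_switch    = "D" * 22 + "R" * 19         # a D-then-R sequence like the second one (one change)

for name, seq in [("original", seq_one_party), ("D then R", seq_switch)]:
    ratio = lazy_likelihood(seq) / random_likelihood(seq)
    print(f"{name}: P(seq | lazy) / P(seq | random) = {ratio:.3e}")

# Both sequences favour laziness over randomness, and the D-then-R sequence
# (only one change of party) does so even more strongly, even though it is
# no evidence at all that Caputo favoured one party over the other.
```

Which "rejection area" looks relevant thus falls straight out of which alternative hypothesis we are entertaining, which is exactly the Bayesian point.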

The Bayesian approach is so compelling that Dembski himself inadvertently uses it elsewhere in the same chapter of The Design Revolution. When trying to justify the use of specification he writes “If we can spot an independently given pattern…. in some observed outcome and if possible outcomes matching that pattern are, taken jointly, highly improbable …., then it’s more plausible that some end-directed agent or process produced the outcome by purposefully conforming it to the pattern than that it simply by chance ended up conforming to the pattern.”


6 Responses to “William Dembski on Bayes”


  1. Neil Rickert August 20, 2011 at 5:39 am

    I have never been a fan of Bayesian methods, though I know about them.

    I see the problem with ID use of probabilities as a different one. If one is using Fisher’s methods or, more commonly, the Neyman Pearson methods which derive from Fisher, it is usually considered important that you design the statistical model before any random sampling is done.

    The trouble with the ID arguments is that they are based on a model that is constructed after the fact. Sometimes, as in retrospective studies, one has to make do. But it is usually appropriate to use conditional probabilities rather than direct probabilities. And that's where I see the main failing of the ID use of probabilities.

  2. Mark Frank August 20, 2011 at 6:02 am

    What’s your problem with Bayesian methods?

    Anyhow – my main concern is Dembski’s objection which he recently referred to – so presumably he still sticks to it.

    The problems with the argument from “CSI” seem to me very similar to some of the deep problems with pure Fisherian testing. Plus there are additional problems that uniquely belong to ID. Neyman Pearson goes some way to alleviating these problems as it does at least take account of alternative hypotheses provided you have a well defined parameter you are testing. But of course ID has no such well defined parameter.

  3. Neil Rickert August 20, 2011 at 1:50 pm

    What’s your problem with Bayesian methods?

    In terms of practice, they are hard to use.

    My bigger objection is they provide a universal explanation. It is often asserted by philosophers that learning by animals or children is due to using Bayesian methods. I find that highly implausible.

    The trouble with the ID use of probability is that it wants to draw conclusions from highly improbable events that occur in circumstances where every event is highly improbable. Consider the Caputo case that you describe. Any sequence is just as improbable.

    The ID people say: A highly improbable event occurred, in circumstances where every event is highly improbable. Therefore an intelligent agent must be involved.

    In the Caputo case, however, we say: A highly improbable event occurred in circumstances where every event is improbable. And since the event happened to benefit Caputo, therefore we suspect that selection for benefit was involved. The ID people, by contrast, are trying to rule out selection for benefit (natural selection). And, worse still for the ID argument, we cannot even estimate probabilities for their alternative hypothesis (that an external agent did it).

    • Mark Frank August 21, 2011 at 4:45 pm

      Bayesian methods are hard to use (although becoming much easier with modern computers) but that strikes me as beside the point. Bayesian methods are the only ones that even try to answer the question: what is the probability that this hypothesis is true? Who cares if another method is easier if it gives the wrong answer! Alternative methods such as Fisher and NP are heuristics that pragmatically work in certain limited situations. Detecting design in the origin of life (OOL) is certainly not one of those situations.

  4. Flint August 20, 2011 at 11:12 pm

    One problem I have with the Caputo case is that there are many districts nationally that have been "safe" districts for one party or the other for a very long time. Redistricting is generally applied in such a way as to make certain districts even safer. So we can't tell, just from the pattern itself, whether it is so directional, or even so stable, through coincidence or through corrupt manipulation; we need a good deal of additional information about the voters in the district themselves. Maybe we could do a comparison with other districts with very similar demographics and stability.

    What statistics would we use, if the same runner always wins the race, to determine whether that runner is genuinely faster, or whether the person recording the race results is corrupt?

  5. Petrushka August 23, 2011 at 7:48 pm

    I have a problem with the way ID proponents use the big number argument.

    They are willing to claim evolution is unlikely due to improbability, but unwilling to consider the problem of big numbers faced by a putative designer.

    We know of no way, even in principle, of efficiently modelling protein folding. The only computer that can fold a protein without doing massive amounts of calculation is chemistry itself.

    This is magnified when you consider the sheer number of calculations that would be required to design a living thing. And that would be dwarfed by the ongoing problem of fitting living things into a constantly changing ecosystem. I'd like to see ID proponents demonstrate the feasibility of implementing design by any means other than Darwinian algorithms.

    They could start by demonstrating a method other than GA that would solve the traveling salesman problem.

    I’ll make the challenge simple. Design of living things is impossible by any means other than evolution. Demonstrate this is untrue by providing a counterexample.

