(Note: my original article title was inaccurate as to the position discussed, so I've changed it. The original title was "Is Trinitarian Theism Necessarily More Improbable Than Basic Theism?" The gist of the article was always disputing the notion that trinitarian theism must necessarily be no more probable than basic theism, however, i.e. necessarily as-or-less probable.)

Our friend David Marshall, author of several books on Christian cultural apologetics, writing at his blog Christ The Tao, has recently been responding to the "20+ Questions For Theists" asked by naturalistic atheist Jeffery Jay Lowder (another longtime correspondent of mine, from back in the days before weblogs) as part of Jeff's "Evidential Arguments For Naturalism" series at the Secular Outpost (i.e. Internet Infidels) blog.

(Whew, that was kind of a long provenance trail. Sorry.)

Since I don't already have enough large projects I'm working on, naturally I'm thinking of also working up *another* series to analyze his arguments (unless another Cadrist beats me to it perhaps). This is probably evidence of me being crazy, but what the hell. {g} Jeff has always been a fine opponent and I'd be glad to work with him again.

Anyway, while commenting on David's post, Jeff wrote this:

"Since Christian theism entails theism, it cannot be more probable than (generic) theism. If B entails A and A is improbable, then B or any other set of beliefs which entail A are necessarily improbable."

My immediate reply was, and is, that this seems like saying neo-Darwinian gradualism entails biological evolutionary theory, so it cannot be more probable than generic b.e.t. (and maybe less so, proportionate to the number of details, thanks to multiplication of probabilities).

I know there are some opponents of biological evolutionary theory (usually theists) who go this route, but I'm pretty sure proponents of the neo-Darwinian synthesis don't accept that as a principled reply. I'm also sure they have good reasons not to do so! (My own technical difficulties with neo-Darwinian gradualism aren't quite that sort.) Is this a sauce for geese and ganders situation?--if so, does it count in both directions?--if not, why not?

Eleven pages of discussion after the jump!

As anyone who has taken junior-high mathematics should know, when fractional probabilities (each less than 1) are multiplied together, the result will always be less than any of the original fractions. 0.9 (or 90%) x 0.8 x 0.7 doesn't equal more than 90%, or any of those fractions individually, or an average of those fractions (80%), or even the next tenth fraction lower (60%).

It equals 0.504, barely over 50%.
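The arithmetic is easy to check directly (a trivial sketch, using just the figures from the example above):

```python
# Multiplying fractional probabilities: each factor below 1 shrinks the product.
p = 0.9 * 0.8 * 0.7
print(round(p, 3))  # 0.504 -- just over 50%, lower than any single factor

# The product is always smaller than the smallest factor (when all are below 1).
assert p < min(0.9, 0.8, 0.7)
```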

Engineers and a number of other scientists and technicians (including casinos) use this property to calculate how often (on average!) we can expect a particular result from a set of random or quasi-random events, when those events are independent of one another (or when the proper conditional probabilities are used for dependent events).

So, to give a popular example, casinos know that a pair of six-sided dice on a perfectly randomized throw (or near enough to never mind) can be expected to land with two six-pip faces showing (for a total of twelve pips) an average of once every 36 rolls. That isn't *guaranteed*, but that's what the casino is willing to bet will happen, on average, over any practically super-large number of dice rolls. There may be streaks where it happens much more often (which bettors may bet on), and streaks where it happens much less often (which bettors may also bet on!), but as long as no one is tampering with the randomness of the dice the casino is willing to bet on that result in the long run. (Which is why bettors like to try to tamper with the dice!--by throwing them in skillful ways, for example.) Consequently, the house will pay winning bettors at odds worse than the mathematically fair 35 to 1 (one win per 35 losses out of every 36 rolls, on average). Maybe they'll pay 25 to 1, for example. That way there's a house edge, and the casino will definitely (or almost certainly) make a profit (and a pretty significant one) over the long run.
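That casino arithmetic can be sketched in a few lines (the 25-to-1 payout is just the hypothetical figure from the example above; real tables pay differently):

```python
from fractions import Fraction

# Two fair six-sided dice: 36 equally likely outcomes, one of which is double sixes.
p_boxcars = Fraction(1, 6) * Fraction(1, 6)
print(p_boxcars)  # 1/36

# Expected value per $1 bet at a hypothetical 25-to-1 payout:
# win $25 with probability 1/36, lose $1 with probability 35/36.
ev = p_boxcars * 25 - (1 - p_boxcars) * 1
print(ev)  # -5/18: the house keeps roughly 28 cents per dollar wagered, long-run
```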

Are there other applications for this principle? Well, yes and no. Let's say someone asks what the probability is that I will shatter a basketball backboard with a dunk. That probability is literally zero unless I dunk, but (other things being equal) will be 100% certain if I dunk with X-amount of force at the proper angle, depending on the fracture strength of the backboard material (which will be affected by other factors such as stress from having withstood previous dunks, or players hanging off the rim, etc.)

But what is the probability that I can dunk the ball in the proper way to cause a shatter? Good luck trying to figure that out! That would require running numerous repeated tests to get some kind of clear idea of how effective I am at even dunking the ball in the first place (because if I miss there will definitely be no shatter), much more so how effective I am at generating the forces necessary to shatter a board of such-and-such composition.

Once we've run a sufficiently large number of tests to get an idea of how the data tends to repeat, we could figure out the mathematical probability of my being able to shatter the backboard--*all other things being equal!* Which is an important qualification, because that probability estimate won't be worth much if I spend two months working through Beachbody's "Insanity" fitness program!

So, figuring up a legitimate percentage chance of my shattering a backboard with a basketball dunk isn't strictly impossible, but it's very impractical; not least because the system being examined isn't something with inherently and rigidly repeatable characteristics--unlike rolling a pair of dice which, while there are ways to influence the result outside the system of the dice, features discretely concrete constraints built into the system of their behavior.

But what if a person (myself or anyone else) wants to form an opinion of the likelihood that I will be able to shatter the backboard? Is there nothing that can help with that?

Of course there is. We can build intuitive estimates of likelihood before the attempt, factoring in past data so far as we're aware of it; and we can build intuitive estimates of how likely it *was* after the attempt, in order to get some feeling of appreciation about how improbable the result was. Which in turn can give us grounds for suspecting we've missed some data in our estimate. "Wait, Jason just finished the 'Insanity' fitness program?! Well, that explains why he can shatter a backboard now!" (I'll take a moment to clarify that I haven't finished that fitness program, nor can I shatter a backboard, nor am I even interested in doing so.)

But notice that I was talking about **intuitions** and **feelings**. I don't mean that the result was thereby irrational, because feelings are data, too, and we can use them for reasoning. I do mean that such data are not even *remotely* proper for arriving at a mathematical probability of success or failure.

At most what can happen is that, for purposes of expression, we may assign a feeling of intuitive likelihood a corresponding percentage strength based quite literally (as humorist Pat McManus once wrote) on a study where (in effect) we write down a bunch of numbers until we get one that seems right. A feeling of "pretty sure" might be assigned a representative chance of 80%, for example; or of 86% if we're being funny about being particular!

But, can't those numbers be used in some way that legitimately corresponds with how we ought to regard the likelihood of a complex situation being true?

Well, yes and no.

Yes, it's true that in a rigorous consideration before (or after) the fact of the likelihood or unlikelihood of a complex event occurring, it's a good idea to keep in mind that in order for Z to happen, first A has to happen, and then B which can’t happen without A, and then C, and so on; each of which may (or may not!) have some intrinsic probability or improbability of happening. (The probability per se of someone leaning over to flip both dice to 6 and then adding a third die flipped to 1, for a total of 13, is utterly incalculable, and so can have neither intrinsic probability nor improbability, not even 50%.) So the particular result Z would, *insofar as such an estimate goes*, always be significantly more unlikely than any of its preceding factors. That’s a result of cautious conservatism, which is usually but not always a good idea.

But also no, there are often huge qualifications against this sort of inferential likelihood estimate applying. If we’re talking about a situation of complex detail where the total result isn’t a case of this and then that and then the other thing happening, then even pseudo-multiplication of pseudo-probabilities isn’t the proper way to arrive at a result of likelihood of the truth of the composite result. The whole result would be better intuitively graded by the least expected likelihood of an element. There isn’t any mathematical way to assess the probability that if God exists God is three distinct Persons (not mere modes) of one single substantial reality, but to the extent that this seems unlikely or impossible a person will reasonably reject trinitarian theism as being certainly or probably untrue. If that detail isn’t included in the proposal, then the strength of the proposal will stand on what is the most objectionable detail of the proposal left over.

But on the other hand, someone can reasonably say, “I don’t believe it’s possible or probable for any kind of theism to be true, and supernaturalistic theism seems even less likely to me, and trin-theism even less likely than that, so I feel like trinitarian theism is even less likely than naturalistic theism to be true.” Or again someone might reasonably say, “I don’t believe it’s possible or probable for any kind of theism to be true, but if I believed theism to be certainly or probably true I’d believe supernaturalistic theism to be more likely true than naturalistic theism (i.e. pantheism). But trinitarian theism sounds too implausible or impossible to me, so I wouldn’t believe that even if I believed supernat-theism to be certainly or probably true.”

Now, this happens to be what I expect is probably Jeff’s position. But notice that my expectation about what is probably Jeff’s position *has only the most superficial resemblance to mathematical probability!* It isn’t mathematical probability at all, even if I assign a percentage representing the strength of my expectations about what Jeff meant.

No doubt (or at least hopefully!) I arrived at that expectation about what Jeff meant by drawing an inference from prior data where Jeff was talking about what he believes and doesn’t believe regarding those topics. But if I felt 90% sure he meant that somewhere, and 95% sure he meant that somewhere else, and 100% sure he meant that in a third place, would it be even remotely proper for me to estimate the likelihood he’s talking about the same position now by multiplying together the pseudo-probabilities I assigned to my expectations about what he meant earlier?! Should I think now that if I was those levels of sure about what he meant before, I ought to be 86% sure (or the felt-strength equivalent thereof) now!? Of course not!

Should I be 100% sure now on such grounds I just gave? Nope.

Should I be 100% sure now on such grounds if I happened to be 100% sure what he meant the other three times? Only if I have some reason to discount the possibility that he may have changed his mind about one or more portions of his belief since the last time I read him. And if I regard him changing his mind as being improbable, should I factor *that* mathematically into my expectation? I’ll either end up with a “probability” result proportionately smaller even than my small expectation that he changed his mind since then (for example 100% * 100% * 100% * 1% that he changed his mind = 1% that he still means the same thing now!), or with a result higher (and ridiculously higher) than 100% certainty! (100% * 100% * 100% / 1% = 10,000%!)
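Running those same illustrative figures through bare arithmetic shows how badly either treatment behaves (these are the example's numbers, not a recommended calculation):

```python
# Three past certainties (100%) plus a 1% chance of a changed mind.
# Multiplying everything together yields near-total doubt:
multiplied = 1.0 * 1.0 * 1.0 * 0.01
print(multiplied)  # 0.01 -- only 1% sure he still means the same thing?!

# Dividing instead yields impossible over-certainty:
divided = 1.0 * 1.0 * 1.0 / 0.01
print(divided)  # 100.0 -- i.e. 10,000% "certainty"!
```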

Mathematical probability simply isn’t the right way for me to estimate whether Jeff means “I don’t believe it’s possible or probable for any kind of theism to be true, but if I believed theism to be certainly or probably true I’d believe supernaturalistic theism to be more likely true than naturalistic theism (i.e. pantheism). But trinitarian theism sounds too implausible or impossible to me, so I wouldn’t believe that even if I believed supernat-theism to be certainly or probably true.” And neither can it properly describe the resulting less-probable feeling of trinitarian theism being true compared to sheer theism being true.

It may seem there is a train of reducing probability, but if (as many people do) Jeff happens to believe that supernaturalistic theism would be very probable or certain if theism is true, then that would still be no mathematical ground for thinking trinitarian theism was more probably true after all--even though the trin-theism doctrinal set also involves supernat-theism being true.

Yet there *is* a properly legitimate feeling involved in inductively comparing likelihoods here, so that in some sense Jeff can have at least an initial expectation (and maybe even more than an initial expectation) that trinitarian theism is even less likely to be true than supernaturalistic theism, and maybe also supernaturalistic theism than sheer theism.

And now anyone familiar with the field (I expect probably Jeff included!) will be replying “Bayesian Theory” or “Bayes’ Theorem” (or something similar).

But Bayesian Theory isn’t about multiplying and dividing fractions in order to reach a mathematical probability estimate.

That’s how it is often popularly represented, and so we see people like (to take an example from my own side of the aisle) Richard Swinburne lecturing audiences about how Bayes’s Theorem can be used to demonstrate that there is a greater than 90% probability that Jesus was resurrected from the grave. (Thomas Bayes himself was not only a philosopher and mathematician but a Christian preacher and minister.) He ought to know better than to use Bayes that way: he literally wrote (or rather edited) the book on it, and in his monograph for that collection (*Bayes’s Theorem*, Oxford University Press, 2002) he acknowledges that the Theorem shouldn’t be used as a math operation. (A point that Elliott Sober, in his own article for the book, hammers home repeatedly.)

It’s easy to try to do so, because it *looks* so temptingly like a math operation; and because there are in fact legitimate math operations which look quite similar to it:

P(h|e&k) = (P(e|h&k) P(h|k)) / P(e|k)

Can’t those elements (such as P(h|k)) be assigned fractional values of percentage likelihood? Yes, just like we can assign felt intuitive likelihoods or unlikelihoods a fraction representing how strongly we feel about the estimate.

Well, and can’t we also assign fractional values of percentage likelihood to the sub-elements, like P(h)? Yep, we can do that, too.

Well, doesn’t that mean we should be arriving at P(e|k) by dividing the fractional percentage of P(e) by the fractional percentage of P(k)? And similarly for the other elements? And doesn’t that mean we ought to be multiplying the resulting percentage of P(e|h&k) by P(h|k) and then dividing the result by the fractional percentage of P(e|k) in order to arrive at P(h|e&k)?!

The hell no.

P(e|k), for example, is supposed to represent the expected likelihood of new evidence (e) being true, or having been obtained, in light of current evidence (k), regardless of whether hypothesis (h) is true or not. In other words, how well does new evidence (e) fit with current evidence (k)?

But if the new evidence fits well, or doesn’t fit well, wouldn’t that make the hypothesis more or less likely to be true respectively? No; it might only mean that the new evidence is faulty, or in other ways shouldn’t be included with current evidence. P(e|k), taken by itself, is *irrelevant* to (h) being true.

But the formula *looks* like we’re supposed to divide by the P(e|k) fraction to affect our estimation of whether (h)ypothesis is true granting new (e)vidence and (k)urrent evidence. (I don’t know why “c” wasn’t used.)

*But it isn’t really a mathematical formula!* This can be demonstrated by supposing that we find it is impossible for new (e) to have really occurred granting that current evidence (k) did occur. An impossible probability is 0%.

If Bayes’ Theorem were really a mathematical operation of multiplying and dividing fractions to get a result, any impossible new (e) in relation to current evidence (k) would instantly mean that the probability of any hypothesis featuring the truth of both new and current evidence must not only be 100% (i.e. certainly true) but infinitely probable! (Because dividing anything, even the smallest positive fraction, by zero yields an unbounded, undefined result.)

Which is utterly ridiculous.

It’s still utterly ridiculous even if we’re only talking about pseudo-probabilities, because the underlying proposed logical relationship would be the same.

It’s still utterly ridiculous even if we’re only talking about P(e|k) being improbable instead of impossible: the lower that probability, not only would (h) become more certain by comparison, but for very low probabilities the results would still exceed, and even greatly exceed, 100%.

The fact that multiplying any two fractions other than 100% certainty together “above” the “dividing sign” would result in a lesser fraction which might keep the fraction “below” the “dividing sign” from producing a result greater than 100%, doesn’t matter in the least. (Although that would bring up the whole other problem of those two logical elements not really being related to each other that way, either. But for relative brevity I’ll skip that discussion.)
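The breakdown described above can be exhibited directly. Here is a deliberately naive "fraction arithmetic" reading of the formula (the function name is mine, and this is the caricature being criticized, not a recommended use of Bayes):

```python
def naive_bayes_arithmetic(p_e_given_hk, p_h_given_k, p_e_given_k):
    """Bayes' formula read as bare fraction arithmetic: P(e|h&k) * P(h|k) / P(e|k)."""
    return p_e_given_hk * p_h_given_k / p_e_given_k

# Modest inputs already inflate past certainty when P(e|k) is very small:
print(naive_bayes_arithmetic(0.9, 0.5, 0.01))  # 45.0 -- i.e. "4,500% probable"!

# And an "impossible" P(e|k) = 0 breaks the arithmetic outright:
try:
    naive_bayes_arithmetic(0.9, 0.5, 0.0)
except ZeroDivisionError:
    print("undefined: division by zero")
```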

What Bayes’ Theorem attempts to do is to describe (better than a competing account proposed by David Hume around the same time) what the process of inductive reasoning logically looks like, and specifically how we adjust our estimates of the likelihood of the truth of an idea when we receive new evidence. That’s very important and kind of useful (inasmuch as a description of what all mentally healthy humans already do every day is kind of useful), but that’s all.

The process actually goes like this:

Assuming I already have some opinion of how likely hypothesis (h) is, based on current evidence (or evidence set) (k)...

...and then I run across new evidence (or evidence set) (e)...

...do I think it is likely that the new evidence would have been found given the truth of h and k? Then I adjust my estimate of P(h|k) upward, and that’s my new P(h|e&k).

On the other hand, if new evidence (e) doesn’t fit h&k very well, I would adjust my expectation that (h) is true downward, and that becomes my new P(h|e&k) estimate instead.

At the same time I ought to consider, without regard to whether (h) is true or false, whether (e) fits with (k). If it doesn’t, then that might be a problem for (h) or it might not. But if I don’t think P(e|k) is very high in itself yet I find (h) helps (e) fit into (k), then that ought to definitely increase my estimation of (h) being true: hypothesis (h) would in that case solve what would otherwise be an evidential conflict between (e) and (k).

On the other hand, if I decide (e) fits well with (k) even without regard to (h), I shouldn’t really use (e) to increase my estimation of (h) likely being true. Although I could bump up my estimate of (h) being true a little maybe if (e) looks proportionately even a little more likely to have occurred with (h) being true than without (h) being true.

On yet *another* hand, however, if (e) looks improbable to me granting the truth of (k) and (h), yet (e) fits (k) pretty well (to whatever extent), then (e) has strength *against* (h) being true, and I ought to revise my estimate of (h)’s likelihood downward. But then again, if (e) doesn’t fit well with (k), and (h) doesn’t seem to affect the matter one way or the other, then maybe the problem is with (e) after all and I had better recheck my new data; in any case I shouldn’t hold that against (h). But then *again*, if (e) doesn’t fit with (k), and especially doesn’t fit with (k) assuming the truth of (h), then that counts in an important (if backhanded) way toward (h) being actually true! (Because (h) predicted the new data wouldn’t fit well with the current data.)
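The qualitative process walked through above can be caricatured as a lookup of directions rather than an equation. The function name and labels here are just my paraphrase of the paragraphs above, not a standard algorithm:

```python
def qualitative_update(e_fits_hk: bool, e_fits_k: bool,
                       h_predicted_misfit: bool = False) -> str:
    """Which direction to adjust confidence in hypothesis (h) on new evidence (e),
    following the informal rules sketched in the text (not Bayes-as-arithmetic)."""
    if e_fits_hk and not e_fits_k:
        return "raise strongly"   # h resolves a conflict between e and k
    if e_fits_hk and e_fits_k:
        return "raise slightly"   # e fits anyway; h earns little extra credit
    if not e_fits_hk and e_fits_k:
        return "lower"            # e counts against h
    # e fits neither k nor h&k:
    if h_predicted_misfit:
        return "raise"            # backhanded confirmation: h predicted the misfit
    return "recheck the data"     # maybe the problem is with e itself

print(qualitative_update(e_fits_hk=True, e_fits_k=False))  # raise strongly
print(qualitative_update(e_fits_hk=False, e_fits_k=False,
                         h_predicted_misfit=True))         # raise
```

Note that nothing here multiplies or divides fractions; the "inverse" relationships between P(e|h&k) and P(e|k) show up only as branch directions, not proportionate strengths.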

If you’re saying, “Wait, but those aren’t inversely proportionate relationships between P(e|h&k) and P(e|k)!”--then congratulations for being able to follow all of that out better than most people can! And right, yep: those relationships are kind-of inverse, but not necessarily proportionate in strength to one another. Because *it isn’t a mathematical operation!*

The upshot is that I don’t think there is any system of analysis where, simply by virtue of the system (per se), trinitarian theism must *necessarily* be less probable than basic theism. Or as Jeff put it, “Since Christian theism entails theism, it cannot be more probable than (generic) theism. If B entails A and A is improbable, then B or any other set of beliefs which entail A are necessarily improbable.”

Multiplication of hypotheses (or let us say, proposed doctrinal details) *for no reason* would be grounds for someone to legitimately regard the result as being necessarily more unlikely than a simpler hypothesis (per Ockham’s Razor), but not necessarily more improbable in a fractionally mathematical sense (except in a trivially pseudo-mathematical way). But no proponent of theism at any time ever regarded the extra details of this-or-that variant of theism (especially including trinitarian theism) as being included for no reason. The details have always historically been included for what the proponents thought were important and sufficient reasons.

Nor can multiplication of probabilities always be compounded for a resulting improbability of the proposals. That only works in certain special cases, and the question of trin-theism vs. basic theism isn’t one of those cases: such a question isn’t like figuring out the probability of rolling three twelves in a row compared to rolling one twelve on any given roll, for example, or even like figuring out the probability of me shattering a backboard with a basketball dunk. It isn’t even like figuring out the probability of getting (and keeping and sufficiently spreading) 16 morphological mutations between chimps and humans within a six million year span. It’s more like the question of whether neo-Darwinian gradualism is more or less probable than some simpler process of biological evolutionary theory.

Someone might intuitively feel, *before going to the analysis*, that the more detailed proposal should be more unlikely than a simpler related (or perhaps directly alternate) proposal. And that can be a reasonable first-response suspicion. But it isn’t a necessary ground for the more complex proposal being really more improbable than the simpler one. If the more complex proposal turns out to avoid or outright solve more logical problems, and fits together better with established data elsewhere, than a simpler proposal would, then the more complex proposal ought to be regarded as more plausibly true than the simpler. If basic theism fails to sufficiently address problems of moral grounding, divine consciousness in relation to an other, and process feasibility of creating not-God systems and entities (like spatio-temporal Nature, and creatures within Nature); and if these are serious problems for regarding basic theism as unlikely in proportion; but if trinitarian theism addresses these problems better than basic theism does; then trinitarian theism ought to be regarded as proportionately more probable (or at least more plausible, even if still necessarily impossible for other reasons) than basic theism. Similarly, the reason we have neo-Darwinian gradualism currently is because more basic forms of b.e.t. (including Darwin’s own version) ended up having crippling logical and evidential problems which neo-Dar-grad addresses relatively more successfully in various ways.

Even if the doctrines were proposed ad hoc to save the hypothesis (as is sometimes although ignorantly claimed about trin-theism; and as was in fact historically the case in regard to some of the extra details of the neo-Darwinian gradualist synthesis), so long as they address the problems better the result ought to be regarded as more likely than the more basic alternative. But if the doctrines were already arrived at for other reasons independently of the problem, that would be more impressive for a rational comparison of likelihoods between the two proposals. And more especially so again, if the reasons for arriving at the more complex details turned out to be good reasons to this or that extent (and/or turned out to be experimentally discoverable after all: the extra details, even if originally proposed ad hoc, were confirmed.)

Consequently, when my friend Jeff replies, “if you don't believe me [about trinitarianism being necessarily never more than as probable as basic theism], check the math,”--well, if he means the math of a logical comparison between various kinds of theism and orthodox trinitarian theism, I’ve already very extensively done that on the metaphysical side of things (where I find the logical math adds up better to theism than atheism, too--by which I don’t mean in a mathematical fashion in any case!); and I continue to check the math in other ways as well. I don’t settle for a mere fractional multiplication of hypotheses, if that’s what he means; and when I run a Bayesian analysis I do so in a way that allows the possibility of increasing or decreasing my estimate of the likelihood of this or that hypothesis, rather than in a way which can only lead to unreal results in principle (i.e. by treating the elements as fractions in a mathematical formula). And I find the results still point more toward ortho-trin being true than basic theism only.

This is also why I disagree with Jeff when he follows up with the acknowledgment, “And yes, for the same reason, I think your counter-example is correct: neo-Darwinian gradualism entails evolutionary theory, so it cannot be more probable than generic ET.”

I do think neo-Darwinian gradualism has some severe and even crippling technical problems, both in principle and in regard to evidential data; and I think some of those problems arise precisely from details specific to neo-Dar-grad. And those problems affect how likely I now regard neo-Darwinian gradualism to be true.

But I also know enough about neo-Dar-grad (which I grew up believing, as did all the Christians I most admired while growing up) to say that I cannot imagine any significant way in which it could be accurately described as being necessarily no more probable than generic biological evolutionary theory. It is *very* clearly a superior theory to less detailed alternatives (so far), and between neo-Dar-grad and general b.e.t. I regard the former as being substantially and significantly more likely to be true.