Is Trinitarian Theism Necessarily No More Probable Than Basic Theism?
(Note: my original article title was inaccurate as to the position discussed, so I've changed it. The original title was "Is Trinitarian Theism Necessarily More Improbable Than Basic Theism?" The gist of the article, however, was always to dispute the notion that trinitarian theism must necessarily be no more probable than basic theism, i.e. necessarily as-or-less probable.)
Our friend David Marshall, author of several books on Christian cultural apologetics, writing at his blog Christ The Tao, has recently been responding to the "20+ Questions For Theists" asked by naturalistic atheist Jeffery Jay Lowder (another longtime correspondent of mine, from back in the days before weblogs) as part of Jeff's "Evidential Arguments For Naturalism" series at the Secular Outpost (i.e. Internet Infidels) blog.
(Whew, that was kind of a long provenance trail. Sorry.)
Since I don't already have enough large projects I'm working on, naturally I'm thinking of also working up another series to analyze his arguments (unless another Cadrist beats me to it perhaps). This is probably evidence of me being crazy, but what the hell. {g} Jeff has always been a fine opponent and I'd be glad to work with him again.
Anyway, while commenting on David's post, Jeff wrote this:
"Since Christian theism entails theism, it cannot be more probable than (generic) theism. If B entails A and A is improbable, then B or any other set of beliefs which entail A are necessarily improbable."
My immediate reply was, and is, that this seems like saying neo-Darwinian gradualism entails biological evolutionary theory, so it cannot be more probable than generic b.e.t (and maybe less so, proportionate to the number of details thanks to multiplication of probabilities).
I know there are some opponents of biological evolutionary theory (usually theists) who go this route, but I'm pretty sure proponents of the neo-Darwinian synthesis don't accept that as a principled reply. I'm also sure they have good reasons not to do so! (My own technical difficulties with neo-Darwinian gradualism aren't quite that sort.) Is this a sauce for geese and ganders situation?--if so, does it count in both directions?--if not, why not?
Eleven pages of discussion after the jump!
As anyone who has taken junior-high mathematics should know, when fractional probabilities are multiplied together the result will always be less than any of the original fractions (so long as none of them is a flat 100%). 0.9 (or 90%) x 0.8 x 0.7 doesn't equal more than 90%, or any of those fractions individually, or an average of those fractions (80%), or even the next tenth fraction lower (60%).
It equals 0.504, just over 50%.
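That multiplication can be checked in a couple of lines of Python (a minimal sketch, using the figures from the example above):

```python
# Multiplying fractional probabilities always shrinks the result
# (so long as no factor is a flat 1.0).
probs = [0.9, 0.8, 0.7]

product = 1.0
for p in probs:
    product *= p

print(round(product, 3))               # 0.504 -- just over 50%
print(min(probs))                      # 0.7   -- the product is below even the smallest factor
print(round(sum(probs) / len(probs), 3))  # 0.8 -- and well below the average
```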
Engineers and a number of other scientists and technicians (including casinos) use this property to calculate how often (on average!) we can expect a particular result of a set of random or quasi-random events when those events are connected dependently with one another in various ways.
So, to give a popular example, casinos know that a pair of six-sided dice on a perfectly randomized throw (or near enough to never mind) can be expected to land with both six-pip faces up (for a total of twelve pips) an average of once every 36 rolls. That isn't guaranteed, but that's what the casino is willing to bet will happen, on average, over any practically super-large number of dice rolls. There may be streaks where it happens much more often (which bettors may bet on), and streaks where it happens much less often (which bettors may also bet on!), but as long as no one is tampering with the randomness of the dice the casino is willing to bet on that result in the long run. (Which is why bettors like to try to tamper with the dice!--by throwing them in skillful ways for example.) Consequently, the house will pay a winning bettor something worse than the fair return of 36 times the bet for a 1-in-36 chance. Maybe they'll pay 25 to 1, for example. That way there's a house edge, and the casino will definitely (or almost certainly) make a profit (and a pretty significant one) over the long run.
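The casino arithmetic above can be sketched in a few lines (a toy calculation; the 25-to-1 payout is the hypothetical figure from the example, not any real casino's line):

```python
from fractions import Fraction

# Two fair six-sided dice: 6 x 6 = 36 equally likely outcomes,
# only one of which (six and six) totals twelve pips.
p_twelve = Fraction(1, 6) * Fraction(1, 6)

# A fair payout for a 1-in-36 chance returns 36x the stake (35-to-1 plus
# the stake back). Paying only 25-to-1 is what creates the house edge.
payout_odds = 25
stake = 1
expected_value = p_twelve * payout_odds - (1 - p_twelve) * stake

print(p_twelve)               # 1/36
print(float(expected_value))  # about -0.278: the bettor loses ~28 cents per dollar, long run
```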
Are there other applications for this principle? Well, yes and no. Let's say someone asks what the probability is that I will shatter a basketball backboard with a dunk. That probability is literally zero unless I dunk, but (other things being equal) will be 100% certain if I dunk with X-amount of force at the proper angle, depending on the fracture strength of the backboard material (which will be affected by other factors such as stress from having withstood previous dunks, or players hanging off the rim, etc.)
But what is the probability that I can dunk the ball in the proper way to cause a shatter? Good luck trying to figure that out! That would require running numerous repeated tests to get some kind of clear idea of how effective I was at even dunking the ball in the first place (because if I miss there will definitely be no shatter), much more so how effective I am at generating the forces necessary to shatter a board with such-n-such composition.
Once we've run a sufficiently large number of tests to get an idea of how the data tends to repeat, we could figure out the mathematical probability of my being able to shatter the backboard--all other things being equal! Which is an important qualification, because that probability estimate won't be worth much if I spend two months working through Beachbody's "Insanity" fitness program!
So, figuring up a legitimate percentage chance of my shattering a backboard with a basketball dunk isn't strictly impossible, but it's very impractical; not least because the system being examined isn't something with inherently and rigidly repeatable characteristics--unlike rolling a pair of dice which, while there are ways to influence the result outside the system of the dice, features discrete, concrete constraints built into the system of their behavior.
But what if a person (myself or anyone else) wants to form an opinion of the likelihood that I will be able to shatter the backboard? Is there nothing that can help with that?
Of course there is. We can build intuitive estimates of likelihood before the attempt, factoring in past data so far as we're aware of it; and we can build intuitive estimates of how likely it was after the attempt, in order to get some feeling of appreciation about how improbable the result was. Which in turn can give us grounds for suspecting we've missed some data in our estimate. "Wait, Jason just finished the 'Insanity' fitness program?! Well, that explains why he can shatter a backboard now!" (I'll take a moment to clarify that I haven't finished that fitness program, nor can I shatter a backboard, nor am I even interested in doing so.)
But notice that I was talking about intuitions and feelings. I don't mean that the result was thereby irrational, because feelings are data, too, and we can use them for reasoning. I do mean that such data are not even remotely proper for arriving at a mathematical probability of success or failure.
At most what can happen is that, for purposes of expression, we may assign a feeling of intuitive likelihood a corresponding percentage strength based quite literally (as humorist Pat McManus once wrote) on a study where (in effect) we write down a bunch of numbers until we get one that seems right. A feeling of "pretty sure" might be assigned a representative chance of 80%, for example; or of 86% if we're being funny about being particular!
But, can't those numbers be used in some way that legitimately corresponds with how we ought to regard the likelihood of a complex situation being true?
Well, yes and no.
Yes, it's true that in a rigorous consideration before (or after) the fact of the likelihood or unlikelihood of a complex event occurring, it's a good idea to keep in mind that in order for Z to happen, first A has to happen, and then B which can’t happen without A, and then C, and so on, each of which may (or may not!) have some intrinsic probability or improbability of happening. (The probability per se of someone leaning over to flip both dice to 6 and then adding a die flipped to 1 for a total of 13 is utterly incalculable, and so can have neither intrinsic probability nor improbability, not even 50%.) So the particular result Z would, insofar as such an estimate goes, always be significantly more unlikely than any of its preceding factors. That’s a result of cautious conservatism, which is usually but not always a good idea.
But also no, there are often huge qualifications against this sort of inferential likelihood estimate applying. If we’re talking about a situation of complex detail where the total result isn’t a case of this and then that and then the other thing happening, then even pseudo-multiplication of pseudo-probabilities isn’t the proper way to arrive at a result of likelihood of the truth of the composite result. The whole result would be better intuitively graded by the least expected likelihood of an element. There isn’t any mathematical way to assess the probability that if God exists God is three distinct Persons (not mere modes) of one single substantial reality, but to the extent that this seems unlikely or impossible a person will reasonably reject trinitarian theism as being certainly or probably untrue. If that detail isn’t included in the proposal, then the strength of the proposal will stand on the most objectionable detail that remains.
But on the other hand, someone can reasonably say, “I don’t believe it’s possible or probable for any kind of theism to be true, and supernaturalistic theism seems even less likely to me, and trin-theism even less likely than that, so I feel like trinitarian theism is even less likely than naturalistic theism to be true.” Or again someone might reasonably say, “I don’t believe it’s possible or probable for any kind of theism to be true, but if I believed theism to be certainly or probably true I’d believe supernaturalistic theism to be more likely true than naturalistic theism (i.e. pantheism). But trinitarian theism sounds too implausible or impossible to me, so I wouldn’t believe that even if I believed supernat-theism to be certainly or probably true.”
Now, this happens to be what I expect is probably Jeff’s position. But notice that my expectation about what is probably Jeff’s position has only the most superficial resemblance to mathematical probability! It isn’t mathematical probability at all, even if I assign a percentage representing the strength of my expectations about what Jeff meant.
No doubt (or at least hopefully!) I arrived at that expectation about what Jeff meant by drawing an inference from prior data where Jeff was talking about what he believes and doesn’t believe regarding those topics. But if I felt 90% sure he meant that somewhere, and 95% sure he meant that somewhere else, and 100% sure he meant that in a third place, would it be even remotely proper for me to estimate the likelihood he’s talking about the same position now by multiplying together the pseudo-probabilities I assigned to my expectations about what he meant earlier?! Should I think now that if I was those levels of sure about what he meant before, I ought to be 86% sure (or the felt-strength equivalent thereof) now!? Of course not!
Should I be 100% sure now on such grounds I just gave? Nope.
Should I be 100% sure now on such grounds if I happened to be 100% sure what he meant the other three times? Only if I have some reason to discount the possibility that he may have changed his mind about one or more portions of his belief since the last time I read him. And if I regard him changing his mind as being improbable, should I factor that mathematically into my expectation? I’ll either end up with a “probability” result proportionately smaller even than my small expectation he changed his mind since then (for example 100% * 100% * 100% * 1% that he changed his mind = 1% that he still means the same thing now!), or with a result higher (and ridiculously higher) than 100% certainty! (100% * 100% * 100% / 1% = 10,000%!)
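Both absurd endpoints of that paragraph are easy to reproduce if the felt-strengths are (wrongly) treated as fractions to multiply and divide (a toy calculation using the numbers from above):

```python
# Three past occasions where I was "100% sure" what Jeff meant,
# plus a 1% felt chance that he has since changed his mind.
sure_each_time = [1.0, 1.0, 1.0]
p_changed_mind = 0.01

product = 1.0
for p in sure_each_time:
    product *= p

# "Factoring in" the change of mind by multiplication:
print(product * p_changed_mind)   # 0.01 -- suddenly only 1% sure he means the same thing
# ...or by division:
print(product / p_changed_mind)   # 100.0 -- i.e. a "10,000% certainty"

# And the earlier 90% / 95% / 100% case:
print(round(0.9 * 0.95 * 1.0, 3))  # 0.855 -- the "86% sure" figure
```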
Mathematical probability simply isn’t the right way for me to estimate whether Jeff means “I don’t believe it’s possible or probable for any kind of theism to be true, but if I believed theism to be certainly or probably true I’d believe supernaturalistic theism to be more likely true than naturalistic theism (i.e. pantheism). But trinitarian theism sounds too implausible or impossible to me, so I wouldn’t believe that even if I believed supernat-theism to be certainly or probably true.” And neither can it properly describe the resulting less-probable feeling of trinitarian theism being true compared to sheer theism being true.
It may seem there is a train of reducing probability, but if (as many people do) Jeff happens to believe that supernaturalistic theism would be very probable or certain if theism is true, then that would still be no mathematical ground for thinking trinitarian theism was more probably true after all--even though the trin-theism doctrinal set also involves supernat-theism being true.
Yet there is a properly legitimate feeling involved in inductively comparing likelihoods here, so that in some sense Jeff can have at least an initial expectation (and maybe even more than an initial expectation) that trinitarian theism is even less likely to be true than supernaturalistic theism, and maybe also supernaturalistic theism than sheer theism.
And now anyone familiar with the field (I expect probably Jeff included!) will be replying “Bayesian Theory” or “Bayes’ Theorem” (or something similar).
But Bayesian Theory isn’t about multiplying and dividing fractions in order to reach a mathematical probability estimate.
That’s how it is often popularly represented, and so we see people like (to take an example from my own side of the aisle) Richard Swinburne lecturing audiences about how Bayes’s Theorem can be used to demonstrate that there is a greater than 90% probability that Jesus was resurrected from the grave. (Thomas Bayes himself was not only a philosopher and mathematician but a Christian preacher and minister.) He ought to know better than to use Bayes that way: he literally wrote (or rather edited) the book on it, and in his monograph for that collection (Bayes’s Theorem, Oxford University Press, 2002) he acknowledges that the Theorem shouldn’t be used as a math operation. (A point that Elliott Sober, in his own article for the book, hammers home repeatedly.)
It’s easy to try to do so, because it looks so temptingly like a math operation; and because there are in fact legitimate math operations which look quite similar to it:
P(h|e&k) = (P(e|h&k) P(h|k)) / P(e|k)
Can’t those elements (such as P(h|k)) be assigned fractional values of percentage likelihood? Yes, just like we can assign felt intuitive likelihoods or unlikelihoods a fraction representing how strongly we feel about the estimate.
Well, and can’t we also assign fractional values of percentage likelihood to the sub-elements, like P(h)? Yep, we can do that, too.
Well, doesn’t that mean we should be arriving at P(e|k) by dividing the fractional percentage of P(e) by the fractional percentage P(k)? And similarly for the other elements? And doesn’t that mean we ought to be multiplying the resulting percentage of P(e|h&k) by P(h|k) and then dividing the result by the fractional percentage of P(e|k) in order to arrive at P(h|e&k)?!
The hell no.
P(e|k), for example, is supposed to represent the expected likelihood of the new evidence (e) being true, or having been obtained, in light of current evidence (k), regardless of whether hypothesis (h) is true or not. In other words, how well does new evidence (e) fit with current evidence (k)?
But if the new evidence fits well, or doesn’t fit well, wouldn’t that make the hypothesis more or less likely to be true respectively? No, it might mean that the new evidence is faulty or in other ways shouldn’t be included with current evidence. P(e|k) is by itself irrelevant to (h) being true.
But the formula looks like we’re supposed to divide by the P(e|k) fraction to affect our estimation of whether (h)ypothesis is true granting new (e)vidence and current evidence (k). (Conventionally “k” stands for background knowledge, which is why “c” wasn’t used.) But it isn’t really a mathematical formula! This can be demonstrated by supposing that we find it is impossible for new (e) to have really occurred granting that current evidence (k) did occur. An impossible probability is 0%.
If Bayes’ Theorem was really a mathematical operation of multiplying and dividing fractions to get a result, any impossible new (e) in relation to current evidence (k) would instantly mean that the probability of any hypothesis featuring the truth of both new and current evidence must be not only 100% (i.e. certainly true) but infinitely probable! (Because any fraction, even the smallest non-zero one, blows up toward positive infinity as its divisor shrinks toward zero; and division by zero itself is simply undefined.)
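Whatever one makes of the larger philosophical point, the arithmetic failure described above is easy to reproduce, and so is the reason a coherent treatment never produces it: in standard probability theory P(e|k) is not a freely assigned fraction but is fixed by the other terms, which is one way of putting the point that the naive division the formula seems to invite is not a legitimate standalone operation (a toy illustration with made-up numbers):

```python
# Naive fraction-plugging: assign all three terms independently and divide.
p_e_given_hk = 0.9   # evidence fits well granting the hypothesis
p_h_given_k = 0.5    # prior felt-strength for the hypothesis
p_e_given_k = 0.1    # evidence independently judged "very improbable"

naive = p_e_given_hk * p_h_given_k / p_e_given_k
print(round(naive, 2))   # 4.5 -- a "450% probability"; push p_e_given_k
                         # toward 0 and this blows up without bound

# Done coherently, the law of total probability fixes P(e|k) from the rest:
# P(e|k) = P(e|h&k)P(h|k) + P(e|~h&k)P(~h|k), so the posterior stays in [0, 1].
p_e_given_not_hk = 0.1
p_e_given_k = (p_e_given_hk * p_h_given_k
               + p_e_given_not_hk * (1 - p_h_given_k))
posterior = p_e_given_hk * p_h_given_k / p_e_given_k
print(round(posterior, 3))  # 0.9 -- coherent, and never above 1
```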
Which is utterly ridiculous.
It’s still utterly ridiculous even if we’re only talking about pseudo-probabilities, because the underlying proposed logical relationship would be the same.
It’s still utterly ridiculous even if we’re only talking about P(e|k) being improbable instead of impossible. The lower that probability goes, not only would (h) become more certain by comparison, but the results for very low values of P(e|k) would still exceed, and even greatly exceed, 100%.
The fact that multiplying any two fractions short of 100% certainty together “above” the dividing sign yields a lesser fraction, which might keep the fraction “below” the dividing sign from producing a result greater than 100%, doesn’t matter in the least. (Although that would bring up the whole other problem of those two logical elements not really being related to each other that way, either. But for relative brevity I’ll skip that discussion.)
What Bayes’ Theorem attempts to do is to describe (better than a rival account proposed by David Hume around the same time) what the process of inductive reasoning logically looks like, and specifically how we adjust our estimates of the likelihood of the truth of an idea when we receive new evidence. That’s very important and kind of useful (inasmuch as a description of what all mentally healthy humans already do every day is kind of useful), but that’s all.
The process actually goes like this:
Assuming I already have some opinion of how likely hypothesis (h) is, based on current evidence (or evidence set) (k)...
...and then I run across new evidence (or evidence set) (e)...
...do I think it is likely that the new evidence would have been found given the truth of h and k? Then I adjust my estimate of P(h|k) upward, and that’s my new P(h|e&k).
On the other hand, if new evidence (e) doesn’t fit h&k very well, I would adjust my expectation that (h) is true downward, and that becomes my new P(h|e&k) estimate instead.
At the same time I ought to consider, without regard to whether (h) is true or false, whether (e) fits with (k). If it doesn’t, then that might be a problem for (h) or it might not. But if I don’t think P(e|k) is very high in itself yet I find (h) helps (e) fit into (k), then that ought to definitely increase my estimation of (h) being true: hypothesis (h) would in that case solve what would otherwise be an evidential conflict between (e) and (k).
On the other hand, if I decide (e) fits well with (k) even without regard to (h), I shouldn’t really use (e) to increase my estimation of (h) likely being true. Although I could bump up my estimate of (h) being true a little maybe if (e) looks proportionately even a little more likely to have occurred with (h) being true than without (h) being true.
On yet another hand, however, if (e) looks improbable to me granting the truth of (k) and (h), yet (e) fits (k) pretty well (to whatever extent), then (e) has strength against (h) being true, and I ought to revise my estimate of (h)’s likelihood downward. But then again, if (e) doesn’t fit well with (k) and (h) doesn’t seem to affect the matter one way or other, then maybe the problem is with (e) after all and I had better recheck my new data; anyway I shouldn’t hold that against (h). But then again, if (e) doesn’t fit with (k), and especially doesn’t fit with (k) assuming the truth of (h), then that counts in an important (if backhanded) way toward (h) being actually true! (Because (h) predicted the new data wouldn’t fit well with the current data.)
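The qualitative procedure in the last several paragraphs can be caricatured as a little Python sketch (the function name and the "bump" sizes are my own arbitrary stand-ins for felt strengths, not anything computed; it only illustrates the direction of each adjustment):

```python
def update_felt_estimate(p_h, e_fits_hk, e_fits_k, bump=0.1):
    """Nudge a felt estimate of hypothesis (h) up or down, in the spirit
    of the steps above.

    e_fits_hk -- does new evidence (e) fit granting (h) and (k)?
    e_fits_k  -- does (e) fit current evidence (k) regardless of (h)?
    """
    if e_fits_hk and not e_fits_k:
        # (h) resolves what would otherwise be a conflict between (e)
        # and (k): a definite boost.
        p_h += 2 * bump
    elif e_fits_hk and e_fits_k:
        # (e) fits fine even without (h); at most a small boost.
        p_h += 0.5 * bump
    elif not e_fits_hk and e_fits_k:
        # (e) tells against (h): revise downward.
        p_h -= 2 * bump
    else:
        # (e) fits nothing: suspect the new data before blaming (h)
        # (unless (h) actually predicted the misfit, a backhanded case
        # this sketch doesn't try to capture).
        pass
    return min(1.0, max(0.0, p_h))

print(round(update_felt_estimate(0.5, True, False), 2))  # 0.7 -- h rescues e
print(round(update_felt_estimate(0.5, False, True), 2))  # 0.3 -- e counts against h
```

Notice the adjustments are directional nudges, not proportionate inverse operations on fractions, which is exactly the point the next paragraph makes.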
If you’re saying, “Wait, but those aren’t inversely proportionate relationships between P(e|h&k) and P(e|k)!”--then congratulations for being able to follow all that out better than most people can! But also right, yep, those relationships are kind-of inverse, but not necessarily proportionate in strength to one another. Because it isn’t a mathematical operation!
The upshot is that I don’t think there is any system of analysis where simply by virtue of the system (per se) trinitarian theism must necessarily be less probable than basic theism. Or as Jeff put it, “Since Christian theism entails theism, it cannot be more probable than (generic) theism. If B entails A and A is improbable, then B or any other set of beliefs which entail A are necessarily improbable.”
Multiplication of hypotheses (or let us say, proposed doctrinal details) for no reason would be grounds for someone to legitimately regard the result as being necessarily more unlikely than a simpler hypothesis (per Ockham’s Razor), but not necessarily more improbable in a fractionally mathematical sense (except in a trivially pseudo-mathematical way). But no proponent of theism at any time ever regarded the extra details of this-or-that variant of theism (especially including trinitarian theism) as being included for no reason. The details have always historically been included for what the proponents thought were important and sufficient reasons.
Nor can multiplication of probabilities always be compounded for a resulting improbability of the proposals. That only works in certain special cases, and the question of trin-theism vs. basic theism isn’t one of those cases: such a question isn’t like figuring out the probability of rolling three twelves in a row compared to rolling one twelve on any given roll, for example, or even like figuring out the probability of me shattering a backboard with a basketball dunk. It isn’t even like figuring out the probability of getting (and keeping and sufficiently spreading) 16 morphological mutations between chimps and humans within a six million year span. It’s more like the question of whether neo-Darwinian gradualism is more or less probable than some simpler process of biological evolutionary theory.
Someone might intuitively feel, before going to the analysis, that the more detailed proposal should be more unlikely than a simpler related (or perhaps directly alternate) proposal. And that can be a reasonable first-response suspicion. But it isn’t a necessary ground for the more complex proposal being really more improbable than the simpler one. If the more complex proposal turns out to avoid or outright solve more logical problems, and fits together better with established data elsewhere than a simpler proposal would, then the more complex proposal ought to be regarded as more plausibly true than the simpler. If basic theism fails to sufficiently address problems of moral grounding, divine consciousness in relation to an other, and process feasibility of creating not-God systems and entities (like spatio-temporal Nature, and creatures within Nature); and if these are serious problems for regarding basic theism as unlikely in proportion; but if trinitarian theism addresses these problems better than basic theism does; then trinitarian theism ought to be regarded as proportionately more probable (or at least more plausible, even if still judged impossible for other reasons) than basic theism. Similarly, the reason we have neo-Darwinian gradualism currently is because more basic forms of b.e.t (including Darwin’s own version) ended up having crippling logical and evidential problems which neo-Dar-grad addresses relatively more successfully in various ways.
Even if the doctrines were proposed ad hoc to save the hypothesis (as is sometimes although ignorantly claimed about trin-theism; and as was in fact historically the case in regard to some of the extra details of the neo-Darwinian gradualist synthesis), so long as they address the problems better the result ought to be regarded as more likely than the more basic alternative. But if the doctrines were already arrived at for other reasons independently of the problem, that would be more impressive for a rational comparison of likelihoods between the two proposals. And more especially so again, if the reasons for arriving at the more complex details turned out to be good reasons to this or that extent (and/or turned out to be experimentally discoverable after all: the extra details, even if originally proposed ad hoc, were confirmed.)
Consequently, when my friend Jeff replies, “if you don't believe me [about trinitarianism being necessarily never more than as probable as basic theism], check the math,”--well, if he means the math of a logical comparison between various kinds of theism and orthodox trinitarian theism, I’ve already very extensively done that on the metaphysical side of things (where I find the logical math adds up better to theism than atheism, too--by which I don’t mean in a mathematical fashion in any case!); and I continue to check the math in other ways as well. I don’t settle for a mere fractional multiplication of hypotheses, if that’s what he means; and when I run a Bayesian analysis I do so in a way that allows the possibility of increasing or decreasing my estimate of the likelihood of this or that hypothesis, rather than in a way which can only lead to unreal results in principle (i.e. by treating the elements as fractions in a mathematical formula). And the results of those analyses, I find, still point more toward ortho-trin being true than basic theism only.
This is also why I disagree with Jeff when he follows up with the acknowledgment, “And yes, for the same reason, I think your counter-example is correct: neo-Darwinian gradualism entails evolutionary theory, so it cannot be more probable than generic ET.”
I do think neo-Darwinian gradualism has some severe and even crippling technical problems, both in principle and in regard to evidential data; and I think some of those problems arise precisely from details specific to neo-Dar-grad. And those problems affect how likely I now regard neo-Darwinian gradualism to be true.
But I also know enough about neo-Dar-grad (which I grew up believing, as did all the Christians I most admired while growing up) to say that I cannot imagine any significant way in which it could be accurately described as being necessarily no more probable than generic biological evolutionary theory. It is very clearly a superior theory to less detailed alternatives (so far), and between nDgrad and general b.e.t I regard the former as being substantially and significantly more likely to be true.
Our friend David Marshall, author of several books on Christian cultural apologetics, writing at his blog Christ The Tao, has recently been responding to the "20+ Questions For Theists" asked by naturalistic atheist Jeffery Jay Lowder (another longtime correspondent of mine, from back in the days before weblogs) as part of Jeff's "Evidential Arguments For Naturalism" series at the Secular Outpost (i.e. Internet Infidels) blog.
(Whew, that was kind of a long provenance trail. Sorry.)
Since I don't already have enough large projects I'm working on, naturally I'm thinking of also working up another series to analyze his arguments (unless another Cadrist beats me to it perhaps). This is probably evidence of me being crazy, but what the hell. {g} Jeff has always been a fine opponent and I'd be glad to work with him again.
Anyway, while commenting on David's post, Jeff wrote this:
"Since Christian theism entails theism, it cannot be more probable than (generic) theism. If B entails A and A is improbable, then B or any other set of beliefs which entail A are necessarily improbable."
My immediate reply was, and is, that this seems like saying neo-Darwinian gradualism entails biological evolutionary theory, so it cannot be more probable than generic b.e.t (and maybe less so, proportionate to the number of details thanks to multiplication of probabilities).
I know there are some opponents of biological evolutionary theory (usually theists) who go this route, but I'm pretty sure proponents of the neo-Darwinian synthesis don't accept that as a principle reply. I'm also sure they have good reasons not to do so! (My own technical difficulties with neo-Darwinian gradualism aren't quite that sort.) Is this a sauce for geese and ganders situation?--if so, does it count in both directions?--if not, why not?
Eleven pages of discussion after the jump!
As anyone who has taken junior-high mathematics should know, when fractional probabilities are multiplied together the result will always be less than any of the original fractions. 0.9 (or 90%) x 0.8 x 0.7 doesn't equal more than 90%, or any of those fractions individually, or an average of those fractions (80%), or even the next tenth fraction lower (60%).
It equals 0.5, 50%.
Engineers and a number of other scientists and technicians (including casinos) use this property to calculate how often (on average!) we can expect a particular result of a set of random or quasi-random events when those events are connected dependently with one another in various ways.
So, to give a popular example, casinos know that a pair of six-sided dice on a perfectly randomized throw (or near enough to never mind) can be expected to land with two six-pip faces top high (for a total of twelve pips) an average of once every 36 rolls. That isn't guaranteed, but that's what the casino is willing to bet will happen, on average, on any practically super-large number of dice rolls. There may be streaks where it happens much more often (which bettors may bet on), and streaks where it happens much less often (which bettors may also bet on!), but as long as no one is tampering with the randomness of the dice the casino is willing to bet on that result in the long run. (Which is why bettors like to try to tamper with the dice!--by throwing them in skillful ways for example.) Consequently, the house will pay bettors a result worse than 36 multiplied by however much money the bettor bet for a win. Maybe they'll pay 25 to 1 for example. That way there's a house edge, and the casino will definitely (or almost certainly) make a profit (and a pretty significant one) over the long run.
Are there other applications for this principle? Well, yes and no. Let's say someone asks what the probability is that I will shatter a basketball backboard with a dunk. That probability is literally zero unless I dunk, but (other things being equal) will be 100% certain if I dunk with X-amount of force at the proper angle, depending on the fracture strength of the backboard material (which will be affected by other factors such as stress from having withstood previous dunks, or players hanging off the rim, etc.)
But what is the probability that I can dunk the ball in the proper way to cause a shatter? Good luck trying to figure out! That would require running numerous repeated tests to get some kind of clear idea of how effective I was at even dunking the ball in the first place (because if I miss there will definitely be no shatter), much moreso how effective I am at generating the forces necessary to shatter a board with such-n-such composition.
Once we've run a sufficiently large number of tests to get an idea of how the data tends to repeat, we could figure out the mathematical probability of my being able to shatter the backboard--all other things being equal! Which is an important qualification, because that probability estimate won't be worth much if I spend two months working through the Beachbody's "Insanity" fitness program!
So, figuring up a legitimate percentage chance of my shattering a backboard with a basketball dunk isn't strictly impossible, but it's very impractical; not least because the system being examined isn't something with inherently and rigidly repeatable characteristics--unlike rolling a pair of dice which, while there are ways to influence the result outside the system of the dice, features discretely concrete constraints built into the system of their behavior.
But what if a person (myself or anyone else) wants to form an opinion of the likelihood that I will be able to shatter the backboard? Is there nothing that can help with that?
Of course there is. We can build intuitive estimates of likelihood before the attempt, factoring in past data so far as we're aware of it; and we can build intuitive estimates of how likely it was after the attempt, in order to get some feeling of appreciation about how improbable the result was. Which in turn can give us grounds for suspecting we've missed some data in our estimate. "Wait, Jason just finished the 'Insanity' fitness program?! Well, that explains why he can shatter a backboard now!" (I'll take a moment to clarify that I haven't finished that fitness program, nor can I shatter a backboard, nor am I even interested in doing so.)
But notice that I was talking about intuitions and feelings. I don't mean that the result was thereby irrational, because feelings are data, too, and we can use them for reasoning. I do mean that such data are not even remotely proper for arriving at a mathematical probability of success or failure.
At most what can happen is that, for purposes of expression, we may assign a feeling of intuitive likelihood a corresponding percentage strength based quite literally (as humorist Pat McManus once wrote) on a study where (in effect) we write down a bunch of numbers until we get one that seems right. A feeling of "pretty sure" might be assigned a representative chance of 80%, for example; or of 86% if we're being funny about being particular!
But, can't those numbers be used in some way that legitimately corresponds with how we ought to regard the likelihood of a complex situation being true?
Well, yes and no.
Yes, it's true that in a rigorous consideration, before (or after) the fact, of the likelihood or unlikelihood of a complex event occurring, it's a good idea to keep in mind that in order for Z to happen, first A has to happen, and then B (which can't happen without A), and then C, and so on. Each of these may (or may not!) have some intrinsic probability or improbability of happening. (The probability per se of someone leaning over to flip both dice to 6, and then adding a third die set to 1 for a total of 13, is utterly incalculable, and so can have neither intrinsic probability nor improbability, not even 50%.) So far as such an estimate goes, the particular result Z would always be significantly more unlikely than any of its preceding factors. That's a result of cautious conservatism, which is usually but not always a good idea.
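Where a result genuinely is a conjunction of chained steps, the conservative arithmetic looks like this (illustrative numbers of my own, nothing more):

```python
from math import prod

# Hypothetical chain: Z requires A, then B given A, then C given A-and-B.
step_probs = {"A": 0.9, "B given A": 0.8, "C given A and B": 0.7}

p_z = prod(step_probs.values())
print(p_z)  # about 0.504 -- already lower than any single step

# The conjunction can never beat its weakest link:
assert p_z <= min(step_probs.values())
```

That weakest-link property is exactly what makes multiplication a legitimately cautious tool when--and only when--the composite result really is a this-then-that-then-the-other chain.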
But also no, there are often huge qualifications against this sort of inferential likelihood estimate applying. If we’re talking about a situation of complex detail where the total result isn’t a case of this and then that and then the other thing happening, then even pseudo-multiplication of pseudo-probabilities isn’t the proper way to arrive at a result of likelihood of the truth of the composite result. The whole result would be better intuitively graded by the least expected likelihood of an element. There isn’t any mathematical way to assess the probability that if God exists God is three distinct Persons (not mere modes) of one single substantial reality, but to the extent that this seems unlikely or impossible a person will reasonably reject trinitarian theism as being certainly or probably untrue. If that detail isn’t included in the proposal, then the strength of the proposal will stand on what is the most objectionable detail of the proposal left over.
But on the other hand, someone can reasonably say, “I don’t believe it’s possible or probable for any kind of theism to be true, and supernaturalistic theism seems even less likely to me, and trin-theism even less likely than that, so I feel like trinitarian theism is even less likely than naturalistic theism to be true.” Or again someone might reasonably say, “I don’t believe it’s possible or probable for any kind of theism to be true, but if I believed theism to be certainly or probably true I’d believe supernaturalistic theism to be more likely true than naturalistic theism (i.e. pantheism). But trinitarian theism sounds too implausible or impossible to me, so I wouldn’t believe that even if I believed supernat-theism to be certainly or probably true.”
Now, this happens to be what I expect is probably Jeff’s position. But notice that my expectation about what is probably Jeff’s position has only the most superficial resemblance to mathematical probability! It isn’t mathematical probability at all, even if I assign a percentage representing the strength of my expectations about what Jeff meant.
No doubt (or at least hopefully!) I arrived at that expectation about what Jeff meant by drawing an inference from prior data where Jeff was talking about what he believes and doesn’t believe regarding those topics. But if I felt 90% sure he meant that somewhere, and 95% sure he meant that somewhere else, and 100% sure he meant that in a third place, would it be even remotely proper for me to estimate the likelihood he’s talking about the same position now by multiplying together the pseudo-probabilities I assigned to my expectations about what he meant earlier?! Should I think now that if I was those levels of sure about what he meant before, I ought to be 86% sure (or the felt-strength equivalent thereof) now!? Of course not!
Should I be 100% sure now on such grounds as I just gave? Nope.
Should I be 100% sure now on such grounds if I happened to be 100% sure what he meant the other three times? Only if I have some reason to discount the possibility that he may have changed his mind about one or more portions of his belief since the last time I read him. And if I regard him changing his mind as being improbable, should I factor that mathematically into my expectation? I’ll either end up with a “probability” result proportionately smaller even than my small expectation he changed his mind since then (for example 100% * 100% * 100% * 1% that he changed his mind = 1% that he still means the same thing now!), or with a result higher (and ridiculously higher) than 100% certainty! (100% * 100% * 100% / 1% = 10,000%!)
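Laying that arithmetic out explicitly (using the felt-confidence figures from the paragraphs above, treated--absurdly--as multiplicable probabilities):

```python
# Felt confidences about three past statements (not real frequencies!):
confidences = [0.90, 0.95, 1.00]

# Multiplying them as though they were independent mathematical probabilities:
multiplied = confidences[0] * confidences[1] * confidences[2]
print(round(multiplied, 3))  # 0.855 -- the "86% sure" result mocked above

# "Factoring in" a 1% chance of a changed mind, by multiplication...
print(1.0 * 1.0 * 1.0 * 0.01)  # 0.01 -- suddenly only 1% sure he means the same
# ...or by division:
print(1.0 * 1.0 * 1.0 / 0.01)  # 100.0 -- i.e. "10,000%" certainty: nonsense
```

The arithmetic runs fine; it's the application of arithmetic to felt confidences that produces the nonsense.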
Mathematical probability simply isn’t the right way for me to estimate whether Jeff means “I don’t believe it’s possible or probable for any kind of theism to be true, but if I believed theism to be certainly or probably true I’d believe supernaturalistic theism to be more likely true than naturalistic theism (i.e. pantheism). But trinitarian theism sounds too implausible or impossible to me, so I wouldn’t believe that even if I believed supernat-theism to be certainly or probably true.” And neither can it properly describe the resulting less-probable feeling of trinitarian theism being true compared to sheer theism being true.
It may seem there is a train of reducing probability, but if (as many people do) Jeff happens to believe that supernaturalistic theism would be very probable or certain if theism is true, then that would still be no mathematical ground for thinking trinitarian theism was more probably true after all--even though the trin-theism doctrinal set also involves supernat-theism being true.
Yet there is a properly legitimate feeling involved in inductively comparing likelihoods here, so that in some sense Jeff can have at least an initial expectation (and maybe even more than an initial expectation) that trinitarian theism is even less likely to be true than supernaturalistic theism, and maybe also supernaturalistic theism than sheer theism.
And now anyone familiar with the field (I expect probably Jeff included!) will be replying “Bayesian Theory” or “Bayes’ Theorem” (or something similar).
But Bayesian Theory isn’t about multiplying and dividing fractions in order to reach a mathematic probability estimate.
That’s how it is often popularly represented, and so we see people like (to take an example from my own side of the aisle) Richard Swineburne lecturing audiences about how Bayes’s Theorem can be used to demonstrate that there is a greater than 90% probability that Jesus was resurrected from the grave. (Thomas Bayes himself was not only a philosopher and mathematician but a Christian preacher and minister.) He ought to know better than to use Bayes that way: he literally wrote (or rather edited) the book on it in 2002, and in his monograph for that collection (Bayes’s Theorem, Oxford University Press, 2002) he acknowledges that the Theorem shouldn’t be used as a math operation. (A point that Elliot Sober, in his own article for the book, hammers home repeatedly.)
It’s easy to try to do so, because it looks so temptingly like a math operation; and because there are in fact legitimate math operations which look quite similar to it:
P(h|e&k) = (P(e|h&k) P(h|k)) / P(e|k)
Can’t those elements (such as P(h|k)) be assigned fractional values of percentage likelihood? Yes, just like we can assign felt intuitive likelihoods or unlikelihoods a fraction representing how strongly we feel about the estimate.
Well, and can’t we also assign fractional values of percentage likelihood to the sub-elements, like P(h)? Yep, we can do that, too.
Well, doesn’t that mean we should be arriving at P(e|k) by dividing the fractional percentage of P(e) by the fractional percentage P(k)? And similarly for the other elements? And doesn’t that mean we ought to be multiplying the resulting percentage of P(e|h&k) by P(h|k) and then dividing the result by the factional percentage of P(e|k) in order to arrive at P(h|e&k)?!
The hell no.
P(e|k), for example, is supposed to represent the expected likelihood that the new evidence (e) would be true, or would have been obtained, in light of the current evidence (k), regardless of whether hypothesis (h) is true or not. In other words, how well does new evidence (e) fit with current evidence (k)?
But if the new evidence fits well, or doesn’t fit well, wouldn’t that make the hypothesis more or less likely to be true respectively? No, it might mean that the new evidence is gumpy or in other ways shouldn’t be included with current evidence. P(e|k) is irrelevant to (h) being true.
But the formula looks like we’re supposed to divide by the P(e|k) fraction to affect our estimation of whether (h)ypothesis is true granting new (e)vidence and (k)urrent evidence. (I don’t know why “c” wasn’t used.) But it isn’t really a mathematical formula! This can be demonstrated by supposing that we find it is impossible for new (e) to have really occurred granting that current evidence (k) did occur. An impossible probability is 0%.
If Bayes’ Theorem was really a mathematical operation of multiplying and dividing fractions to get a result, any impossible new (e) in relation to current evidence (k) would instantly mean that the Probability that any hypothesis featuring the truth of both new and current evidence must not only be 100% probable (i.e. certainly true) but infinite-plus percent probable! (Because anything, even the lowest non-infinitely small fraction, divided by zero equals positive infinity.)
Which is utterly ridiculous.
It’s still utterly ridiculous even if we’re only talking about pseudo-probabilities, because the underlying proposed logical relationship would be the same.
It's still utterly ridiculous even if we're only talking about P(e|k) being improbable instead of impossible. The lower that probability, the more "certain" (h) would become by comparison; and for very low values the results would still exceed, even greatly exceed, 100%.
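The failure mode is easy to reproduce: plug independently guessed fractions into the formula as though it were bare arithmetic, and a small P(e|k) blows the "probability" past 100%. (The guessed numbers below are arbitrary--which is exactly the problem. For contrast, the sketch also shows the coherence constraint, the law of total probability, that a genuinely mathematical application imposes on P(e|k).)

```python
def naive_bayes(p_e_given_hk, p_h_given_k, p_e_given_k):
    """Treat Bayes' Theorem as bare fraction arithmetic -- the misuse at issue."""
    return p_e_given_hk * p_h_given_k / p_e_given_k

# Three independently guessed fractions, with no coherence among them:
result = naive_bayes(p_e_given_hk=0.9, p_h_given_k=0.5, p_e_given_k=0.05)
print(result)  # 9.0 -- a "900% probability" that h is true

# In a coherent assignment P(e|k) is not free to be guessed; it is tied to
# the other terms:  P(e|k) = P(e|h,k)P(h|k) + P(e|~h,k)P(~h,k)-weighted,
# which can never be smaller than the numerator, so the genuine theorem
# never exceeds 1. (The 0.2 here is another arbitrary illustration value.)
p_e_given_k = 0.9 * 0.5 + 0.2 * 0.5
coherent = naive_bayes(0.9, 0.5, p_e_given_k)
print(round(coherent, 3))  # about 0.818
```

The absurd 900% comes from assigning the three terms as though they were unrelated quantities, which is precisely what happens when felt strengths of expectation get plugged in as fractions.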
The fact that multiplying any two fractions other than 100% certainty together “above” the “dividing sign” would result in a lesser fraction which might keep the fraction “below” the “dividing sign” from producing a result greater than 100%, doesn’t matter in the least. (Although that would bring up the whole other problem of those two logical elements not really being related to each other that way, either. But for relative brevity I’ll skip that discussion.)
What Bayes’ Theorem attempts to do is to describe (better than a competing theorem proposed by David Hume around the same time) what the process of inductive reasoning logically looks like, and specifically how we adjust our estimates of the likelihood of the truth of an idea when we receive new evidence. That’s very important and kind of useful (inasmuch as a description of what all mentally healthy humans already do every day is kind of useful), but that’s all.
The process actually goes like this:
Assuming I already have some opinion of how likely hypothesis (h) is, based on current evidence (or evidence set) (k)...
...and then I run across new evidence (or evidence set) (e)...
...do I think it is likely that the new evidence would have been found given the truth of h and k? Then I adjust my estimate of P(h|k) upward, and that’s my new P(h|e&k).
On the other hand, if new evidence (e) doesn’t fit h&k very well, I would adjust my expectation that (h) is true downward, and that becomes my new P(h|e&k) estimate instead.
At the same time I ought to consider, without regard to whether (h) is true or false, whether (e) fits with (k). If it doesn’t, then that might be a problem for (h) or it might not. But if I don’t think P(e|k) is very high in itself yet I find (h) helps (e) fit into (k), then that ought to definitely increase my estimation of (h) being true: hypothesis (h) would in that case solve what would otherwise be an evidential conflict between (e) and (k).
On the other hand, if I decide (e) fits well with (k) even without regard to (h), I shouldn’t really use (e) to increase my estimation of (h) likely being true. Although I could bump up my estimate of (h) being true a little maybe if (e) looks proportionately even a little more likely to have occurred with (h) being true than without (h) being true.
On yet another hand, however, if (e) looks improbable to me granting the truth of (k) and (h), yet (e) fits (k) pretty well (to whatever extent), then (e) has strength against (h) being true, and I ought to revise my estimate of (h)’s likelihood downward. But then again, if (e) doesn’t fit well with (k) and (h) doesn’t seem to affect the matter one way or other, then maybe the problem is with (e) after all and I had better recheck my new data; anyway I shouldn’t hold that against (h). But then again, if (e) doesn’t fit with (k), and especially doesn’t fit with (k) assuming the truth of (h), then that counts in an important (if backhanded) way toward (h) being actually true! (Because (h) predicted the new data wouldn’t fit well with the current data.)
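For what it's worth, that branching procedure can be summarized as a direction-of-update sketch (the labels and structure are my own schematic rendering of the paragraphs above, not a formula; the point is the asymmetry of the branches, not any numbers):

```python
def update_direction(fits_hk: bool, fits_k: bool,
                     h_predicts_misfit: bool = False) -> str:
    """Qualitative Bayesian update: which way to nudge the estimate of h.

    fits_hk: does new evidence e sit well with hypothesis h plus current evidence k?
    fits_k:  does e sit well with k alone, h aside?
    h_predicts_misfit: did h specifically predict that e would clash with k?
    """
    if fits_hk and not fits_k:
        return "raise h a lot"      # h resolves a conflict between e and k
    if fits_hk and fits_k:
        return "raise h a little"   # e fits anyway; h gets modest credit at most
    if not fits_hk and fits_k:
        return "lower h"            # e counts against h
    if h_predicts_misfit:
        return "raise h"            # the misfit was h's own (backhanded) prediction
    return "recheck e"              # maybe the new data, not h, is the problem
```

So, for instance, `update_direction(fits_hk=True, fits_k=False)` returns `"raise h a lot"`: the hypothesis has solved an otherwise evidential conflict. Notice the function returns directions, not magnitudes--because it isn't a mathematical operation.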
If you're saying, "Wait, but those aren't inversely proportionate relationships between P(e|h&k) and P(e|k)!"--then congratulations for being able to follow out all that better than most people can! But also right, yep, those relationships are kind-of inverse, though not necessarily proportionate in strength to one another. Because it isn't a mathematical operation!
The upshot is that I don’t think there is any system of analysis where simply by virtue of the system (per se) trinitarian theism must necessarily be less probable than basic theism. Or as Jeff put it, “Since Christian theism entails theism, it cannot be more probable than (generic) theism. If B entails A and A is improbable, then B or any other set of beliefs which entail A are necessarily improbable.”
Multiplication of hypotheses (or let us say, proposed doctrinal details) for no reason would be grounds for someone to legitimately regard the result as being necessarily more unlikely than a simpler hypothesis (per Ockham’s Razor), but not necessarily more improbable in a fractionally mathematic sense (except in a trivially pseudo-mathematical way). But no proponent of theism at any time ever regarded the extra details of this-or-that variant of theism (especially including trinitarian theism) as being included for no reason. The details have always historically been included for what the proponents thought were important and sufficient reasons.
Nor can multiplication of probabilities always be compounded for a resulting improbability of the proposals. That only works in certain special cases, and the question of trin-theism vs. basic theism isn’t one of those cases: such a question isn’t like figuring out the probability of rolling three twelves in a row compared to rolling one twelve on any given roll, for example, or even like figuring out the probability of me shattering a backboard with a basketball dunk. It isn’t even like figuring out the probability of getting (and keeping and sufficiently spreading) 16 morphological mutations between chimps and humans within a six million year span. It’s more like the question of whether neo-Darwinian gradualism is more or less probable than some simpler process of biological evolutionary theory.
Someone might intuitively feel before going to the analysis that the more detailed proposal should be more unlikely than a simpler related (or perhaps directly alternate) proposal. And that can be a reasonable first-response suspicion. But it isn’t a necessary ground for the more complex proposal being really more improbable than the simpler one. If the more complex proposal turns out to avoid or outright solve more logical problems, and fits together better with established data elsewhere than a simpler proposal would, then the more complex proposal ought to be regarded as more plausibly true than the simpler. If basic theism fails to sufficiently address problems of moral grounding, divine consciousness in relation to an other, and process feasibility of creating not-God systems and entities (like spatio-temporal Nature, and creatures within Nature); and if these are serious problems for regarding basic theism as unlikely in proportion; but if trinitarian theism addresses these problems better than basic theism does; then trinitarian theism ought to be regarded as more probable by proportion (or at least more plausible even if still necessarily impossible for other reasons) than basic theism. Similarly, the reason we have neo-Darwinian gradualism currently is because more basic forms of b.e.t (including Darwin’s own version) ended up having crippling logical and evidential problems which neoDargrad addresses relatively more successfully in various ways.
Even if the doctrines were proposed ad hoc to save the hypothesis (as is sometimes although ignorantly claimed about trin-theism; and as was in fact historically the case in regard to some of the extra details of the neo-Darwinian gradualist synthesis), so long as they address the problems better the result ought to be regarded as more likely than the more basic alternative. But if the doctrines were already arrived at for other reasons independently of the problem, that would be more impressive for a rational comparison of likelihoods between the two proposals. And more especially so again, if the reasons for arriving at the more complex details turned out to be good reasons to this or that extent (and/or turned out to be experimentally discoverable after all: the extra details, even if originally proposed ad hoc, were confirmed.)
Consequently, when my friend Jeff replies, “if you don't believe me [about trinitarianism being necessarily never more than as probable as basic theism], check the math,”--well, if he means the math of a logical comparison between various kinds of theism and orthodox trinitarian theism, I’ve already very extensively done that on the metaphysical side of things (where I find the logical math adds up better to theism than atheism, too--by which I don’t mean in a mathematical fashion in any case!); and I continue to check the math in other ways as well. I don’t settle for a mere fractional multiplication of hypotheses, if that’s what he means; and when I run a Bayesian analysis I do so in a way that allows the possibility of increasing or decreasing my estimate of the likelihood of this or that hypothesis, rather than in a way which can only lead to unreal results in principle (i.e. by treating the elements as fractions in a mathematical formula.) The results of which I find still point more toward ortho-trin being true than basic theism only.
This is also why I disagree with Jeff when he follows up with the acknowledgment, “And yes, for the same reason, I think your counter-example is correct: neo-Darwinian gradualism entails evolutionary theory, so it cannot be more probable than generic ET.”
I do think neo-Darwinian gradualism has some severe and even crippling technical problems, both in principle and in regard to evidential data; and I think some of those problems arise precisely from details specific to neo-Dar-grad. And those problems affect how likely I now regard neo-Darwinian gradualism to be true.
But I also know enough about neo-Dar-grad (which I grew up believing, as did all the Christians I most admired while growing up) to say that I cannot imagine any significant way in which it could be accurately described as being necessarily no more probable than generic biological evolutionary theory. It is very clearly a superior theory to less detailed alternatives (so far), and between nDgrad and general b.e.t I regard the former as being substantially and significantly more likely to be true.
Comments
If A entails B, then Pr(A) <= Pr(B).
Substituting "Christian theism" for A and "theism" for B, we get:
If Christian theism entails theism, then Pr(Christian theism) <= Pr(theism).
Except that Ernest Adams was giving a theory about classically deductive argument forms where the certainty of the premises is actually dubious.
Basically his point is that a formally valid deductive argument does not import deductive certainty to the conclusion when the premises are acknowledged to be only probable. At worst, the improbability of the valid conclusion can be as great as THE SUM OF the improbabilities of its premises. The conclusion might still happen to be 100% certain, but obviously the bound cannot promise more than 100% certainty; and more importantly the validity of the form does not automatically grant 100% certainty to the conclusion if the premises are regarded as being in any way improbable.
As the author Dorothy Edgington reports his work, "He taught us something important about classically valid [deductive] arguments as well: that they are, in a special sense to be made precise, probability-preserving. [...] The question arises: how certain can we be of the conclusion of the [formally valid deductive] argument, given that we think, but are not sure, that the premises are true? [...] Adams shows this: if (and only if) an argument is valid, then in no probability distribution does the improbability of its conclusion exceed the sum of the improbabilities of its premises [where the improbability of a statement is regarded as one minus its probability]."
Adams' point was not intended to be a blanket principle for claiming that no complex claim can be more probable than a simpler claim entailed by the complex claim. Rather, for example:
P1: There is a 30% improbability (i.e. a 70% chance) that Socrates is a man.
P2: There is a 30% improbability (i.e. a 70% chance) that all men are mortal.
C2: The improbability that Socrates is mortal cannot exceed 60%; the thinker is entitled to be at least 40% sure that Socrates is mortal.
The deductive validity of the argument does not guarantee that Socrates is mortal. On the other hand, if there is a 100% chance that all men are mortal (an improbability of 0%), the conclusion's improbability cannot exceed 30%; and no sum of improbabilities can ever push a probability above 100% or below 0%. But neither can the deductive validity of the form guarantee Socrates is mortal, because there was still only a 70% chance that Socrates is a man. (Nor does the argument have anything to say about whether Socrates is mortal, or not, if he turns out not to be a man, of course.)
JRP
At worst, a conclusion from two 99% premises would be only 98% likely, if the logic is otherwise deductively valid (and if the premises are sufficiently exhaustive of course). The conclusion might still be 100% true. Similarly, a conclusion from 100 premises each rated at 99% might still be 100% true, even though the lower boundary of composite probability reduces to practically zero. The thinker in that case is entitled only to that near-zero floor as a worst-case expectation, but he still might be entirely correct: Adams' argument does not set a limit of maximum probability for the conclusion, only a minimum.
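Adams' floor is simple to compute: subtract the sum of the premise improbabilities from certainty, and don't go below zero. A sketch in Python, using the 99% figures above (note how much weaker the sum-of-improbabilities floor is than a product of probabilities would be: five 50% premises floor out at zero):

```python
def adams_lower_bound(premise_probs):
    """Worst-case probability of a validly drawn conclusion, per Adams:
    the conclusion's improbability cannot exceed the sum of the
    premises' improbabilities (floored at zero probability)."""
    total_improbability = sum(1 - p for p in premise_probs)
    return max(0.0, 1.0 - total_improbability)

print(adams_lower_bound([0.99, 0.99]))  # 0.98: two 99% premises
print(adams_lower_bound([0.5] * 5))     # 0.0: five 50% premises -- vacuous floor
```

And certain premises cost nothing: feeding in all-100% premises returns a floor of 100%, which is just classical deduction again.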
Thus, "Adams' result vindicates deductive reasoning from uncertain premises, provided that they are not too uncertain, and there are not too many of them."
To restate my previous example if the product of the improbabilities rather than the sum were meant: in the worst case scenario the thinker would be entitled to be no worse than 91% sure that Socrates is mortal (a composite improbability of no more than .3 × .3 = .09). Though if there is 100% certainty that all men are mortal, the product drops to zero and would absurdly "guarantee" the conclusion outright--one sign that the sum, not the product, is the right measure.
This all goes quite out the window if we're only supposed to be talking about intuitive likelihood estimates of course; but I talk about the problems with that extensively in my article.
JRP
But supposing for purposes of illustration that there was only one premise to the argument:
P1. Basic Theism is true.
C1. Christian Theism is true.
(Obviously the conclusion is false unless BT = CT and vice versa, but run with it for a moment.)
Adams' principle of probability preservation means that if P1 is regarded as X% probable, then (so long as the conclusion is logically valid and the premises are sufficiently exhaustive etc.) the thinker is entitled to regard Christian theism as no worse than X% probable. It might still be more probable than that!
Or relatedly, if P1 is regarded as X% improbable, the thinker is entitled to regard Christian theism as having no greater improbability than X%. (Whether X% is regarded as probable or improbable is a matter of perspective: 1% would be highly improbable and lowly probable; 99% would instead be highly probable and lowly improbable.)
This should be especially obvious since in this example Premise1 happens to be tautologically the same as Conclusion1. If (in this case) Basic theism is 99% sure, Christian theism is no worse than 99% sure, and might be 100% sure. If on the other hand basic theism is 99% improbable, Christian theism is no worse than 99% improbable and might be 0% improbable (or 100% sure).
JRP
Premise 1.) The Independent Fact of all reality is essentially actively rational, i.e. (at least) Basic theism is true.
P2.) The Independent Fact exists self-existently. (Otherwise it would not be independent, being dependent for its existence on something else other than itself. Also entailed is that there is not an infinite regress of contingent facts, and there are not multiple self-existent independent facts, i.e. cosmological dualism is not true.)
P3.) The evident system of spatio-temporal Nature is not the Independent Fact but rather (one way or another) exists dependently on the IF. (i.e. supernaturalism is true, not naturalism.)
If P3 and P2 were true but P1 was false, then supernaturalistic atheism would be true; if P2 were true but P1 and P3 were both false, then naturalistic atheism would be true; and so on.
Conclusion 1.) The Independent Fact is an essentially self-generating rational action which is not our system of Nature (which instead depends on the IF for its existence). From P1, P2, P3.
P4.) Self-generating and agreeing to be self-generated are two distinct rational actions.
C2.) The IF is at least two distinctly identifiable rational actions entailed in the same overall rational action at the level of the IF's own self-existence. (from P4, C1)
P5.) Rational activity is the essence of personhood.
C3.) The IF is at least two distinctly identifiable persons (or personal states) entailed in the same overall personal reality at the level of the IF's own self-existence. i.e. at least binitarian theism (distinct from bi-theism or mere theistic modalism) is true. (from P5, C2)
Assume for purposes of illustration that the logic is valid and that any other identifiable premises are irrelevant to the result. Also assume for purposes of illustration that the premises have been previously established insofar as the thinker regards as possible.
Assume finally for purposes of illustration that the thinker regards each Premise as being at best 50% probable. (Also assume that it makes any sense at all to say that such premises are such as to be described by mathematical probability.)
Multiplying the five 50% premises together gives 3.125%; strictly, Adams' dictum sums the improbabilities (5 × 50% = 250%), which here yields only a vacuous floor of 0%. Either way the thinker is entitled, at worst, to regard C3 as barely probable at all. There is no upper limit to how probable the conclusion may actually be, aside from 100% probable. But neither can the formal validity of the argument import deductive certainty to the result, seeing as how at least one of the premises is regarded as merely probable.
If the premises were such that we were actually talking about mathematical quantification of probability, then of course the thinker would also have an upper limit of 3.125% probability of the conclusion being true as well as a lower limit of the same: under those circumstances we should only expect binitarian theism to turn out to be true 3.125% of the time in any super-large repeated set of such premises occurring! But since we're talking about intuitive likelihood estimation (and not about quantifiably identifiable probabilities), that upper limit does not apply; the result is more of a spread: the person ought to feel no less than the equivalent of a little sure about the result, but could still legitimately feel up to near certain, maybe even feel practically certain (so long as deductive certainty wasn't formally claimed) about the conclusion. Or anywhere in between.
If there were 10 such premises which the thinker rated at 50% likelihood, then there would be no practical lower limit of strength of feeling about the conclusion, however: it would be ridiculous to say that a person was entitled to feel practically zero certain at worst about the conclusion!
JRP
If B entails A, then (in a Venn diagram) all of B is inside A. Thus, it follows that Pr(B) <= Pr(A).
I could be wrong, but I believe there is no controversy among mathematicians or philosophers with the previous sentence whatsoever.
All right, I'll draw a target Venn diagram with the correct answer as a nail in the center holding up the target pattern, surrounded by circles centered on the nail with increasingly large diameters. Each smaller circle (down to the nail) can be regarded as a subset of the largest circle (the diameter of the target pattern itself).
The probability of hitting the nail with a rifle shot is necessarily no greater than the probability of hitting the target at all, and no greater than the probability of hitting any of the nested circle sets within the target boundary.
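Jeff's monotonicity point does hold within this area model. Under a uniform-hit assumption (my own illustrative radii, and a model that deliberately ignores the shooter's skill), nested regions give nested probabilities--which is all Pr(B) <= Pr(A) says:

```python
from math import pi

# Uniform-hit model: the chance of hitting a region is proportional to its
# area. Radii in inches -- purely illustrative numbers of my own.
radii = {"nail": 0.1, "inner ring": 3.0, "target": 12.0}
target_area = pi * radii["target"] ** 2

p = {name: (pi * r ** 2) / target_area for name, r in radii.items()}

# Nested regions give nested probabilities: hitting the nail can never be
# likelier than hitting any circle that contains it.
assert p["nail"] <= p["inner ring"] <= p["target"] == 1.0
print(p["nail"], p["inner ring"])
```

Which is the sense in which the nail is "less probable" than the target: a statement about relative area, not about whether the nail is the right thing to be aiming at.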
But that isn't about the propriety of the nail as the most correct possible target; that kind of probability is merely an a priori skill factor of the shooter, or a description of the relative area taken up by the nail compared to any larger area set on the target.
A detailed set of concepts that features better validity, and incorporates evident facts better than a simpler set of concepts does, is not going to be less probably true than the simpler set, nor even merely equally probable. It's going to be more probably true than the simpler set, even though the complex set includes all the positive details of the simpler set.
It's a category error to import the mathematical notion of probability as a constraint on the intuitive likelihood estimate of the result.
A philosopher who denigrates the more complex set a priori as less probable isn't paying attention to how we solve problems to reach truth (or the closest approximations possible to us, where applicable) in real life. The relevant question is whether there are good reasons for the extra details.
Thus the editing function of Ockham's Razor. The most likely answer isn't merely the simplest answer. The most likely answer is the simplest answer that covers the most relevant facts. Irrelevant details or hypotheses should be razored off the theory.
(Although even then the extra details shouldn't be simply disparaged: they might still be true but irrelevant to the immediate problem at hand. Again, the most pertinent question is whether there is good reason for any extra detail to be included. The famous answer of Laplace to Napoleon would be an example of that: Laplace thought he did not need the hypothesis of God, or more precisely God's intervention in planetary gravity mechanics, to address the problem at hand (the stability of solar orbits); but that didn't mean he didn't believe in God, or thought he had no good reasons at all to believe in God, or even that he thought the hypothesis of God solved no other problems. He lived and died in good communion with the Roman Catholic Church anyway, despite a frivolous remark about Pope Callistus III.)
JRP
It's certainly possible (a priori at least) that a more detailed set which includes the details of the simpler set will also import weaknesses of the simpler set. In fact, assuming there is a weakness of the simpler set due to a detail, and assuming this detail is included in the more complex set, the weakness will remain--assuming that the new details either do not sufficiently address the weakness or actually make the weakness worse!
If those assumptions are true, then (practically by tautology) there is in fact a sense in which the more complex set can be no more likely to be true (up to even being no more possibly true) than the more basic set.
But if only an unlikelihood is thus imported, the more detailed set can only be overall no more likely if the new details themselves carry no greater benefit to the likelihood of the more complex proposition. This is exactly why I spent so much time in the main article talking about the Bayesian procedure of inductive inference: new evidence (or similarly relevant details) may, in the judgment of the thinker, legitimately upgrade the inductive expectation of the hypothesis to being more likely despite the current evidence still retaining a problematic datum (by virtue of the more complex B still being a subset of A).
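The Bayesian point can be sketched with a toy update. All the numbers below are invented for illustration, and the comparison is between the more detailed set and the simpler rival without those details:

```python
# Toy Bayesian update (all numbers invented for illustration).
# Compare the simpler hypothesis *without* the extra details against
# the more detailed hypothesis, given new evidence E which the extra
# details explain well even though one problematic datum remains.
prior_simple = 0.5       # prior for the simpler set (without the details)
prior_complex = 0.5      # prior for the more detailed set
lik_simple = 0.1         # Pr(E | simpler set): fits the evidence poorly
lik_complex = 0.8        # Pr(E | detailed set): fits the evidence well

norm = prior_simple * lik_simple + prior_complex * lik_complex
post_simple = prior_simple * lik_simple / norm
post_complex = prior_complex * lik_complex / norm

print(round(post_simple, 3), round(post_complex, 3))  # 0.111 0.889
assert post_complex > post_simple
```

On those (invented) numbers the more detailed hypothesis comes out far more probable after the update, despite starting from an equal prior and still carrying the problematic datum.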
Now, if the problematic detail is an impossibility, then (depending on what kind of impossibility it is, i.e. if it is such that formally no proposable further details could resolve it) a more detailed doctrinal set or theory will carry the fatal impossibility along with it. Even if the more complex set features value-additive details, the impossibility will trump them (again depending on the kind of impossibility it is).
The proper procedure then would be to import the value-additive details (insofar as they retain their value, which they might depending on how they were grounded before or on how they might be differently grounded now) into a new theory or doctrinal set without the fatal contradiction, so that they will not be lost but will enrich the new set.
JRP