How Should I Be A Sceptic -- belief and reason

[Introductory note from Jason Pratt: the previous entry in this series of posts can be found here. The first entry can be found here.]


Having explained why, as a Christian, I do not hold to what many people (Christian and sceptic) have considered the 'party line' that reason and faith are mutually exclusive, I will now explore this issue from a deeper philosophical perspective.

A Christian (or other religious theist) who accepts a faith/reason disparity will usually do so for religious reasons. His argument that these two aspects must be mutually exclusive (or at least need not have anything to do with each other) will be grounded on positions and presumptions which usually proceed from a devout loyalty to God's status, or from the authority of specifically religious leaders, or from the structure of religious ritual, or some combination thereof.

And a sceptic who accepts a faith/reason disparity might do so only because, as far as he can tell, his opposition has chosen that ground. However, since I obviously do not advocate a faith/reason disparity, this type of sceptic would agree that I can continue with an attempt to build an argument that might arrive at God's existence and characteristics. (Though he might perhaps be able to nix my attempt later on other grounds, of course.)

But some sceptics (and even some people who profess God's existence) accept a faith/reason disparity on different grounds. So, I will need to consider whether (and why) this is a spurious division under any conditions, even apart from specifically religious grounding.

The word 'faith' can hold a number of discrete (yet related) meanings. These meanings often become fused (and confused!), and this makes it hard to have a straight discussion about what faith 'is'.

I will try to disentangle this mare's nest by talking not of 'faith', but of 'belief' and 'trust'. And, since I have not yet even begun to infer the existence and character of Someone for us to put personal trust in, I will be concentrating on the ‘belief’ aspect of ‘faith’ in immediately forthcoming entries.

The event we call 'belief' can be either a person's active acceptance of an inference, or an impression of perceived 'reality' to which future mental events will correspond. The second condition--the 'impression'--would be an 'irrational' belief, because it would have been produced purely as an automatic response to a combination of prior events. [See first comment below for an extended footnote here.]

So, to use an old Robin Williams comedy routine as an example: the chemical known as cocaine could, in interaction with my neurochemistry, release certain electrochemical impulses. And these impulses could be connected by physical association to other reactions currently taking place in my brain, reactions resulting from the sensory impressions produced by my being on a golf course.

As a result, a 'belief' might develop within me to this effect: there is a snake in the hole of the 14th green.

This 'belief' would be a real, objective event happening in my brain, and in my psychology of perception. But it would be an irrational belief (in the stringent and particular sense in which I am using the word ‘irrational’), because it would have been produced purely as an unintended by-product of non-rational biochemical reactions.

Please notice: this does not mean the content of my belief would necessarily be false! There might in fact be a snake in the hole of the 14th green.

But if there was a snake in that hole as an actual fact, it nevertheless would have had virtually no connection to my belief (in this example), except in terms of incidental environmental linkage: the particular 'shape' of my delusion would have depended on my being on the golf course, where such things as 'greens', 'cups', and 'snakes' may be found. [See second comment below for a footnote here.]

As a persistent state or event in my psychology, this belief could itself be a building block, either for more irrational beliefs or for rational beliefs (as far as they go).

For instance, the cocaine, or the chain-reaction it started, might continue by 'using' this new mental state as the basis for a new round of association. ("Someone is out to get me and has put a snake in the hole!") This new belief would, by virtue of its cause(s), be just as irrational as the first one, although no less an objectively real event (considered as itself).

Or, I might actively analyze this first belief-impression and draw inferences from it to new conclusions: for example, "If snake is in hole, then dangerous to be near hole. If dangerous, I could get hurt. If I don't want to get hurt, stay away from hole." As a result of accepting this inference, I could then actively arrive at a new belief: "I should stay away from the hole."

Notice that this inference is valid and true, as far as it goes. It becomes false only if the first qualifier ("if snake is in hole") becomes a presumption ("snake is in hole") and only if that presumption itself happens to be false. (The form of the inference would still be valid, however, even though the conclusion was falsified thanks to false initial data.)
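
To put the point about validity in a more compact form, the inference can be sketched in ordinary propositional notation (with the middle step compressed); this is only a minimal illustration of the distinction, not an addition to the argument itself:

```latex
% A minimal propositional sketch (illustrative only), middle step compressed.
% S = "a snake is in the hole"
% D = "it is dangerous to be near the hole"
% A = "I should stay away from the hole"
\[
  S \rightarrow D, \qquad D \rightarrow A, \qquad S \;\;\vdash\;\; A
\]
% Two applications of modus ponens yield A. The derivation is valid whatever
% the truth-value of S; the conclusion is falsified only if S itself is false.
```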

However, is this second mental state rational or irrational?

If I say my second belief ("I should stay away from the hole") is rational, as opposed to the first 'irrational' belief ("A snake is in that hole"), what can I mean?

Does it depend on whether the second belief matches reality?

No. The snake may or may not be there: I may have made a mistake. But a mistake is not necessarily irrational. If I am adding up one hundred and twenty-seven different figures, and I take a break in the middle to answer the phone, and then start up again at the wrong place, my process is not therefore rendered irrational. This will be so, even if the cornerstone position is a mistaken assertion ("a snake is in the hole").

Remember that the belief whose rationality is in question here is not whether a snake is in the hole, but whether it is dangerous for me to get near the hole. I have already admitted (as far as this example has gone) that the original belief ("a snake is in the hole") is a non-rationally produced chemical by-product of cocaine's interaction with my neurochemistry. Such an event (in the terms I have been describing it) is not an inference, although it can produce psychological states similar to states produced by inferences. [See third comment below for an extended footnote here.] The question is whether my subsequent belief ("I should stay away from the hole") is irrational, and if so under what conditions.

Well then, is it a question of whether the original cornerstone belief is itself irrationally produced--does that necessarily make the subsequent mental event ("Snake, thus dangerous" or "If snake, then dangerous") irrational?

No. The first belief has already been established as a bit of data in my mind; I am using that bit of data (although I may not recognize its non-rational source) as part of the inference.

To understand this, consider the characteristics of that original mental event--the cocaine-induced delusion that there is a snake in the hole. The physical reactions and counterreactions linked to the emergence of the belief are not much different in physical representation from those which would accompany an inference from data.

Here are two examples of inference events: I look in the hole and see something I then judge to be a snake. Or, I hear a report of a snake in the hole from someone, and afterward I judge from other evidence the reliability of this person's report.

Either example leaves behind a persistent physical state in my brain that is not much different from what a cocaine-induced delusion leaves behind. In fact, either example might even leave the exact same result. [Footnote: an observation that will also have an important bearing on a discussion of supernature and evidence much later.] If that is so, however, then what is the qualitative difference?

The difference is my intent, or my initiative.

The cocaine has no intent. Its chemicals are just going about their non-intentional ‘business’, which happened, in conjunction with non-intentional sensory input, to produce a belief-by-association ("a snake is in the hole").

But the second belief ("I should stay away from the hole") is different, because by default I am presuming that 'I' (whatever it means to be 'myself') am initiating an action of inference.

Doubtless, the entire process is not an action I am initiating; there are still non-intended reactions and counterreactions taking place (the sensory input reactions in my head, for instance). Also, some philosophers and scientists would claim that my ability to initiate actions is itself derived entirely from non-intentional automatic reactions and counterreactions. [Footnote: I will discuss this contention much later. My point here is that I agree that at least some non-intentional behaviors are taking place inside my head even when I am thinking ‘rationally’.]

But however it got there, that second belief ("I should stay away from the hole") represents at least one action on my part, not merely reactions. [See fourth comment below for a footnote here.]

Now, as I have already illustrated, a belief's quality of 'rational' or 'irrational' does not necessarily need to involve positive accuracy about the objectively real facts. There may or may not be a snake in that hole. Even if my belief is rational, I might be mistaken. On the other hand, even if my belief is non-rationally produced, I might still be 'correct'; even though only by accident.

However, most people in most circumstances accept and understand that a non-rationally produced belief cannot be trusted very far to deliver an answer worth listening to, in and of itself. It may exhibit many other qualities; but a non-rationally produced belief cannot be trusted with respect to what it 'claims' to be--even if the belief happens to be accurate with respect to facts, or even beneficial.

Such a belief might possibly be trusted on grounds different from what the belief tacitly claims to be, of course. This is an important distinction, and I will discuss it in my continuation next week.


[Next week: so I, my brother Spencer, a snake, and a bunch of women golfers, walk into a bar... er, onto the 14th green... {g}]

Comments

Jason Pratt said…
.......[first extended footnote here]

Common usage of 'irrational', even among specialists, can fluctuate between meaning a willful choice to accept incorrect logic (and/or a willful choice to refuse correct logic), or an accidental acceptance of faulty logic. Furthermore, sometimes it is simply used to mean 'invalid'; and occasionally it is used to mean 'derived from purely automatic behavior'.

In order to avoid the temptation to switch back and forth between such wide usages, and especially in order to avoid the externalistic fallacy (where the analyst’s reasoning becomes mistaken for the rationality of the object being analyzed), I have chosen to use 'irrational' in a very specific sense: as a transition state of a nominally non-automatic entity into virtually full automatic behavior. I am not proposing an entity is rational, non-rational or irrational based on whether or not that entity is applying my own notions (even if those notions are accepted by a majority of thinkers) of what counts as valid 'logic'. (So for instance, I do not argue the question of a computer's rationality based on 'logical' or 'illogical' behavior by the computer.)

This admittedly begs the question somewhat, as to whether an entity can possibly exhibit non-automatic behaviors; but as I will discuss in a later chapter, virtually everyone everywhere admits this happens with respect to their own selves (at the least)--even when they deny the possibility of non-automatic behavior! My discussion here can take place somewhat aside from such issues, though. These chapters represent my own thoughts on these topics in a linked progression; so this chapter can be useful in suggesting preliminary outlines of principles and implications which will need to be developed more fully later as a parallel argument, but without (I think) necessarily accepting any 'dangerous' implications from those principles at this time. The immediate large-scale purpose of this chapter is, after all, only to check whether some kind of necessary disjunction between reasoning and belief per se stands in the way of reasoning to a belief on metaphysical topics, such as an acceptance of theism or atheism.
Jason Pratt said…
.......[second deferred footnote here]

I will discuss primary environmental linkages to such a belief later in this chapter. I am not claiming the 'irrationality' of this belief depends on the lack of primary environmental linkages; this simply happens to be a facet of my first example.
Jason Pratt said…
.......[third extended footnote here]

Admittedly, some scholars (especially atheistic ones) would claim that this event is (or at least could be) an inference. Thus, as a self-critical warning, I must acknowledge begging an important question here, which I will have to address later in my second section. But this will not be a problem for my larger-scale question at this time. That question is 'Can a belief be the result of reasoning?' If the answer is 'yes' (in whatever way we decide we should understand 'reasoning', though for practical purposes I'm working with one particular way here), then obviously there can be no intrinsic opposition between belief and reason.

Still, I'll have to be careful about how I use the material in this chapter--I shouldn't smuggle it, as if already settled, into my 14th chapter for instance.
Jason Pratt said…
.......[fourth extended footnote here]

Some philosophers and scientists, past and present, have attempted to claim that humans do not initiate events at all. I will postpone a technical discussion of this notion until my second section; and content myself for the moment with the observation that even these people will claim they themselves are initiatively responsible for their own positions--when they want their own ideas to be taken seriously, for instance.
Anonymous said…
This is interesting stuff; but I find myself getting distracted by the rather convoluted example of a cocaine-induced belief in golf snakes; might I suggest a more practical, real-world example? I understand that gambling addicts very often start out as winners: a "big win" early in their gambling experience convinces them that they can beat the odds. There is clearly irrationality here, since in the long run the odds cannot be defeated, but the belief rests on solid evidence (a win), which can be reinforced by subsequent wins and "lucky streaks."

Not sure if that quite fits what you're trying to do here, but I find it's a good analogy for irrational belief systems in general, and an interesting observation in any case...
Jason Pratt said…
Herm,

Well, it’s admittedly a colorful example. But I chose it because it does in fact have practical real-world application. Robin Williams used it as a joke (paranoid golfers on cocaine: “Why are all these people following me around!? ... Go look in the hole, I think there’s a snake in the hole!”), but people do suffer delusions as reactions to neurochemistry--sometimes even more colorful than imagining a snake is in a hole. I once, while in a flu-induced fever, quite literally saw a snot-covered rattlesnake jump off a ceiling fan at me, not many years before I wrote the original draft of this material. (It could have even been the preceding summer.)

The example has a potential real-world application in another fashion, too; for there are sceptics who are fond of classifying religious belief as delusion!--in essence, we are misled by mere reaction to faulty environmental stimulus into mis-believing there is a snake in that hole. (I had written almost 600 pages' worth of analysis on Richard Dawkins’ _The Blind Watchmaker_ the year before my original draft of this work. Fun project. {g} A friend of mine nearly killed her poor printer printing it out for her own perusal; her dormmates must have been ready to kill her by the end--this was back in the days of screechy impact printers...)

I didn’t want to presumptively reject the notion that religious belief might be delusion. And I was intrigued about the relationship, and possible distinction, between a reactively induced piece of data and what could be done with that data.

Plus, to be perfectly honest, my brother and I had in fact once left a snake (deceased) in the cup of a green at Pinecrest Country Club, with a foursome of older women playing behind us and about to arrive. And yes, that was a somewhat cruel thing to do (even though we figured they could handle it well, which either they did or they never saw the snake-in-the-hole {g}), which I’ve long since repented of. It was the 6th hole, not the 14th; but because this chapter eventually suggested my line of approach for the second section of the book, beginning in the 14th chapter, I retconned the hole number when building the example. {g}

Thus the genesis (and the exodus, I guess {g}) of the snake in the hole example.

Anyway. Any similar example trying to cover similar levels of detail will end up similarly convoluted, I’m afraid; otherwise I’d be leaving out details I wanted to think about. I might be able to redesign the topical delivery of the same material better, though.


{{I understand that gambling addicts very often start out as winners; a "big win" early in their gambling experience convinces them that they can beat the odds.}}

This is more likely an example of one of the kinds of ‘irrationality’ I mentioned in my first extended footnote: a willful choice to accept incorrect logic (and/or a willful choice to refuse correct logic), or perhaps an accidental acceptance of faulty logic. Categorically, these are not the same kind of ‘irrationality’ I’m talking about in the snake in the hole example. Nor, categorically, are they the same kind of 'irrationality' as each other!

Your example could be categorically re-presented, though, to be similar in concept to the kind of irrationality I’m talking about (and restricting my discussion to) in this and subsequent entries. For the topic of the example to be parallel, we’d have to have a heavy endorphin rush inadvertently rewiring association neurons so that an association of ‘winnability’ occurs in the mind; with either an action then taken by the thinker on that piece of mental data, or else further reactions merely following in automatic mental processing once the data is in place. (Both of which could theoretically occur in the same entity in regard to the same piece of data, of course.) A vodka screwdriver might, in principle (though less probably), inadvertently rewire associative neurons in much the same fashion.

I could follow the examples out in parallel from this, but it would remain pretty much as convoluted as the snake in the hole example, I think.

Now--there is another way in which the casino example could be related to what I’m examining with the snake in the hole example. For as noted in my main post, the data-to-be-used could (in principle) arrive in exactly the same place (with the same neurological result) whether it actively arrived or reactively arrived. Right now, with the snake in the hole, I’m considering the process in view of the latter; but not many entries from now I’ll be considering the process in view of the former kind of situation. At that time I’ll be talking about fideism: a raw assertion to ‘believe’ X religious proposition. This would be analogous to a gambler’s intention to “just believe” that he can win against house odds. It wouldn’t be analogous, though, to an accidental impressive impression (so to speak) that he can win against house odds (this would be categorically similar to my snake in the hole example); and neither of those would be analogous to mistaken inferences from a win and subsequent wins and lucky streaks.

(That kind of ‘irrationality’ would be simple category error on the part of the person’s inference, compounded by ignorance of key data. In other words, it is in fact completely rational, in more than one sense of that word. It only looks irrational to us because it would be irrational for us in our position to be doing the same thing. Calling it “irrational” turns out to be a subtle form of the externalistic fallacy: we’re groundlessly, and by accident, importing our situation into the situation of the object and judging its behaviors thereby. This is a key reason why I explicitly resolved to stick with one clearly definable version of ‘irrationality’ for the duration of my analysis: one that helps me avoid slipping into the externalistic fallacy, if I apply it carefully.)
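
For a concrete sense of the "key data" being ignored, here is a standard textbook illustration (my own assumed example, not drawn from the post above): a single-number bet at American roulette pays 35 to 1 but wins only one time in 38, so the expected net return per dollar wagered is negative.

```latex
% Standard illustrative computation (assumed example, not from the original post):
% expected net return on a $1 single-number bet at American roulette,
% which pays 35-to-1 and wins with probability 1/38.
\[
  \mathbb{E}[\text{net}] \;=\; 35\cdot\tfrac{1}{38} \;-\; 1\cdot\tfrac{37}{38}
  \;=\; -\tfrac{2}{38} \;\approx\; -0.053
\]
% About 5.3 cents lost per dollar on average, however the early "big win"
% turned out; a lucky streak does not change the expectation of the next
% independent bet.
```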

Incidentally, casino gambling features prominently in several of my examples in the book; including in an excerpt (from Section Four) already posted this summer, here. (I cautiously disrecommend plopping right into the middle of a long-progressing argument, though; I offer it only for interest's sake.) Another significant example occurs in Section Three, when I discuss various ways of playing casino blackjack as a way of illustrating some aspects of system/supersystem relationships.


Anyway, thanks for the interesting comment!--I appreciated it very much!

JRP
