
The Limitations of Reason

shunyadragon
Premium Member
Argument should be based on reason, evidence and logic, and change in one's worldview should be based on these, right? No, in reality one's presuppositions hold sway in arguments, not reason, evidence and logic.

The following article from the New Yorker is an interesting take on the problem.

Why Facts Don’t Change Our Minds

In a new book, “The Enigma of Reason” (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at answering this question. Mercier, who works at a French research institute in Lyon, and Sperber, now based at the Central European University, in Budapest, point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context.

Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.

Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.

The students were asked to respond to two studies. One provided data in support of the deterrence argument, and the other provided data that called it into question. Both studies—you guessed it—were made up, and had been designed to present what were, objectively speaking, equally compelling statistics. The students who had originally supported capital punishment rated the pro-deterrence data highly credible and the anti-deterrence data unconvincing; the students who’d originally opposed capital punishment did the reverse. At the end of the experiment, the students were asked once again about their views. Those who’d started out pro-capital punishment were now even more in favor of it; those who’d opposed it were even more hostile.

If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hypersociability.”
 

Polymath257

Think & Care
Staff member
Premium Member
So the real question is how do we ALL guard against confirmation bias?

How many of us try to acknowledge when those we disagree with have good points? How many are willing to change our minds based on the arguments of someone with whom we disagree? How many are so wedded to our views that we see all those with other views as not just honestly mistaken, but as manipulative and deliberately dishonest in their claims?
 

shunyadragon
Premium Member
I believe this is related to Maslow's hierarchy of needs. I do not agree with all of Maslow's conclusions, but the proposed hierarchy is revealing about what motivates humans in their decision-making process. At the levels of Love and Belonging and of Self-Esteem, the primary reward system is governed by a 'Sense of Belonging,' which often precludes any change due to reason, logic and evidence.

The attainment of the level of 'Self Actualization' leads people to make more of their decisions based on independence from a 'Sense of Belonging,' and more on reason, logic and evidence.
 

beenherebeforeagain

Rogue Animist
Premium Member
Back when I learned Rhetoric (in the 1970s), I was taught that effective argumentation in a public setting (where most rhetoric occurs) depends on three modes, which are often used in combination:

Logos: demonstrating the facts under consideration to the other party or audience

Pathos: attempting to appeal to the emotions of the other party or audience

Ethos: the reputation of those involved in the discussion to the other party or audience.

There is a fourth term, Kairos, which means taking advantage of that fleeting moment when an opening appears in the discussion to drive home a point, when the other party or the audience becomes open to change through use of the other three.

None are usually sufficient in and of themselves to "win" by changing the beliefs/attitudes of the other party or the audience, and even if one does all four well, it still may not work.

The whole point of Rhetoric is to undermine your opponent's (or the audience's) pre-existing biases and win them to your position. And of course, you need to be aware of your own biases, and how to defend against them, to prevent the undermining of your position.
 

sun rise

The world is on fire
Premium Member
In the cases where I've seen people change their minds, sometimes a seed is planted which at some point causes a bit of openness. Or sometimes it's someone perceived as "on my side" doing something that would be anathema if done by someone on the other side; this is the "only Nixon could go to China" case, for those old enough to have lived through that era or learned about it in school.
 

shunyadragon
Premium Member

To those who view this as optimistic, I want to clarify the nature of Self-Actualization. It is unlikely that anyone can claim to attain this 'peak' of the pyramid, but nonetheless the more Self-Actualized one is, the more independent one is of the presuppositions that would impede one's ability to change through logic, reasoning and evidence.

I consider Socratic skepticism important in the reasoning process, though not the answer.
 

osgart

Nothing my eye, Something for sure
Nothing has to naturally follow from anything. It is only through objective experience and testing that we can see that things result consistently from other things. But logic seems to serve purposes more than it describes actuality. Logic alone doesn't prove anything. It is only in the realm of the obvious, or what becomes obvious, that one can reason effectively with logic. Everything else is speculation or imagination. You may have the semblance of possibility in one group, and be far out there in another group, but until it's tested in reality no one truly knows.

So maybe all this talk on RF is for entertainment value. A healthy dose of speculation in its proper place doesn't hurt anyone, though. But the human grasp is extremely limited for testing religious questions. So perhaps whatever works best overall is the attitude to take.

Some people think science will fill every gap and absence. And with superintelligences, no doubt humanity will gain ever more knowledge of the material and the physical. But there will always be religion for what can never be explained whatsoever. And matters of belief happen or they don't; they can't be forced.
 

David T

Well-Known Member
Premium Member
Yes... Now I have to take aim at your opening statement.
"Argument should be based on reason, evidence and logic, and change in ones world view should be based on this, right?"

I personally have massive experience that the above only exists in extremely narrow, clearly defined, specific ways, and that it probably makes up a tiny fraction of one's life. I have been married and have kids.
 

shunyadragon
Premium Member
Yes... Now I have to take aim at your opening statement.
"Argument should be based on reason, evidence and logic, and change in ones world view should be based on this, right?"

I personally have massive experience that the above only exists in extremely narrow, clearly defined, specific ways, and that it probably makes up a tiny fraction of one's life. I have been married and have kids.

This is not clear, and I have difficulty seeing where it is relevant.
 

Nakosis

Non-Binary Physicalist
Premium Member


People, in my experience, don't like to admit they don't know something.

The problem with these experiments is that the subjects are given "knowledge" — false knowledge — then told later that the knowledge they based their decision on was false. OK, but you're taking away knowingness and leaving a void, offering nothing to replace that knowingness. Consciously or subconsciously, the mind is not going to empty itself of that knowingness just because it's being told it's false.

You have to replace it with actual knowledge. Now, after being shown how easy it was to fool them, they are going to be hypercritical of any new knowledge. They are no longer going to accept anything you tell them. That ship has sailed once they found you fooled them the first time.

They accepted the information when their mind was open. Now that you've put them on the defensive, their mind is never going to be as open as it once was to new information.
 
I've posted this several times before, so apologies to those who have read it, but it is another useful angle on the issue

Every human—not excepting scientists—bears the whole stamp of the human condition. This includes evolved neural programs specialized for navigating the world of coalitions—teams, not groups... These programs enable us and induce us to form, maintain, join, support, recognize, defend, defect from, factionalize, exploit, resist, subordinate, distrust, dislike, oppose, and attack coalitions...

Why do we see the world this way? Most species do not and cannot. Even those that have linear hierarchies do not. Among elephant seals, for example, an alpha can reproductively exclude other males, even though beta and gamma are physically capable of beating alpha—if only they could cognitively coordinate. The fitness payoff is enormous for solving the thorny array of cognitive and motivational computational problems inherent in acting in groups: Two can beat one, three can beat two, and so on, propelling an arms race of numbers, effective mobilization, coordination, and cohesion.

Ancestrally, evolving the neural code to crack these problems supercharged the ability to successfully compete for access to reproductively limiting resources. Fatefully, we are descended solely from those better equipped with coalitional instincts. In this new world, power shifted from solitary alphas to the effectively coordinated down-alphabet, giving rise to a new, larger landscape of political threat and opportunity: rival groups or factions expanding at your expense or shrinking as a result of your dominance.

... You are a member of a coalition only if someone (such as you) interprets you as being one, and you are not if no one does. We project coalitions onto everything, even where they have no place, such as in science. We are identity-crazed... To question or disagree with coalitional precepts, even for rational reasons, makes one a bad and immoral coalition member—at risk of losing job offers, one's friends, and one's cherished group identity. This freezes belief revision.


[in light of this flaw] No one is behaving either ethically or scientifically who does not make the best case possible for rival theories with which one disagrees.
 
Ethos: the reputation of those involved in the discussion to the other party or audience.

I'm not sure if this is just me being pedantic, as it could be considered to mean the same thing (if so, sorry :) ), but IMO it's better expressed as credibility rather than reputation. Reputation forms part of credibility, but credibility is also a dynamic process of demonstrating positive characteristics (in the eyes of the audience).

So your ethos can be boosted by doing things like aligning your interests with the audience, demonstrating you are part of the same in-group, promoting credentials and personal/institutional alignments, seeming knowledgeable and fair-minded, and, probably most importantly, being likeable (it's very hard to persuade someone who thinks you are a ****).

It's by far the most important of the classical methods of persuasion.
 
So the real question is how do we ALL guard against confirmation bias?

I wonder how much it can be reduced? It certainly can't be eliminated, and probably not even close.

Removing emotion is one of the important factors, but that can be hard to do on certain topics which automatically stimulate an emotional response. On issues that relate to identity and social status we (subconsciously?) want to avoid revising these at all cost. Cognitive biases can distort our perception of reality, meaning we have to play with the cards we are dealt.

Trying to disprove your own arguments would help, but is time consuming and excessively effortful on most issues of limited importance/interest. Trying to think like someone else is not always something we are cognitively capable of either.

Avoiding low quality information is important. But most people are unwilling to forego their mass media consumption and daily diet of news/current affairs. Also there is no way to judge high quality information outside of careful, systematic analysis which is again time consuming and effortful (and prone to confirmation bias).

Unfortunately we cannot transcend our genetic limitations, even though we can, to some extent, limit their negatives.
 

icehorse

......unaffiliated...... anti-dogmatist
Premium Member
Back to the OP. I'd also suggest reading Kahneman's "Thinking, Fast and Slow". As you suggest, we all struggle to overcome confirmation bias. It's arguable that Kahneman is the world's leading authority on bias, and he confesses that he struggles with it himself.

With all that said, I think that "reasoning" is one of our best tools to try to overcome bias.
 

David T

Well-Known Member
Premium Member
Yes, but not relevant to the thread.
Indeed it is. The reality is that at the most fundamental level (family life) logic and reason have zero to do with much of anything. Logic and reason as they are formed are totally related to, and a full expression of, sociability. Science itself as it functions today is really just mutual narrative agreement/disagreement. It's all inculturated.
 

sayak83

Veteran Member
Staff member
Premium Member
Where reason fails, compassion, care and love often succeed. Show genuine care and warmth for the people you disagree with on issues, and many gulfs can be bridged.
 