
Conscience, Our Only Moral Authority

joe1776

Well-Known Member
Or alternatively, your ego causes the cognitive dissonance that forces you to dismiss counterarguments out of hand because they threaten the emotional satisfaction you receive from considering yourself to have solved several of humanity's great problems in one fell swoop. Just like how a fundamentalist can't see the obvious flaws in their logic even though other people can.
As a person of 'above average' intelligence, you have to accept this is at least a possibility, yes? You agree no human is immune to this cognitive phenomenon?
Yes, of course, I'll allow the possibility but a possibility so small that it's mathematically insignificant.

You also have to accept that, in general, any person who has come up with a grand theory regarding issues of great complexity and sees absolutely no logical arguments against their position is far more likely suffering from epistemic arrogance than to be correct. Maybe you are the outlier, but all the others thought they were the outlier too. Agreed?
Oh no, now you're reaching. You're fabricating an imaginary statistic out of nothing.

You will also probably agree that, in general, someone who recognises the logical challenges to their position is far more likely to be correct than someone who thinks there are none, and is certainly more likely to be self-critical and intellectually honest. Agreed?
I don't know about that statistic. However, to not recognize a logical counter-argument would be a dumb thing to do because 1) It would make one appear dumb, stubborn or both. 2) Losing a battle doesn't necessarily mean the war is lost.

You have dismissed the following 'logical challenges' out of hand. It is possible to believe your arguments are stronger, but to treat these as not even worth considering is very telling to me.
I'm interested in your arguments. I'm not interested in your opinions on our arguments.

You treat the brain as a non-complex system that can be understood reductively like a man-made machine (this contradicts modern scientific understanding of the brain)
When we last visited this argument, you asked me to agree to a definition of a complex system. I did. Then you offered an argument that had nothing at all to do with complex systems. Yet you find it "very telling" that I didn't find your argument on complex systems worth considering?

You claim you can make predictions with certainty about the long term future based on incomplete information (this is generally considered to be impossible)
I don't claim certainty about anything. I claim likelihood and I give reasons which present an argument. You're obviously frustrated by them because you can't counter my arguments effectively.

You have never provided any scientific evidence that your 'universal conscience' exists, just cherry-picked quotes from scientists who explicitly disagree with your overall conclusions (hence you are making a lot of subjective assumptions and inferences which greatly increases the chance of you being wrong).
The scientific evidence isn't conclusive yet, but readers of this post should be aware that research by Harvard psychologists aimed at proving a universal moral sense has been underway for about 15 years, and the results so far are on target.

You functionally deny neurodiversity exists (if it does, you cannot assume a universal conscience)
Why don't you post an argument supporting this claim? In fact, why don't you post an argument on anything? I've never seen one.

If any one of the following is true: the brain is a complex system, you have incomplete information about the brain and human society, or neurodiversity exists, then your degree of certainty cannot be warranted. If all three are true, the degree of contingency increases exponentially and your confidence level should decrease in line with it.
Here's a challenge for you: I'll bet you can't give me one discovery unique to the study of complex systems. Something we didn't already know. If you can, I promise to take your argument seriously.

No need to address them as I know you won't/can't. They are just for the "unbiased observers" that definitely exist and definitely waste their time reading our silly arguments.
Why would I not be able to address your comments? It sounds like you thought of them as unassailable.
 
Here's a challenge for you: I'll bet you can't give me one discovery unique to the study of complex systems. Something we didn't already know. If you can, I promise to take your argument seriously.

Why would you think it difficult to do this?

It is used in medicine, neuroscience, epidemiology, AI, network science, engineering, economics, logistics, weather, biology, evolution, etc.

There are literally thousands of scientific papers presenting novel findings via the application of complex systems theory.

Here is a paper on epidemiology analysing the spread of diseases via transportation networks:

http://www.uvm.edu/pdodds/research/papers/others/2013/brockmann2013a.pdf

One on air traffic control (read the conclusion if you want to see the specific points that can only be understood via a complex systems approach)

https://westminsterresearch.westmin...4-00058 - revised submission - full paper.pdf

One on animal ecology that refers to literally dozens of specific findings regarding how animals adapt to their ecosystems:

Insights from the study of complex systems for the ecology and evolution of animal populations

Each paper references dozens of further papers that present novel findings. If they did not present novel findings, they wouldn't get published in the first place.
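
If you want a concrete flavour of the kind of system-level result this field produces, here is a minimal sketch (my own toy example, not taken from any of the linked papers; it assumes the networkx library): hub-dominated networks tend to survive random node failures but fall apart when their best-connected nodes are removed, which is a property of the network's structure rather than of any single node, and one with direct consequences for vaccination strategies and infrastructure design.

```python
# Toy sketch (not from the linked papers; assumes the networkx library):
# hub-dominated networks survive random node failures but degrade sharply
# when their hubs are removed, a structural, system-level property.
import random
import networkx as nx

def giant_component_after_removal(graph, nodes_to_remove):
    """Size of the largest connected component after deleting the given nodes."""
    g = graph.copy()
    g.remove_nodes_from(nodes_to_remove)
    return len(max(nx.connected_components(g), key=len))

rng = random.Random(0)
g = nx.barabasi_albert_graph(1000, 2, seed=1)   # hub-dominated contact network
k = 50                                          # remove 5% of the nodes

random_nodes = rng.sample(list(g.nodes), k)
hubs = sorted(g.nodes, key=lambda n: g.degree[n], reverse=True)[:k]

print("giant component after random removal:  ",
      giant_component_after_removal(g, random_nodes))
print("giant component after targeted removal:",
      giant_component_after_removal(g, hubs))
# Removing the 50 best-connected nodes degrades connectivity far more than
# removing 50 nodes chosen at random.
```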


Is this sufficient, or will you say they don't count as you have to actually read a few sentences?
 
I don't claim certainty about anything.

You do it frequently, unless your thread "Global Harmony is Inevitable", in which you frequently state that global harmony is inevitable, wasn't in fact about global harmony being inevitable.

Global Harmony is Inevitable

Why don't you post an argument supporting this claim? In fact, why don't you post an argument on anything? I've never seen one.

I'll cover all of the above points: neurodiversity, complex systems, why it's foolish and hubristic to express high degrees of certainty based on incomplete information in complex domains, etc. with the same example if it helps you understand the connection.

If neurodiversity exists (which it objectively does), then how can you be highly confident that neurodiversity doesn't impact moral intuition in different ways among neurodiverse individuals?

Morality is an emergent property of the human brain that depends on no single cognitive function, but on the complex interaction between numerous competing cognitive functions.

About a decade ago, when neuroscientists first began to investigate the moral brain, the question was this: Where in the brain is morality? A key assumption was that uniquely moral cognition could in fact be discovered in the brain, distinct from other kinds of cognition... So far, the uniquely moral brain has appeared nowhere—perhaps because it does not exist...

This research revealed morality to rely on multiple domain-general processes, which are housed in many parts of the brain. Therefore, one kind of answer to the original question posed by moral neuroscientists is “everywhere”—morality is virtually everywhere in the brain...

The moral brain can be found in the emotional brain and the social brain, as we have seen, and, undoubtedly many other brain regions and brain systems that support cognitive capacities such as valuation...

Moreover, moral judgments across different contexts, cultures, and individuals are certain to depend differently on these cognitive inputs [plenty of studies have shown people from different cultural backgrounds perceive the social world in different ways - for example]


https://www.researchgate.net/profil...-is-morality-Everywhere-and-maybe-nowhere.pdf



The first step toward understanding the neural basis of human morality is acknowledging that it is embodied in, and thus operates based on the principles of, the functional architecture of the human brain, a complex system consisting of a wide array of neural circuits that operate as functional modules and are selectively engaged by environmental demands..

The emerging insights from this research point toward a model in which automatic processing in parallel neural circuits, many of which are associated with social emotions, evaluate the actions and intentions of others. Through various mechanisms of competition, only a subset of these circuits ultimately causes a decision or an action...

In a decentralized architecture, there is no central authority to make decisions or select actions. Instead, competition for limited resources resolves the conflict, resulting in amplified activity within the dominant subset of modules that then biases decision-making or action selection...

when easy personal dilemmas induce a strong emotional response, emotional activity outcompetes cognitive appraisals. Or representations of actions may be outcompeted by representations of intentions, which in turn dictate moral valence. In these cases, the limited resources that constrain amplification of activity to a subset of neural circuits (a phenomenon that is not well understood) may be sufficient to govern competition. As competition ramps up, other circuits, such as frontal regions involved in conflict detection and cognitive control [20], may be activated to inhibit activity in certain circuits, ultimately influencing a decision or impacting an action by biasing competition

In the split-brain experiment described previously, the left hemisphere made judgments solely based on the outcome of an action and not based on mental state, perhaps due to a loss of input from important belief attribution [15 ] and intention-modeling [18] networks in the right hemisphere... Similarly, by hypnotizing healthy subjects to feel disgust at neutral words, Wheatley and Haidt [24] were able to manipulate the subjects’ moral judgments [note: feeling disgust towards stimuli is a learned behaviour with a subjective component]...


Once captured as cultural norms or laws, these ideas feedback through development and learning mechanisms to fine-tune the workings of the underlying neural circuitry [25]. Indeed, recent findings indicate that cultural influences have a substantial effect on cognitive processes [26,27], including moral processing (Hauser, unpublished data)...

differences in opinion on moral topics within a given society may be based on the sensitivities of specific neural circuits that process various moral dimensions


The functional brain architecture of human morality
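
The "competition among parallel circuits for limited resources" mechanism described in those passages can be illustrated with a deliberately crude toy model (my own sketch with invented module names and numbers, not the authors' model; standard library only): several modules accumulate noisy activation out of a shared, capped pool, and whichever module crosses a threshold first determines the judgement.

```python
# A deliberately crude toy model of the "competition for limited resources"
# idea quoted above. Module names and all numbers are invented for illustration;
# this is not the authors' model. Standard library only.
import random

def compete(module_drives, threshold=10.0, resource_cap=1.0, steps=2000, rng=None):
    """Return the first module whose accumulated activity crosses `threshold`."""
    rng = rng or random.Random(0)
    activity = dict(module_drives)        # initial drive doubles as starting activity
    leader = max(activity, key=activity.get)
    for _ in range(steps):
        total = sum(activity.values())
        for name in activity:
            # Shared, limited resource: each module's gain is its share of a fixed
            # cap, so stronger circuits pull further ahead in absolute terms.
            activity[name] += resource_cap * activity[name] / total + rng.gauss(0.0, 0.05)
        leader = max(activity, key=activity.get)
        if activity[leader] >= threshold:
            break
    return leader

# An emotionally charged personal dilemma: the emotional circuit has the
# strongest initial drive and tends to win the competition.
print(compete({"emotional": 0.9, "cognitive_control": 0.5, "intention_modelling": 0.4}))
# A detached, unfamiliar hypothetical: cognitive appraisal tends to win instead.
print(compete({"emotional": 0.3, "cognitive_control": 0.8, "intention_modelling": 0.5}))
```

The only point of the sketch is that in such an architecture the "winner" depends on the situation (the initial drives) and on noise, which is the contingency summarised in point d) below.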


So:

a) morality is an emergent property of the complex interaction of multiple parts of the brain
b) these parts of the brain are impacted by neurodiversity and differences in neural development
c) they are also impacted by culture and experience
d) which of the competing systems 'wins' is situationally contingent
e) these differences produce different moral judgements

How can you justify, beyond reasonable doubt, that the moral judgements of conscience are not subjective, given the wealth of scientific evidence showing that they are indeed subjective?
 

joe1776

Well-Known Member
So:
a) morality is an emergent property of the complex interaction of multiple parts of the brain
b) these parts of the brain are impacted by neurodiversity and differences in neural development
c) they are also impacted by culture and experience
d) which of the competing systems 'wins' is situationally contingent
e) these differences produce different moral judgements

How can you justify, beyond reasonable doubt, that the moral judgements of conscience are not subjective, given the wealth of scientific evidence showing that they are indeed subjective?
I can see now where you've gone wrong with your counter-argument that the brain is a complex system. I'll start by re-posting something I posted for you earlier:

Factor 1: the intuitive conscience
Factor 2: the facts of the situation to be judged
Factor 3: biases

When we are considering a moral question, our brains are involved with all three of these factors. Depending on the situation, several brain functions might be in use. If we are personally involved, or we know the people who are, the emotional parts of our brains will cause biases.

The reasoning function of our brains is always involved especially in sorting out the facts of the situation. It also contributes to cultural biases since those biases are learned with experience. If we foolishly consider the guidance of criminal laws, the moral teachings of religion, or the work of ethical philosophers, the memory functions of our brains will be in play.

So, yes, under fMRI, the brain considering a moral question would appear to be complex. But you are jumping to the conclusion that the complexity rules out simple causes. It doesn't. It only makes them difficult to find. It requires the patient examination of many moral situations.

For example, if a man kills another man in a clear case of self-defense, we expect the killing to be found justifiable in every culture. But if a woman kills her husband in a clear case of self-defense and is found guilty of murder, we can look for other evidence to support our suspicion that a cultural bias against women is in play in this miscarriage of justice.
 
Factor 1: the intuitive conscience

You have never presented as much as a shred of evidence that this even exists as a specific neurological function independently of other neurological functions that can thus be understood via methodological reductionism.

I just provided numerous scientific sources that reject your unsupported assertion.

You can only subjectively experience the 'feeling' of moral intuition, which may be an illusion that is an emergent product of numerous competing neurological functions.

What is your argument, that this is false:

So far, the uniquely moral brain has appeared nowhere—perhaps because it does not exist...

The moral brain can be found in the emotional brain and the social brain, as we have seen, and, undoubtedly many other brain regions and brain systems that support cognitive capacities such as valuation...

Moreover, moral judgments across different contexts, cultures, and individuals are certain to depend differently on these cognitive inputs [plenty of studies have shown people from different cultural backgrounds perceive the social world in different ways - for example]

Factor 2: the facts of the situation to be judged
Factor 3: biases

And again, you have never offered a shred of evidence that these exist independently of whatever process causes moral intuition.

Why should anyone believe we have a "moral intuition" part of the brain/neural circuit and a "cultural bias" part of the brain/neural circuit that have independent existences?

So, yes, under fMRI, the brain considering a moral question would appear to be complex. But you are jumping to the conclusion that the complexity rules out simple causes. It doesn't. It only makes them difficult to find. It requires the patient examination of many moral situations.

If you go back and read the quotes, or even better the whole linked article (it's only 4 pages), you will see that this is a strawman.

The article is not simply talking about fMRI, it contains numerous references to specific experiments that show how the differing parts of the brain impact moral decisions.

In the split-brain experiment described previously, the left hemisphere made judgments solely based on the outcome of an action and not based on mental state, perhaps due to a loss of input from important belief attribution [15 ] and intention-modeling [18] networks in the right hemisphere... Similarly, by hypnotizing healthy subjects to feel disgust at neutral words, Wheatley and Haidt [24] were able to manipulate the subjects’ moral judgments [note: feeling disgust towards stimuli is a learned behaviour with a subjective component]...

Once captured as cultural norms or laws, these ideas feedback through development and learning mechanisms to fine-tune the workings of the underlying neural circuitry [25]. Indeed, recent findings indicate that cultural influences have a substantial effect on cognitive processes [26,27], including moral processing (Hauser, unpublished data)...

differences in opinion on moral topics within a given society may be based on the sensitivities of specific neural circuits that process various moral dimensions


Have you got an argument against these findings, or do you simply assert your own opinion as fact despite the evidence against it?

It requires the patient examination of many moral situations.

So far, not a single scientist who has done this has supported your assertions. Funny that.
 

joe1776

Well-Known Member
You have never presented as much as a shred of evidence that this even exists as a specific neurological function independently of other neurological functions that can thus be understood via methodological reductionism.

I've given you a brief logical argument in the OP which you've never countered. Here it is again. Would you like to try making a counter-argument?

Learning begins with the senses: An effect (as in cause-and-effect) must first be sensed. It must be seen, tasted, smelled, heard or felt before we can use reasoning to learn from it. Science, humanity's best attempt at learning, must begin with an observed effect.

Since the difference between right and wrong and fair and unfair can't be seen, tasted, smelled or heard, it must be 'felt.' We feel an unpleasant sensation produced by the pain function of our brains which can be interpreted as 'wrong' or 'unfair' depending on the situation. If we don't feel that signal, we can assume the act was justified or fair. We call this faculty 'conscience.'


As for evidence, I know you're aware of the research that has found that our moral judgments are the product of intuition. You're also aware of Harvard's Moral Sense Test which was created on the hypothesis that a universal moral sense exists. So, coming from you of all people, the charge that I've "never presented as much as a shred of evidence that this even exists" is comical.

So far, not a single scientist who has done this has supported your assertions. Funny that.
So, you don't want to count the four Harvard scientists who began the Harvard Moral Test expecting to find a universal moral sense?
 

joe1776

Well-Known Member
....In the split-brain experiment described previously, the left hemisphere made judgments solely based on the outcome of an action and not based on mental state, perhaps due to a loss of input from important belief attribution [15 ] and intention-modeling [18] networks in the right hemisphere... Similarly, by hypnotizing healthy subjects to feel disgust at neutral words, Wheatley and Haidt [24] were able to manipulate the subjects’ moral judgments [note: feeling disgust towards stimuli is a learned behaviour with a subjective component]...

Once captured as cultural norms or laws, these ideas feedback through development and learning mechanisms to fine-tune the workings of the underlying neural circuitry [25]. Indeed, recent findings indicate that cultural influences have a substantial effect on cognitive processes [26,27], including moral processing (Hauser, unpublished data)...

differences in opinion on moral topics within a given society may be based on the sensitivities of specific neural circuits that process various moral dimensions


Have you got an argument against these findings, or do you simply assert your own opinion as fact despite the evidence against it?
I have no problem whatsoever with those findings. There are many ways that biases can be created, you've listed a few of them.

The problem is that you confuse these findings with the moral intuition of conscience. You don't recognize them simply as biases.

Disgust, for example, is an interesting phenomenon. It's sometimes confused with the brain's conscience-driven signal of "wrong." It explains why some people think homosexuality is immoral. The thought of the homosexual sex act disgusts them.
 
I have no argument whatsoever against those findings.

I know.

Next step is to realise why that negates your entire theory ;)

Your problem is that you confuse these findings with the moral intuition of conscience.

No, those sources that you admit you cannot refute explicitly argue there is no singular cognitive function that equates to a "moral intuition of conscience".

I've given you a brief logical argument in the OP which you've never countered. Here it is again. Would you like to try making a counter-argument?

Other than having explained it about 25 times, the post you are replying to, which is supported by multiple scientific journals, is a refutation of this exact point :facepalm:

What we subjectively experience as "conscience" is the emergent property of numerous competing cognitive functions that deal with multiple aspects of cognition beyond "morality". We cannot reductively separate "moral intuition" from these, as the articles clearly demonstrate.

As such we are prone to subjective bias in all intuitive judgements, as the articles clearly demonstrate.


So, you don't want to count the four Harvard scientists who began the Harvard Moral Test expecting to find a universal moral sense?

They don't agree with you, so no of course not.

Not only that, there are not even any results from the MST as they say on their own website:

The MST is still at the data collection phase, but we hope to be analyzing and publishing our results soon. Since knowledge of our hypothesis and preliminary results could bias test-takers' answers we cannot release data from each phase of our research until that phase is completed and the data prepared for publication.

 

joe1776

Well-Known Member
No, those sources that you admit you cannot refute explicitly argue there is no singular cognitive function that equates to a "moral intuition of conscience".
They do not "explicitly argue" anything of the kind. If you think they do, pull the quotes and post them.

Other than having explained it about 25 times, the post you are replying to, which is supported by multiple scientific journals, is a refutation of this exact point. What we subjectively experience as "conscience" is the emergent property of numerous competing cognitive functions that deal with multiple aspects of cognition beyond "morality". We cannot reductively separate "moral intuition" from these, as the articles clearly demonstrate.
I've explained this to you before. I'll try once again.

Many of the studies of morality, including those you attached, are based on Josh Greene's study on moral dilemmas. Moral dilemmas are not only fairly rare but they are exceptions to the general rule that the final judgments are intuitive.

The typical moral dilemma offers two optional actions. Both immediately and intuitively feel wrong.

But conscience cannot weigh the consequences of each option to determine which causes the least harm. So, that job falls to the not-completely-trustworthy reasoning function.

So, when researchers employ moral dilemmas, they think they are testing moral intuition when, in fact, they are testing the exceptional case: the product of reason, with its hit-or-miss results.

Josh Greene noted that when subjects considered moral dilemmas while hooked up to fMRI two parts of their brain lit up. Yet he still didn't figure out that both intuition and reason were involved.


The MST is still at the data collection phase, but we hope to be analyzing and publishing our results soon. Since knowledge of our hypothesis and preliminary results could bias test-takers' answers we cannot release data from each phase of our research until that phase is completed and the data prepared for publication.

Georgetown University published the MST's preliminary positive findings a few years ago which I linked to this forum. I recall your predictable negative critique of them.




I'll ask once more. I've given you a brief logical argument in the OP which you've never countered. Here it is again. Would you like to try making a counter-argument?

Learning begins with the senses: An effect (as in cause-and-effect) must first be sensed. It must be seen, tasted, smelled, heard or felt before we can use reasoning to learn from it. Science, humanity's best attempt at learning, must begin with an observed effect.

Since the difference between right and wrong and fair and unfair can't be seen, tasted, smelled or heard, it must be 'felt.' We feel an unpleasant sensation produced by the pain function of our brains which can be interpreted as 'wrong' or 'unfair' depending on the situation. If we don't feel that signal, we can assume the act was justified or fair. We call this faculty 'conscience.'
 
They do not "explicitly argue" anything of the kind. If you think they do, pull the quotes and post them.

The first linked article is entirely on this topic :rolleyes:

3rd time lucky... [and no, this is not based simply on 'moral dilemmas that force people to reason' as you keep falsely claiming].

About a decade ago, when neuroscientists first began to investigate the moral brain, the question was this: Where in the brain is morality? A key assumption was that uniquely moral cognition could in fact be discovered in the brain, distinct from other kinds of cognition... So far, the uniquely moral brain has appeared nowhere—perhaps because it does not exist...

This research revealed morality to rely on multiple domain-general processes, which are housed in many parts of the brain. Therefore, one kind of answer to the original question posed by moral neuroscientists is “everywhere”—morality is virtually everywhere in the brain


The moral brain can be found in the emotional brain and the social brain, as we have seen, and, undoubtedly many other brain regions and brain systems that support cognitive capacities such as valuation...

Moreover, moral judgments across different contexts, cultures, and individuals are certain to depend differently on these cognitive inputs

https://www.researchgate.net/public...owhere/link/53f359020cf2da8797446b9c/download



The typical moral dilemma offers two optional actions. Both immediately and intuitively feel wrong.

But conscience cannot weigh the consequences of each option to determine which causes the least harm. So, that job falls to the not-completely-trustworthy reasoning function.

Your go-to tactic with any science that shows you to be wrong is simply to claim it's invalid as it uses moral dilemmas.

The ironic thing is that, while my studies are not simply based on 'moral dilemmas', the Moral Sense Test is full of them, yet you consider it beyond reproach.

Georgetown University published the MST's preliminary positive findings a few years ago which I linked to this forum. I recall your predictable negative critique of them.

You mean this one that doesn't agree with you either? https://faculty.georgetown.edu/lbh24/BanerjeeEtAl.pdf

In summary:
a) There are statistically significant differences in moral judgements between groups: male/female, different cultures, etc. Their argument is there may be a kind of 'universal moral grammar' that anchors judgements so that subjective variables, while they clearly exist, make less of an impact than we might expect.
b) The authors themselves acknowledge that the MST data set may be biased by the fact that it overwhelmingly relies on WEIRD participants who self-select (Western, educated, industrialised, rich and democratic)
c) The authors note cultural biases may exist outside this population, and support this with an example of differences in judgement in a Mayan population

As usual, you cherry-pick parts of the study, ignore the caveats of the actual authors, then make naive, overconfident claims about what the study shows that go way beyond the far more nuanced views of the authors.

Also, if you do the MST, you will see that the questions tend to be exactly the kind you dismiss as useless: moral dilemmas. They are abstract moral dilemmas which have no emotional component and seem focused on rational, utilitarian or value-based judgement.

Any real world moral judgement, even one in which we are not personally invested (like a murder trial), would contain an emotional component that could lead to significantly different responses (as my 2nd linked paper notes).

I'll ask once more. I've given you a brief logical argument in the OP which you've never countered. Here it is again. Would you like to try making a counter-argument?

I'll make the same point.

How can you trust your subjective experience that whatever you 'feel' is in fact 'moral intuition' rather than an emergent property of multiple social, emotional and perceptive neurological functions subject to different biases?
 

joe1776

Well-Known Member
The first linked article is entirely on this topic :rolleyes:

3rd time lucky... [and no, this is not based simply on 'moral dilemmas that force people to reason' as you keep falsely claiming].

About a decade ago, when neuroscientists first began to investigate the moral brain, the question was this: Where in the brain is morality? A key assumption was that uniquely moral cognition could in fact be discovered in the brain, distinct from other kinds of cognition... So far, the uniquely moral brain has appeared nowhere—perhaps because it does not exist...

This research revealed morality to rely on multiple domain-general processes, which are housed in many parts of the brain. Therefore, one kind of answer to the original question posed by moral neuroscientists is “everywhere”—morality is virtually everywhere in the brain


The moral brain can be found in the emotional brain and the social brain, as we have seen, and, undoubtedly many other brain regions and brain systems that support cognitive capacities such as valuation...
Since you're prone to exaggeration, I should have guessed that an "explicit argument" in your words would deflate to mere speculation by one researcher: "perhaps because it doesn't exist."

With the crude tools these "scientists" have available, how could they possibly isolate the conscience from the emotions, the biases, the reasoning, the memories, etc. It's like standing outside the open door of a junk-filled garage at night with a flashlight and speculating that "We can't see the carpenter's level. Maybe it's not here."
 
Since you're prone to exaggeration, I should have guessed that an "explicit argument" in your words would deflate to mere speculation by one researcher: "perhaps because it doesn't exist."

ALL of the scientists who have investigated it (not 'one researcher') have found no evidence of it existing.

They have found numerous, clear examples that suggest it doesn't exist.

You can read about these in the 2 linked papers, or if that is too onerous for you, simply read the quotes I provided.

With the crude tools these "scientists" have available, how could they possibly isolate the conscience from the emotions, the biases, the reasoning, the memories, etc. It's like standing outside the open door of a junk-filled garage at night with a flashlight and speculating that "We can't see the carpenter's level. Maybe it's not here."

Given the fact that there is no evidence it exists, and there is evidence that it doesn't, why are you so certain it exists? Even if we accept it may exist, why should we deem it probable?

Unless we were created by a god and designed to be moral agents rather than survival machines, that is exactly what you would expect.

You feel some kind of intuition, but both evidence and the evolutionary logic suggest this is not something that is distinct from other parts of cognition.

Why should anybody believe your non-expert, subjective experience is more authoritative than all of the scientific research that exists on the topic?
 

joe1776

Well-Known Member
Your go-to tactic with any science that shows you to be wrong is simply to claim it's invalid as it uses moral dilemmas. The ironic thing is that, while my studies are not simply based on 'moral dilemmas', the Moral Sense Test is full of them, yet you consider it beyond reproach.
Your memory is flawed. I have criticized the MST on that point with every first mention of it. It's remarkable that the results are positive despite that flaw. The results would be far better if they didn't rely on the reasoning mind to decide the lesser harm.


You mean this one that doesn't agree with you either? https://faculty.georgetown.edu/lbh24/BanerjeeEtAl.pdf

Yes, I do mean that paper which is entitled: Intuitive moral judgments are robust across variation in gender, education, politics, and religion: A large-scale web-based study

The paper refers to Harvard's 15 year study which expects to find a universal moral sense.

As usual, you cherry-pick parts of the study, ignore the caveats of the actual authors, then make naive, overconfident claims about what the study shows that go way beyond the far more nuanced views of the authors.

Most scientific papers include caveats that don't seriously compromise the findings. In your usual debating strategy when confronted with evidence you don't like, you exaggerate the importance of the caveats hoping that weak-minded readers will dismiss the findings.
 

joe1776

Well-Known Member
I'll make the same point.

How can you trust your subjective experience that whatever you 'feel' is in fact 'moral intuition' rather than an emergent property of multiple social, emotional and perceptive neurological functions subject to different biases?
We trust our senses by verifying them with others. Did you see what I saw? Did you hear what I heard?

Biases can usually be explained as outliers. In the sentencing of a convicted rapist, if we want justice we don't give the job to the mother of the rapist or the father of his victim.

But the point you duck is that if we did not feel the judgments of conscience, our rational minds would know nothing of morality since "all knowledge begins with the senses." You can't explain how we write our criminal laws or construct ethical systems if conscience didn't exist.
 

joe1776

Well-Known Member
ALL of the scientists who have investigated it (not 'one researcher') have found no evidence of it existing.
With the crude tools they have to examine the brain, it surprises me that they can find much of anything.

They have found numerous, clear examples that suggest it doesn't exist.

o_O That's a dumb claim on its face. Evidence of non-existence is NOT logically possible.
 
Your memory is flawed. I have criticized the MST on that point with every first mention of it. It's remarkable that the results are positive despite that flaw. The results would be far better if they didn't rely on the reasoning mind to decide the lesser harm.

Why bother to claim it supports your personal opinion then?

It uses the wrong kind of moral judgement, has no officially published results, and the only results that use the partial dataset do not support what you claim.

Most scientific papers include caveats that don't seriously compromise the findings. In your usual debating strategy when confronted with evidence you don't like, you exaggerate the importance of the caveats hoping that weak-minded readers will dismiss the findings.

I look at what the sources say, whereas you cherry pick the odd sentence that seems to support you and ignore everything that contradicts you.

What they said:

While we predict it won't matter that much in these hypothetical moral dilemmas, our data set lacked diversity and this may impact the results, especially when it comes to understanding of morality in the real-world.

Quote:

It is possible that the sample that we examined is so homogeneous that it has eliminated the possibility of meaningful variation in moral judgments... this fact may turn out to be significant in coming to a broader understanding of the role of demographic and cultural variables in structuring the cognitive strategies that are deployed in making moral judgments... [Although] we contend that for hypothetical and unfamiliar cases, demographic and cultural variation is likely to be largely irrelevant.

On the other hand, you are making very strong claims about real-world moral intuitions based on hypothetical, reason based judgements of the kind you acknowledge as not even being intuitive.


Yes, I do mean that paper which is entitled: Intuitive moral judgments are robust across variation in gender, education, politics, and religion: A large-scale web-based study

The paper refers to Harvard's 15 year study which expects to find a universal moral sense.

Unfortunately for you, I have read more than the title. I'm not sure if you have though as you seem unfamiliar with the content.

They argue for a kind of universal 'anchor point' that limits moral variation but does not remove it.

"our data suggest that researchers in moral psychology are unlikely to find unrestricted variation of the sort that seems to have often been supposed... it is becoming more and more clear that our intuitive moral psychology is bounded by an implicit set of computational rules that robustly govern intuitive judgments about unfamiliar cases...

there may be some cases in which the relevant cultural difference does evoke a modification to a central moral principle (as suggested by Abarbanell & Hauser’s (2010) research on a rural Mayan population); but even here, we predict that the relevant variation will leave the vast majority of moral judgments untouched. Specifically, in these cases, the variation should only have a significant impact on moral judgments that rely on the use of that particular principle"


Also note, you seem to be assuming that because there are only minor differences between demographic categories, there are only minor differences between individuals within those categories. Your claim that intuition is not subjective relies on there being minimal difference between individuals as well.
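
To make the group-versus-individual distinction concrete, here is a small numerical sketch with invented scores (not MST data; it only uses Python's statistics module): two groups can have identical average judgements while individuals within each group still span nearly the whole scale.

```python
# Invented scores (not MST data): moral judgements on a 1-7 scale for two groups.
# The group means are identical, yet individuals within each group span nearly
# the whole scale, so small between-group differences do not imply small
# between-individual differences.
import statistics

group_a = [2, 6, 4, 7, 1, 5, 3, 6, 4, 2]
group_b = [3, 5, 4, 6, 2, 7, 1, 6, 3, 3]

print("mean A:", statistics.mean(group_a), " mean B:", statistics.mean(group_b))
print("stdev A:", round(statistics.stdev(group_a), 2),
      " stdev B:", round(statistics.stdev(group_b), 2))
```

Whether real moral judgements behave like this is exactly what is at issue; the sketch only shows that group-level similarity and individual-level subjectivity are perfectly compatible.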
 
With the crude tools they have to examine the brain, it surprises me that they can find much of anything.

Evidently they are more intelligent than you then ;)

o_O That's a dumb claim on its face. Evidence of non-existence is NOT logically possible.

Of course it is. Evidence of non-existence is inferred from what we would expect to see if it did exist. It is used in the sciences all the time.

Lack of a dog shaped bulge is good evidence that there is no dog under my blanket.
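
Put in probabilistic terms (a toy Bayes calculation with made-up numbers, nothing more): if a dog under the blanket would very probably produce a bulge, then a flat blanket is itself strong evidence against the dog.

```python
# Toy Bayes update with made-up numbers: a dog would almost certainly make a
# visible bulge, so seeing a flat blanket is strong evidence there is no dog.
prior_dog = 0.5                    # before looking at the blanket
p_bulge_if_dog = 0.95              # a dog would very probably produce a bulge
p_bulge_if_no_dog = 0.05           # a bulge without a dog is unlikely

p_no_bulge = (1 - p_bulge_if_dog) * prior_dog + (1 - p_bulge_if_no_dog) * (1 - prior_dog)
posterior_dog = (1 - p_bulge_if_dog) * prior_dog / p_no_bulge

print(round(posterior_dog, 3))     # 0.05: the flat blanket makes "dog" very unlikely
```

The same logic applies here: if a dedicated moral module existed, we would expect the imaging and lesion studies above to have found some trace of it, so their failure to do so lowers the probability that it exists.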

Just read the article, a man of your intellect should be able to read and understand it really easily.

We trust our senses by verifying them with others. Did you see what I saw? Did you hear what I heard?

Doesn't work with optical illusions, does it?

Subjective experience is not a reliable guide to cognitive functions.

Also, judging them on their words and actions, I've certainly seen other people have 'moral intuitions' that I've not felt.

But the point you duck is that if we did not feel the judgments of conscience, our rational minds would know nothing of morality since "all knowledge begins with the senses." You can't explain how we write our criminal laws or construct ethical systems if conscience didn't exist.

Again, this does not require them to be specifically 'moral intuitions', but a subset of other cognitive functions and intuitions that deal with socialisation, perception, emotion, self-interest, etc.
 

joe1776

Well-Known Member
Why bother to claim it supports your personal opinion then? It uses the wrong kind of moral judgement, has no officially published results, and the only results that use the partial dataset do not support what you claim.
Readers of this thread should be aware that these questions have been asked and answered two, maybe three, times previously. My debate opponent is out of gas and relying now on repetition.

The purpose of Harvard's MST, which is now 15 years old, was aligned with my hypothesis that we humans have a universal conscience. The Harvard guys based their hypothesis on findings in several branches of science. Here's an excerpt from their justification. You can read the entire article linked below:

For hundreds of years, scholars have argued that our moral judgments arise from rational, conscious, voluntary, reflective deliberations about what ought to be. This perspective has generated the further belief that our moral psychology is a slowly developing capacity, founded entirely on experience and education, and subject to considerable variation across cultures. With the exception of a few trivial examples, one culture’s right is another’s wrong. We believe this hyper rational, culturally-specific view is no longer tenable.

The MST has been designed to show why and offer an alternative. Most of our moral intuitions are unconscious, involuntary, and universal, developing in each child despite formal education. When humans, from the hunter-gathers of the Rift Valley to the billionaire dot-com-ers of the Silicon Valley generate moral intuitions they are like reflexes, something that happens to us without our being aware of how or even why. We call this capacity our moral faculty.

Preliminary results were published a few years ago and despite using questions about moral dilemmas, a design flaw in my opinion, the results were positive and the test continues online.


THE MORAL SENSE TEST | Edge.org

And if you'd like to take the test, here's a link:

The Moral Sense Test
 

joe1776

Well-Known Member
Of course it is. Evidence of non-existence is inferred from what we would expect to see if it did exist. It is used in the sciences all the time. Lack of a dog shaped bulge is good evidence that there is no dog under my blanket.
That test works only because you know dogs exist. That isn't a test that is evidence that dogs don't exist. Try again.
 
That test works only because you know dogs exist. That isn't a test that is evidence that dogs don't exist. Try again.

We know that there are specific parts of the brain that relate to specific cognitive functions.

We know that other cognitive functions utilise multiple neural networks.

If there was a specific, uniquely "moral" part of the brain, we might expect to be able to see evidence that it exists. We don't.

What the experiments referenced above show is that we use multiple parts of the brain to make intuitive moral judgements. This is further demonstrated by a) split-brain patients whose moral judgements are impacted in exactly the way we would expect if intuitions involved multiple parts of the brain, and b) the fact that influencing non-moral cognitive functions can impact later moral judgements.

There is thus a strong balance of probability that no such uniquely moral part of the brain exists.

Readers of this thread should be aware that these questions have been asked and answered two, maybe three, times previously.

Don't be daft, no one else is reading by now :D

You certainly haven't addressed that you are misrepresenting their usage of the term 'universal conscience'.

They argue for a kind of universal 'anchor point' that limits moral variation between groups in limited, hypothetical situations, but does not completely remove differences between groups (and certainly not between individuals). I won't bother quoting it again, but it's just above if you want to directly explain why their words don't mean what they say.

You argue that intuition means that differences between individuals are unimportant, as groups of "unbiased" individuals make "objectively correct" decisions in the real world.

Even if we uncritically assume they are correct, ignore all of their caveats and assume their findings are perfectly replicable with realistic, emotive, real world scenarios (which is very unlikely indeed), we still need to accept that variations exist between both groups and individuals.

As such there is still a clear subjective component to morality.
 