Reformed Epistemology

MikeF

Well-Known Member
Premium Member
I know that to philosophy enthusiasts I am out of my depth, but I am going to keep trying to tread water with my water wings on. :)

To me, observation is a priori, not a posteriori, because it exists within the mind.

It would be my assumption, perhaps erroneously, that a priori knowledge is all present once the mind exists. From a biological perspective, we receive sensory inputs of all kinds, many of which are responded to automatically, without thought. If we are contemplating the information, evaluating it, it does not become knowledge until that process is finished, in my view, and once the thinking is done, it becomes a posteriori knowledge. The knowledge doesn't exist until the observation has been processed; it wasn't known all along.

I would only consider instinctual behaviors to be a priori knowledge. We do not have to be taught to suckle from our mother, for example; that would be pre-programmed knowledge.

It just seems that anything learned would be a priori knowledge by your definition because it is happening in the mind.

I think we can infer through reason that our observations reflect an external world, but I think empiricism is wrong to assume the existence of this external world axiomatically. For all I know, I could be in the Matrix right now, and everyone else could be a highly-advanced AI.

From my perspective, empiricism doesn't assume the external world exists axiomatically. It is my view that outside of our instinctual behaviors, we are born with a clean slate, a tabula rasa if you will. As infants and toddlers, we struggle to understand the rules of the game, the rules of reality. Think of the toddler playing peek-a-boo. It takes a certain level of experience to understand that just because something isn't seen, it does not mean that it has disappeared.

The existence of the world is learned, and that understanding grows with the consistency of our experiences. We form reasoned expectations based on the consistency of our experience, which, in turn, allow us to predict and anticipate outcomes.

We can imagine numerous ways in which the world isn't really real, but why? As I've said elsewhere, why build a Rube Goldberg explanation of reality when there is no experience, no information to inform such a choice?

If evolution is a real thing (and not a background story to the simulation), then we have only been in the game a short while. Our senses are the result of millions of years of evolution. If they did not provide accurate information about the external world, we would not survive long enough to reproduce. Our senses, statistically across the population as a whole, provide accurate information within the range limits of our biological senses. With instrumentation, we further enhance our senses and confirm our senses' level of accuracy. I feel we can be quite confident in our acceptance of the macroscopic world as we perceive it. We have millennia in which billions of people have provided intersubjective corroboration of what we experience.

Falsification is a rather new concept, dating from the 20th century. Before then, I would argue, there was not much to distinguish "scientism" and the "men of science," as they were once called, from the natural philosophers, except that they were closely tied to the secularism of Enlightenment philosophy.

As I touched on above, we are all born amateur empiricists. As amateurs we rely heavily on induction. In our macroscopic world and with direct observation, this has served us pretty well. But true knowledge is limited by our access to all the information. We have had to slog through in a hit-or-miss fashion, because that was all that was available to us. We are getting better at this as we go along, which means the remaining problems are just that much harder to solve. Popper found a way for us to set reliance on unreliable induction aside, devising an empirical system that helps keep the theoretical systems of science empirical, neither getting off track nor becoming trapped in conventionalism.

I would also say that falsification is still a logical concept, and ultimately that science itself is subordinate to logic.

My take is that empirical observation is paramount. We have to have real information about reality upon which to apply logic. The scientific theoretical systems must be synthetic and remain synthetic. Otherwise, the system becomes analytic and you are no longer talking about reality. Falsifiability and deductive logic may keep science on a straighter course, but you have to start with actual empirical information to get anywhere.

I think it might be better to view science as a form of applied Post-Enlightenment epistemology, rather than some specific invention or rigid formulation.

I think this might be similar to my view. To my mind, Science is a knowledge-acquisition discipline that strives to mitigate the fallibility of the investigator. That's it. Methods are going to be problem-specific; methods vary. It is making sure the methods used account for, and counter, human error that makes science so successful.
 

Ella S.

*temp banned*
It would be my assumption, perhaps erroneously, that a priori knowledge is all present once the mind exists. From a biological perspective, we receive sensory inputs of all kinds, many of which are responded to automatically, without thought. If we are contemplating the information, evaluating it, it does not become knowledge until that process is finished, in my view, and once the thinking is done, it becomes a posteriori knowledge. The knowledge doesn't exist until the observation has been processed; it wasn't known all along.

I would only consider instinctual behaviors to be a priori knowledge. We do not have to be taught to suckle from our mother, for example; that would be pre-programmed knowledge.

It just seems that anything learned would be a priori knowledge by your definition because it is happening in the mind.

Here's why: Last Thursdayism (or, if you prefer, confabulation and hallucination, but I'm running with Last Thursdayism as my example).

It is not necessarily true that there was ever a time in your existence that you did not have memories of last Wednesday, because, for instance, you could have been created last Thursday with all of those memories already implanted in your mind. The universe might not have even existed last Wednesday.

So for us to think that there was a past, we have to actively analyze the information our mind has in that moment, which is a priori. To come to the conclusion that our observations are a posteriori, we would need to analyze the information we already have available to us through our rational faculties, but as in this case, that might not be true.

I do think an external world is likely and I don't think I was created Last Thursday, but these are conclusions that I arrive at solely through my a priori knowledge that my mind already has in the present moment.



From my perspective, empiricism doesn't assume the external world exists axiomatically. It is my view that outside of our instinctual behaviors, we are born with a clean slate, a tabula rasa if you will. As infants and toddlers, we struggle to understand the rules of the game, the rules of reality. Think of the toddler playing peek-a-boo. It takes a certain level of experience to understand that just because something isn't seen, it does not mean that it has disappeared.

The existence of the world is learned, and that understanding grows with the consistency of our experiences. We form reasoned expectations based on the consistency of our experience, which, in turn, allow us to predict and anticipate outcomes.

We can imagine numerous ways in which the world isn't really real, but why? As I've said elsewhere, why build a Rube Goldberg explanation of reality when there is no experience, no information to inform such a choice?

If evolution is a real thing (and not a background story to the simulation), then we have only been in the game a short while. Our senses are the result of millions of years of evolution. If they did not provide accurate information about the external world, we would not survive long enough to reproduce. Our senses, statistically across the population as a whole, provide accurate information within the range limits of our biological senses. With instrumentation, we further enhance our senses and confirm our senses' level of accuracy. I feel we can be quite confident in our acceptance of the macroscopic world as we perceive it. We have millennia in which billions of people have provided intersubjective corroboration of what we experience.

As I touched on above, we are all born amateur empiricists. As amateurs we rely heavily on induction. In our macroscopic world and with direct observation, this has served us pretty well. But true knowledge is limited by our access to all the information. We have had to slog through in a hit-or-miss fashion, because that was all that was available to us. We are getting better at this as we go along, which means the remaining problems are just that much harder to solve. Popper found a way for us to set reliance on unreliable induction aside, devising an empirical system that helps keep the theoretical systems of science empirical, neither getting off track nor becoming trapped in conventionalism.

My take is that empirical observation is paramount. We have to have real information about reality upon which to apply logic. The scientific theoretical systems must be synthetic and remain synthetic. Otherwise, the system becomes analytic and you are no longer talking about reality. Falsifiability and deductive logic may keep science on a straighter course, but you have to start with actual empirical information to get anywhere.

I would say that I think our observations cannot be said to be true in and of themselves, but must be analyzed for truth content. This is especially the case due to a number of sensory illusions and hallucinations, which can give us false sensory data.

At least, in my opinion. Although I think we generally end up in the same place for most pragmatic concerns, since I do think evidence and experimentation is paramount, as did most of the continental rationalists.

I would say that Karl Popper was instrumental in forming what we now refer to as "critical rationalism," which is a sub-type of rationalism, not empiricism, and did the exact opposite of setting induction aside. Instead, he demonstrated how induction could be used in conjunction with deduction, and that we can prove that it is impossible for highly-formalized models to be true if they contradict the data that they assert exists, thanks to the logical Law of Noncontradiction.

This focus on impossibility continues to be of special importance to epistemic modal logic.
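Schematically, the deductive step behind this can be written as modus tollens (a standard formalization; the notation here is illustrative, not a quotation of Popper):

\[
(T \rightarrow O) \land \neg O \;\vdash\; \neg T
\]

If a theory $T$ entails an observable consequence $O$, and we actually find $\neg O$, then keeping $T$ would require affirming both $O$ and $\neg O$, which the Law of Noncontradiction forbids.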
 

MikeF

Well-Known Member
Premium Member
Here's why: Last Thursdayism (or, if you prefer, confabulation and hallucination, but I'm running with Last Thursdayism as my example).
It is not necessarily true that there was ever a time in your existence that you did not have memories of last Wednesday, because, for instance, you could have been created last Thursday with all of those memories already implanted in your mind. The universe might not have even existed last Wednesday.
So for us to think that there was a past, we have to actively analyze the information our mind has in that moment, which is a priori. To come to the conclusion that our observations are a posteriori, we would need to analyze the information we already have available to us through our rational faculties, but as in this case, that might not be true.
I do think an external world is likely and I don't think I was created Last Thursday, but these are conclusions that I arrive at solely through my a priori knowledge that my mind already has in the present moment.

This goes back to my question of why one would even entertain such an idea. What would inform one that it is in any way probable, or even possible? As I said before, we can imagine an innumerable number of scenarios, an innumerable number of possible worlds, but we want to identify and describe the one world that represents our world of experience.

This is why demarcation between hypotheses that are synthetic with our world of experience and those that are purely analytic is so important. We think in abstractions. The realm of abstractions is infinite, unbounded, without rules. Within the realm of abstraction we can create abstract systems that are bounded and have rules. Language, Logic, and Mathematics would be examples of such abstract systems. But these abstract systems are not bound by the rules and properties of physical things, of the natural world. These abstract systems, with their abstract constructs, are not physical; they are simply mental inventions. As such, all boundaries, rules, and properties are malleable; they can be invented, adhered to, ignored, or changed at a whim.

Popper has created a system of rules that allows us to create an abstract system that is synthetic to our world of experience and that prevents us from wandering outside of it, since wandering outside of it would leave any hypotheses or theories derived from this abstract system no longer synthetic with the real world, our world of experience.

This ties in with my definition of Science as a knowledge-acquisition discipline that recognizes the flaws and fallibility of the human investigator, and which takes active steps to mitigate those flaws and fallibilities. There are so many factors that can impinge on our objectivity, starting with the physical hardware of our central nervous system: no two human beings are identical in the unique neural patterns of their central nervous systems. Added to our basic wiring, we are affected by physiological factors, injury and illness, and chemical insults. We are influenced by socialization and indoctrination, as well as other environmental factors and life experiences. All this can influence and derail our individual objectivity.

How do we get around this problem? We cannot trust ourselves, our own intuition, and I would say Popper agrees:

"We must distinguish between, on the one hand, our subjective experiences or our feelings of conviction, which can never justify any statement (though they can be made the subject of psychological investigation) and, on the other hand, the objective logical relations subsisting among the various systems of scientific statements, and within each of them."
Popper, Karl. The Logic of Scientific Discovery (Routledge Classics) (p. 44). Taylor and Francis. Kindle Edition.

The answer is through intersubjective corroboration. In this way, we can get out of our own heads and distance ourselves from a purely subjective perspective. By comparing the flawed and fallible observations of many observers (you and I among them :) ), we are able to piece together the consistencies, those aspects of our experience that are truly external to ourselves and objectively true.

Would you consider the hypothesis "Thursdayism" to be synthetic with our world of experience and therefore your definition of "a priori" synthetic as well? If not, my preference would be to hold a definition of "a priori" that is synthetic. :)

I would say that I think our observations cannot be said to be true in and of themselves, but must be analyzed for truth content. This is especially the case due to a number of sensory illusions and hallucinations, which can give us false sensory data.

Here I would make a clear distinction between illusion and hallucination. To my mind, an illusion is simply the result of insufficient information upon which to make a definitive determination. The sensory data is accurate, say, when viewing a distant object. The light hitting our eyes is real light representing a real phenomenon. The ill-defined distant image, however, may fit the profile of a number of objects. The illusion occurs when we assign the wrong object to the ill-defined, data-deficient picture. Any error is resolved by obtaining a more complete data set, for example, by changing perspective and moving closer.

The hallucination is also not false sensory data. The data can be fine, extremely accurate. The sense organs can function properly and within limits. What is occurring is an error of one of two kinds: either accurate sense data is assigned to the wrong synthetic construct (for example, sounds being assigned to abstractions of color), or, within the abstraction of thought, abstract constructs representing sense data are activated and incorporated into thought independent of any corresponding external sense data.

Illusion is resolved with more information. Hallucination is identified and mitigated by intersubjective corroboration.

I would say that Karl Popper was instrumental in forming what we now refer to as "critical rationalism," which is a sub-type of rationalism, not empiricism, and did the exact opposite of setting induction aside. Instead, he demonstrated how induction could be used in conjunction with deduction, and that we can prove that it is impossible for highly-formalized models to be true if they contradict the data that they assert exists, thanks to the logical Law of Noncontradiction.
This focus on impossibility continues to be of special importance to epistemic modal logic.

Thank you for the clarification on induction. I should have referenced setting aside sole reliance on the principle of induction, inductive logic, inductive methods, or similar.

I cannot speak to Critical Rationalism as I am wholly unfamiliar with the term. My take on your comments above would be that I see Popper's work in "The Logic of Scientific Discovery" specifically, as simply creating a clear demarcation between what theoretical systems are to be considered "Empirical" and synthetic to the world of experience, and "Not-Empirical" which would constitute everything else. "Not-Empirical" would represent those things we think about that cannot be said to be real and physically existent phenomena.
 

Ella S.

*temp banned*
This goes back to my question of why one would even entertain such an idea. What would inform one that it is in any way probable, or even possible? As I said before, we can imagine an innumerable number of scenarios, an innumerable number of possible worlds, but we want to identify and describe the one world that represents our world of experience.

This is why demarcation between hypotheses that are synthetic with our world of experience and those that are purely analytic is so important. We think in abstractions. The realm of abstractions is infinite, unbounded, without rules. Within the realm of abstraction we can create abstract systems that are bounded and have rules. Language, Logic, and Mathematics would be examples of such abstract systems. But these abstract systems are not bound by the rules and properties of physical things, of the natural world. These abstract systems, with their abstract constructs, are not physical; they are simply mental inventions. As such, all boundaries, rules, and properties are malleable; they can be invented, adhered to, ignored, or changed at a whim.

Popper has created a system of rules that allows us to create an abstract system that is synthetic to our world of experience and that prevents us from wandering outside of it, since wandering outside of it would leave any hypotheses or theories derived from this abstract system no longer synthetic with the real world, our world of experience.

This ties in with my definition of Science as a knowledge-acquisition discipline that recognizes the flaws and fallibility of the human investigator, and which takes active steps to mitigate those flaws and fallibilities. There are so many factors that can impinge on our objectivity, starting with the physical hardware of our central nervous system: no two human beings are identical in the unique neural patterns of their central nervous systems. Added to our basic wiring, we are affected by physiological factors, injury and illness, and chemical insults. We are influenced by socialization and indoctrination, as well as other environmental factors and life experiences. All this can influence and derail our individual objectivity.

How do we get around this problem? We cannot trust ourselves, our own intuition, and I would say Popper agrees:

"We must distinguish between, on the one hand, our subjective experiences or our feelings of conviction, which can never justify any statement (though they can be made the subject of psychological investigation) and, on the other hand, the objective logical relations subsisting among the various systems of scientific statements, and within each of them."
Popper, Karl. The Logic of Scientific Discovery (Routledge Classics) (p. 44). Taylor and Francis. Kindle Edition.

The answer is through intersubjective corroboration. In this way, we can get out of our own heads and distance ourselves from a purely subjective perspective. By comparing the flawed and fallible observations of many observers (you and I among them :) ), we are able to piece together the consistencies, those aspects of our experience that are truly external to ourselves and objectively true.

Would you consider the hypothesis "Thursdayism" to be synthetic with our world of experience and therefore your definition of "a priori" synthetic as well? If not, my preference would be to hold a definition of "a priori" that is synthetic. :)

Thursdayism is not meant to be a serious theory. It is a thought experiment to demonstrate that our belief in the past does not necessarily mean that there is a past.

You continue to assume that there is an external, real world (eta: and other observers outside of yourself). This is not an axiomatic assumption that I make, but something that I derive through reason. You might ask why we should entertain the idea of Last Thursdayism, but the real question is why we should believe an external reality exists.

Here I would make a clear distinction between illusion and hallucination. To my mind, an illusion is simply the result of insufficient information upon which to make a definitive determination. The sensory data is accurate, say, when viewing a distant object. The light hitting our eyes is real light representing a real phenomenon. The ill-defined distant image, however, may fit the profile of a number of objects. The illusion occurs when we assign the wrong object to the ill-defined, data-deficient picture. Any error is resolved by obtaining a more complete data set, for example, by changing perspective and moving closer.

There are illusions where we perceive data that is not there. For instance, the scintillating grid illusion.

The hallucination is also not false sensory data. The data can be fine, extremely accurate. The sense organs can function properly and within limits. What is occurring is an error of one of two kinds: either accurate sense data is assigned to the wrong synthetic construct (for example, sounds being assigned to abstractions of color), or, within the abstraction of thought, abstract constructs representing sense data are activated and incorporated into thought independent of any corresponding external sense data.

That is synesthesia, which can be a form of hallucination, but it is not the only type. There are a variety of different kinds of hallucinations, depending on the cause.

Psychotic hallucinations are caused by mistaking the imagined for the real, which is false sensory data.

Some hallucinogens work by confusing the way the brain interprets sensory input, essentially creating distortions that become re-interpreted as new patterns by the brain. Thus, the actual observations, such as dancing mushrooms forming a conga line in one's room, are not accurate.

Illusion is resolved with more information. Hallucination is identified and mitigated by intersubjective corroboration.

Precisely my point. The observation must be put in a broader, reasonable context to be made sense of.

Thank you for the clarification on induction. I should have referenced setting aside sole reliance on the principle of induction, inductive logic, inductive methods, or similar.

I cannot speak to Critical Rationalism as I am wholly unfamiliar with the term. My take on your comments above would be that I see Popper's work in "The Logic of Scientific Discovery" specifically, as simply creating a clear demarcation between what theoretical systems are to be considered "Empirical" and synthetic to the world of experience, and "Not-Empirical" which would constitute everything else. "Not-Empirical" would represent those things we think about that cannot be said to be real and physically existent phenomena.

I will say that you are sort of correct in that Popper thoroughly rejects ampliative induction, although I do not agree with him on that point. The form of induction he uses is called "eliminative induction," which is a little bit different from what most people think of when they think about induction.
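As a rough picture of the difference (a toy sketch of my own, with made-up hypotheses and observations; an illustration, not Popper's formalism), eliminative induction keeps only the candidate explanations that the data fail to refute:

```python
# Toy sketch of eliminative induction: retain only the hypotheses that no
# observation refutes. Hypotheses and sightings are made-up illustrations.

hypotheses = {
    "all swans are white": lambda color: color == "white",
    "all swans are white or black": lambda color: color in ("white", "black"),
}

observations = ["white", "white", "black"]  # hypothetical swan sightings

surviving = [
    name
    for name, is_consistent in hypotheses.items()
    if all(is_consistent(color) for color in observations)
]

print(surviving)  # -> ['all swans are white or black']
```

The surviving hypothesis is not thereby proven, only not yet eliminated.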

It is related to empirical evidence, despite being a form of logic rather than empiricism, and I can honestly understand how paradoxical that sounds.
 

MikeF

Well-Known Member
Premium Member
Thursdayism is not meant to be a serious theory. It is a thought experiment to demonstrate that our belief in the past does not necessarily mean that there is a past.
You continue to assume that there is an external, real world (eta: and other observers outside of yourself). This is not an axiomatic assumption that I make, but something that I derive through reason. You might ask why we should entertain the idea of Last Thursdayism, but the real question is why we should believe an external reality exists.

It is interesting that you characterize my conclusion that there is an external reality as not being a reasoned conclusion, but an axiomatic assumption instead. I guess I am not effective at conveying my thoughts.

From my perspective, there are two foundational facts, which you may characterize as axiomatic assumptions. The first is that I exist, and the second is that I experience. All other knowledge derives from those two facts. Knowledge, to me, is reasoned expectation based on experience. Since we can't instantly be everywhere and know everything about the cosmos, our ability to acquire knowledge is limited by restrictions on our perspective, our ability to observe and experience. We are by default empirical creatures. From the moment we are born we are exploring and testing the world around us, learning, gaining knowledge and confidence. Science is simply the professionalization of this process. I see as axioms for scientific inquiry the following: (1) We (humanity) do not know everything. (2) What we think we know may be incomplete or incorrect. Given this, any knowledge that we hold is held with degrees of confidence, in my view. The greater the experience of a phenomenon, and the greater that phenomenon's intersubjective corroboration from other observers, the greater our confidence that the phenomenon is objectively real.

This is the basis for my confidence in an existent external reality. It is a reasoned conclusion based on my experience and the experience of billions of others, built only from the assumption that I exist and I experience. It is held with confidence and not as axiomatic fact.
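As a toy picture of what I mean by degrees of confidence (made-up numbers and a deliberately simple Bayesian update; one possible formalization, not a claim about how cognition actually works):

```python
# Toy model: confidence that a phenomenon is objectively real, updated as
# independent observers corroborate it. All numbers are made up.

prior = 0.5             # starting confidence that the phenomenon is real
p_report_if_real = 0.9  # chance an observer corroborates, if it is real
p_report_if_not = 0.3   # chance of a spurious corroboration (error, bias)

confidence = prior
for n in range(1, 6):   # five independent corroborating reports
    numerator = p_report_if_real * confidence
    confidence = numerator / (numerator + p_report_if_not * (1 - confidence))
    print(f"after report {n}: confidence = {confidence:.3f}")

# Confidence climbs toward 1.0 but never reaches it: held with confidence,
# never as axiomatic fact.
```

The exact numbers don't matter; the point is that confidence grows with corroboration yet never becomes certainty.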

In regard to the thought experiment 'Thursdayism', as well as similar thought experiments, I still have concerns about its usefulness. Despite my inability to express myself well, I will endeavor to try nonetheless. :)

The thought experiment sets up a scenario in which we continue to perceive the world exactly as we do, yet our assumptions as to why and what is occurring are false. But to create this scenario, any technical obstacles to the scenario's actual implementation are waved aside. How do we know, for example, if there are overarching rules that govern the interaction of phenomena, and the simulation would itself be a set of phenomena of some type, that such a simulation is even possible? If it is literally impossible under any conditions to pull off such a simulation, then the thought experiment tells us nothing.

The problem we face is that the greater our ignorance, the greater the possibilities we can imagine. Once we begin to nail down some facts, we presumably begin to narrow the imaginable possibilities. Unfortunately, the unknown is effectively infinite to the imagination. We can wave away the technical requirements of any scenario we can imagine. Thought experiments like "Thursdayism" are no different from literary fiction. "Thursdayism" is one of an infinite number of fictional scenarios that can explain current reality in a manner different from the one we conclude through experience. Why give any more weight to Thursdayism than to any other fiction?

We are confident that we have a past because our experience leads us to that conclusion. Unless and until some new information contradicts that conclusion, we should have no qualms about holding it confidently. :)

The way I see it, the only tool we have to start with is experience. Through experience, knowledge is built incrementally; each problem solved leads to the opportunity to solve other problems. But all that is unknown shall forever remain unknown until we can gain an experiential foothold on it. Useful speculations, hypotheses, must be rooted in our growing foundation of acquired knowledge, or we become lost in the infinite possibilities of imagination.

I see thought experiments such as "Thursdayism" speaking more to the psyche of the philosopher than shedding any actual light on reality. Contemplating the currently unknowable acts as a kind of Rorschach test, an indeterminate canvas of the imagination. I find it interesting that in these types of thought experiments, the subject is always a philosopher in full command of his faculties, as opposed to an uneducated person, a child, an infant, or someone with a mental impairment.

We all must come to terms that there is so much about reality that we are never going to know in our lifetime. It is out of our grasp. The best we can do is muddle along with what we do know and hopefully add to humanity's base of knowledge to further the progress of future generations.

There are illusions where we perceive data that is not there. For instance, the scintillating grid illusion.
That is synesthesia, which can be a form of hallucination, but it is not the only type. There are a variety of different kinds of hallucinations, depending on the cause.
Psychotic hallucinations are caused by mistaking the imagined for the real, which is false sensory data.
Some hallucinogens work by confusing the way the brain interprets sensory input, essentially creating distortions that become re-interpreted as new patterns by the brain. Thus, the actual observations, such as dancing mushrooms forming a conga line in one's room, are not accurate.

Do you see illusion and hallucination as exceptions to the rule, so to speak? Do you consider their instances to be rare, so that they speak to the reliability of our senses overall, or do you see them as more ubiquitous, casting strong doubt on the reliability of our senses?
 

Ella S.

*temp banned*
It is interesting that you characterize my conclusion that there is an external reality as not being a reasoned conclusion, but an axiomatic assumption instead. I guess I am not effective at conveying my thoughts.

From my perspective, there are two foundational facts, which you may characterize as axiomatic assumptions. The first is that I exist, and the second is that I experience. All other knowledge derives from those two facts. Knowledge, to me, is reasoned expectation based on experience. Since we can't instantly be everywhere and know everything about the cosmos, our ability to acquire knowledge is limited by restrictions on our perspective, our ability to observe and experience. We are by default empirical creatures. From the moment we are born we are exploring and testing the world around us, learning, gaining knowledge and confidence. Science is simply the professionalization of this process. I see as axioms for scientific inquiry the following: (1) We (humanity) do not know everything. (2) What we think we know may be incomplete or incorrect. Given this, any knowledge that we hold is held with degrees of confidence, in my view. The greater the experience of a phenomenon, and the greater that phenomenon's intersubjective corroboration from other observers, the greater our confidence that the phenomenon is objectively real.

This is the basis for my confidence in an existent external reality. It is a reasoned conclusion based on my experience and the experience of billions of others, built only from the assumption that I exist and I experience. It is held with confidence and not as axiomatic fact.

In contrast, I make no axiomatic assumptions, but I follow the self-correcting methodology of logic.

In regard to the thought experiment 'Thursdayism', as well as similar thought experiments, I still have concerns about its usefulness. Despite my inability to express myself well, I will endeavor to try nonetheless. :)

The thought experiment sets up a scenario in which we continue to perceive the world exactly as we do, yet our assumptions as to why and what is occurring are false. But to create this scenario, any technical obstacles to the scenario's actual implementation are waved aside. How do we know, for example, if there are overarching rules that govern the interaction of phenomena, and the simulation would itself be a set of phenomena of some type, that such a simulation is even possible? If it is literally impossible under any conditions to pull off such a simulation, then the thought experiment tells us nothing.

It is not logically impossible, nor can it be proven to be, but it is unfalsifiable.

The problem we face is that the greater our ignorance, the greater the possibilities we can imagine. Once we begin to nail down some facts, we presumably begin to narrow the imaginable possibilities. Unfortunately, the unknown is effectively infinite to the imagination. We can wave away the technical requirements of any scenario we can imagine. Thought experiments like "Thursdayism" are no different from literary fiction. "Thursdayism" is one of an infinite number of fictional scenarios that can explain current reality in a manner different from the one we conclude through experience. Why give any more weight to Thursdayism than to any other fiction?

Again, the point really isn't that we should seriously consider that the entire universe was made last Thursday, but to get us to think about our reasons for believing that we can trust that our memories record events that really took place in the past. It is not necessarily true that our memories represent a real past; therefore, it is not impossible that they don't.

We are confident that we have a past because our experience leads us to that conclusion. Unless and until some new information contradicts that conclusion, we should have no qualms about holding it confidently. :)

The way I see it, the only tool we have to start with is experience. Through experience, knowledge is built incrementally; each problem solved leads to the opportunity to solve other problems. But all that is unknown shall forever remain unknown until we can gain an experiential foothold on it. Useful speculations, hypotheses, must be rooted in our growing foundation of acquired knowledge, or we become lost in the infinite possibilities of imagination.

I am aware. I disagree. I think the value of experience can be demonstrated through reason alone, and that experience relies on reason to be made sense of. At this point, I think we're going in circles.

I see thought experiments such as "Thursdayism" speaking more to the psyche of the philosopher than shedding any actual light on reality. Contemplating the currently unknowable acts as a kind of Rorschach test, an indeterminate canvas of the imagination. I find it interesting that in these types of thought experiments, the subject is always a philosopher in full command of his faculties, as opposed to an uneducated person, a child, an infant, or someone with a mental impairment.

Not always, it depends upon the thought experiment. Thought experiments are also how the sciences form hypotheses and theoretical models, so I would strongly caution against dismissing them entirely.

Do you see illusion and hallucination as exceptions to the rule, so to speak? Do you consider their instances to be rare, so that they speak to the reliability of our senses overall, or do you see them as more ubiquitous, casting strong doubt on the reliability of our senses?

I don't see the relevance of this line of questioning. I do not see illusions and hallucinations as common, but this doesn't matter; they prove the broad generalization that our senses are trustworthy to be false. I only need a single case where our senses are not trustworthy to prove that, but hallucinations and illusions give us many more than just one case where our senses give us false information.
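In quantifier form (my own formal gloss on the point): if $T(x)$ reads "sense report $x$ is trustworthy," then a single refuted report $a$ is enough to overturn the universal claim:

\[
\neg T(a) \;\vdash\; \neg \forall x\, T(x)
\]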

These special cases were chosen specifically because they best illustrate that our observations only lead to truth when we analyze them according to reason, but I would argue that this is true of all observations. It's just the most obvious in these cases, because we reason that they're false rather than true like we do with most other observations.

You can make illusions and hallucinations a special case in your worldview if you would like, but I think that would be unjustified.
 

MikeF

Well-Known Member
Premium Member
My goal in these discussions is to test my reasoning about the world, about reality. If there are points upon which there is no intersubjective agreement, then the hope is to analyze and reconcile those differences. I do not want to preserve my own personal 'worldview'; rather, I want to understand, to the best of my ability, our shared external reality. I certainly do not wish to talk past each other or talk in circles. Sometimes changing tack can be effective in continuing to explore differences, and other times it is simply beating a dead horse. I'm confident you will disengage before your patience is too terribly taxed.

Not always, it depends upon the thought experiment. Thought experiments are also how the sciences form hypotheses and theoretical models, so I would strongly caution against dismissing them entirely.

I agree completely. Thought Experiments can be excellent tools, or poor tools. (eta: In addition, a good speculative thought experiment can be rendered moot, or resolved, by new evidence) I still have concerns with Thursdayism, Simulated World, or Brain-In-A-Vat thought experiments for the reasons previously described.

Again, the point really isn't that we should seriously consider that the entire universe was made last Thursday, but to get us to think about our reasons for believing that we can trust that our memories record events that really took place in the past. It is not necessarily true that our memories represent a real past; therefore, it is not impossible that they don't.

If Thursdayism is, as you say, not to be taken seriously, but simply as an apologue whose point is that we should never make assumptions but should investigate and test everything, then I have absolutely no problem with it. If, on the other hand, we are to give it such weight that it can be used to support skepticism about our memories actually representing the past, then my criticism stands. As a fictional story, there is no reason to give it any more or less weight or value than any other fictional story that conflicts with it.

It [Thursdayism] is not logically impossible, nor can it be proven to be, but it is unfalsifiable.

Interesting phraseology. In what way is it logically possible?

In contrast, I make no axiomatic assumptions, but I follow the self-correcting methodology of logic.

In my understanding, Logic is an abstract system with rules and operators used to make, for example, deductively valid inferences from premises. As a result, Logic isn't sufficient in and of itself; rather, it needs to be applied to something: it requires input. What is your starting point, and how is it justified? Does your self-correcting methodology of logic solve the Münchhausen trilemma that many (?) consider insurmountable? If experience is not to be taken at face value but considered equally likely to be just illusion, where do you begin?

I am aware. I disagree. I think the value of experience can be demonstrated through reason alone, and that experience relies on reason to be made sense of. At this point, I think we're going in circles.
...
I don't see the relevance of this line of questioning. I do not see illusions and hallucinations as common, but this doesn't matter; they prove the broad generalization that our senses are trustworthy to be false. I only need a single case where our senses are not trustworthy to prove that, but hallucinations and illusions give us many more than just one case where our senses give us false information.
These special cases were chosen specifically because they best illustrate that our observations only lead to truth when we analyze them according to reason, but I would argue that this is true of all observations. It's just the most obvious in these cases, because we reason that they're false rather than true like we do with most other observations.
You can make illusions and hallucinations a special case in your worldview if you would like, but I think that would be unjustified.

Does current technology verify the accuracy, statistically across the population as a whole, of our biological sense perceptions within their biological limits?

I think we are saying the same thing here, especially the notion that reason is required to turn sense data into knowledge. To be clear, I am not advocating that we ever rely on the subjective experience of any one individual, sensory or otherwise. We must always remain skeptical of any individual's subjective report. It is through testing and intersubjective corroboration that we gain our confidence in the overall reliability of our senses, or of experience in general. At the very least, we should not lack all confidence in sense experience. We learn the level of reliability of our individual senses through their consistency and through comparison of experience.
 

Ella S.

*temp banned*
Interesting phraseology. In what way is it logically possible?

It does not require the violation of any of the laws of logic to be true, such as the Law of Identity, the Law of Excluded Middle, or the Law of Noncontradiction.
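For reference, in one common propositional formalization (the notation is illustrative):

\[
\text{Identity: } p \rightarrow p, \qquad \text{Excluded Middle: } p \lor \neg p, \qquad \text{Noncontradiction: } \neg(p \land \neg p)
\]

Last Thursdayism contradicts none of these, which is all that "logically possible" means here.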

In my understanding, Logic is an abstract system with rules and operators used to make, for example, deductively valid inferences from premises. As a result, Logic isn't sufficient in and of itself; rather, it needs to be applied to something: it requires input. What is your starting point, and how is it justified?

A priori data. That is, the data we have in the mind at any given moment. This includes more than just sensory perceptions.

Does your self-correcting methodology of logic solve the Münchhausen trilemma that many (?) consider insurmountable? If experience is not to be taken at face value but considered equally likely to be just illusion, where do you begin?

I have not overcome the Münchhausen trilemma. Rather, I openly admit that induction cannot lead to absolute, necessary truths. It's merely a process we apply to the information we have available to us to better approximate truth, but it will never be a "verification" in the pseudo-rational sense used by the Logical Positivists. This is the "solution" often referred to as "fallibilism."

Does current technology verify the accuracy, statistically across the population as a whole, of our biological sense perceptions within their biological limits?

To my knowledge, I think so, at least for everyday purposes.

I think we are saying the same thing here, especially the notion that reason is required to turn sense data into knowledge. To be clear, I am not advocating that we ever rely on the subjective experience of any one individual, sensory or otherwise. We must always remain skeptical of any individual's subjective report. It is through testing and intersubjective corroboration that we gain our confidence in the overall reliability of our senses, or of experience in general. At the very least, we should not lack all confidence in sense experience. We learn the level of reliability of our individual senses through their consistency and through comparison of experience.

I agree, I think we do end up in a fairly similar place.
 

vulcanlogician

Well-Known Member
In that sense, while I recognize that we are hard-wired to pursue pleasure and avoid pain, I think this is surface-level. Under that surface, the reason we are hard-wired to pursue pleasure and avoid pain is because it helps us survive and reproduce. In an era where we can experience a variety of pleasures that might even be harmful to us, I don't think it's adaptive to merely pursue pleasure anymore.

I think you may be selling hedonism short. Standing back and seeing that we are hard-wired toward pleasure has no impact on whether or not it is wise for us to adopt happiness as an essential good. After all, Epicurus realized that our hard-wiring toward pleasure is one of our greatest undoings in the very pursuit of pleasure/happiness in our lives. Sustained pleasure... frugal pleasure... that is what the wise person hopes to attain, according to Epicurus. To live a happy life, free of misery, where we always have enough happiness/pleasure to keep us satisfied. No more, no less -- that is what Epicurus thought was good. And to achieve this good he advised his readers to stay away from intense pleasures... because intense pleasures tend to be an overall deficit to a happy life.

As one of my favorite poets puts it: "Passion is a flame that burns to its own destruction." But these kinds of passions are what we're hard-wired toward, no? My thesis is this: hedonism is not a product of our prejudices toward pleasure (which come as a result of our hard-wiring). It is the result of an axiomatic foundation that is quite sturdy. No hedonist denies that pain can be useful and (ultimately) satisfying... especially when viewed in retrospect. But the hedonist will point out that the only thing that made the pain and suffering valuable, at the end of the day, is that it was a source of greater happiness.

As you probably know, hedonists chop up "useful things" (aka "goods") into two categories: instrumental and intrinsic goods. The thinking goes that things like cars and money are instrumental goods. We want a car because it takes us places. We want money because it allows us to buy things. On a desert island, neither money nor a car would be useful to us. (A car, maybe, and maybe even money to help start fires... but you get my drift. Money isn't valuable in itself. It is valuable because of what you can do with it.) Nonetheless, people still desire money and cars. Why? Well, because you can keep asking, "three-year-old style":

1) Why do you want the money? So I can buy a car.

2) Why do you want to buy a car? So I can get a job.

3) Why do you want to get a job? To make more money.

4) Why do you want to make more money? So I can pay the bills.

5) Why do you want to pay the bills? So I can enjoy my life and have things that make my life more pleasurable.

6) Why do you want to enjoy your life and have things that make your life more pleasurable?

***

We can pretty much stop at item 6. We do not want pleasure and happiness because they can help us achieve even greater things (although, in many cases, happiness helps us perform better at work and things like that). We want happiness because it is intrinsically good. Imagine that you lived your life, exactly as you have lived it thus far. Except for one thing. On your twentieth birthday you go into a room and one of two things happen.

1) You consume a sweet strawberry that is extremely delicious. -OR-

2) Somebody pricks the tip of your index finger with a needle.

All things being equal, and assuming that whichever one happened in the room had no real impact on the rest of your life, which would you choose to experience? I think, all things being equal, if we were just inserting an inconsequential experience into our lives, we would choose the strawberry over the pin prick. Why? Because the strawberry is a pleasant experience... another intrinsic good... a thing that is good JUST because it is good to experience. Unless the pin prick is delivering vital medicines or something, it needs further justification to be chosen, which the strawberry does not.

But this is ethics. And we're not only concerned with what you or I want. Let's further elaborate on the thought experiment and say that everyone in the world must go into the room and experience either the pin prick or the strawberry. But they don't get to decide for themselves which one. YOU are the appointed arbiter who must decide whether everyone experiences the strawberry or the pin prick. You aren't choosing individually for each person. You are making one blanket choice for everyone: the strawberry for everyone, or the pin prick for everyone. (And by the way, this is a kind of hypoallergenic strawberry that even people allergic to strawberries can eat and enjoy. So no worming your way out. Studies show that even people who dislike strawberries tend to like this particular one.) Which "room experience" is most ethical for you to decree for everyone?

Hedonism does not deny that painful phenomena can be useful for us. But we need to ask ourselves: "useful how?" If it can't be demonstrated that pain is instrumental in achieving some other good, then it is merely bad and must be dispensed with. It is like the pin prick. The plain experience of suffering is not good. It needs to create good to be good. It is not good in itself... or "intrinsically" good. It's intrinsically bad. And anyone who has experienced both pleasure and pain will attest to this fact. Pleasure, on the other hand, needs no further justification.

I ultimately have issues with pleasure being the sole intrinsic good (as hedonists like to claim). I am not a hedonist. But any other monistic good pales in comparison to hedonism's axiomatic validity. Ultimately, I think that pleasure cannot be the only good. But I'm fairly convinced that it is either "one of the goods" or involved heavily in how "the single good," whatever that may be, is constituted. What I like about hedonism is that it is completely solid in its trip from axiom to realized truth. "Wellbeing" is vague. Desire satisfaction is fraught with contradiction. But hedonism makes sense.

I wanted to reply to a lot of the other things you said. I like rationalism a great deal too, for instance. But I got to talking about hedonism and I've probably gone on long enough.
 

Ella S.

*temp banned*
I think you may be selling hedonism short. Standing back and seeing that we are hard-wired toward pleasure has no impact on whether or not it is wise for us to adopt happiness as an essential good. After all, Epicurus realized that our hard-wiring toward pleasure is one of our greatest undoings in the very pursuit of pleasure/happiness in our lives. Sustained pleasure... frugal pleasure... that is what the wise person hopes to attain, according to Epicurus. To live a happy life, free of misery, where we always have enough happiness/pleasure to keep us satisfied. No more, no less -- that is what Epicurus thought was good. And to achieve this good he advised his readers to stay away from intense pleasures... because intense pleasures tend to be an overall deficit to a happy life.

As one of my favorite poets puts it: "Passion is a flame that burns to its own destruction." But these kinds of passions are what we're hard-wired toward, no? My thesis is this: hedonism is not a product of our prejudices toward pleasure (which come as a result of our hard-wiring). It is the result of an axiomatic foundation that is quite sturdy. No hedonist denies that pain can be useful and (ultimately) satisfying... especially when viewed in retrospect. But the hedonist will point out that the only thing that made the pain and suffering valuable, at the end of the day, is that it was a source of greater happiness.

As you probably know, hedonists chop up "useful things" (aka "goods") into two categories: instrumental and intrinsic goods. The thinking goes that things like cars and money are instrumental goods. We want a car because it takes us places. We want money because it allows us to buy things. On a desert island, neither money nor a car would be useful to us. (A car, maybe, and maybe even money to help start fires... but you get my drift. Money isn't valuable in itself. It is valuable because of what you can do with it.) Nonetheless, people still desire money and cars. Why? Well, because you can keep asking, "three-year-old style":

1) Why do you want the money? So I can buy a car.

2) Why do you want to buy a car? So I can get a job.

3) Why do you want to get a job? To make more money.

4) Why do you want to make more money? So I can pay the bills.

5) Why do you want to pay the bills? So I can enjoy my life and have things that make my life more pleasurable.

6) Why do you want to enjoy your life and have things that make your life more pleasurable?

***

We can pretty much stop at item 6. We do not want pleasure and happiness because they can help us achieve even greater things (although, in many cases, happiness helps us perform better at work and things like that). We want happiness because it is intrinsically good. Imagine that you lived your life, exactly as you have lived it thus far. Except for one thing. On your twentieth birthday you go into a room and one of two things happen.

1) You consume a sweet strawberry that is extremely delicious. -OR-

2) Somebody pricks the tip of your index finger with a needle.

All things being equal, and assuming that whichever one happened in the room had no real impact on the rest of your life, which would you choose to experience? I think, all things being equal, if we were just inserting an inconsequential experience into our lives, we would choose the strawberry over the pin prick. Why? Because the strawberry is a pleasant experience... another intrinsic good... a thing that is good JUST because it is good to experience. Unless the pin prick is delivering vital medicines or something, it needs further justification to be chosen, which the strawberry does not.

I am familiar with hedonism, but none of this really negates my point that the reason we are hard-wired to pursue pleasure is that we evolved to pursue it in order to help us survive. This is a direct negation of the idea that pleasure itself is the intrinsic good, because the reason we value pleasure is that it increases our chances of survival and reproduction.

The supposedly "intrinsic good" there is caused by natural selection; therefore, it's not really the intrinsic good.

But this is ethics. And we're not only concerned with what you or I want. Let's further elaborate on the thought experiment and say that everyone in the world must go into the room and experience either the pin prick or the strawberry. But they don't get to decide for themselves which one. YOU are the appointed arbiter who must decide whether everyone experiences the strawberry or the pin prick. You aren't choosing individually for each person. You are making one blanket choice for everyone: the strawberry for everyone, or the pin prick for everyone. (And by the way, this is a kind of hypoallergenic strawberry that even people allergic to strawberries can eat and enjoy. So no worming your way out. Studies show that even people who dislike strawberries tend to like this particular one.) Which "room experience" is most ethical for you to decree for everyone?

I think that question has no valid answer, because whether something is "ethical" or not is not truth-apt. It's an evaluative claim. If I was a sadist, I would probably want everyone to get the pinprick.

So really, the question is, which room would I approve of more as a choice? It's a question that I don't really care about discussing.

Hedonism does not deny that painful phenomena can be useful to us. But we need to ask ourselves: "useful how?" If it can't be demonstrated that pain is instrumental in achieving some other good, then it is merely bad and should be dispensed with. It is like the pin prick. The plain experience of suffering is not good. It needs to create good to be good. It is not good in itself... not "intrinsically" good. It's intrinsically bad. And anyone who has experienced both pleasure and pain will attest to this fact. Pleasure, on the other hand, needs no further justification.

I think some masochists and ascetics might disagree with you there, but I also reject any argument based on this line of thought. I don't care if everyone agrees that something is an intrinsic good; that doesn't necessarily mean it is an intrinsic good in some objective, real sense. It just means that everyone says it is, which is hardly evidence of anything.

It could serve as a linguistic definition for "good" in reference to that social construct, but at that point we have left the realm of discussing reality and we begin a debate about semantics, which is not something that I have much interest in.

I ultimately have issues with pleasure being the sole intrinsic good (as hedonists claim). I am not a hedonist. But any other monistic good pales in comparison to hedonism's axiomatic validity. Ultimately, I think that pleasure cannot be the only good. But I'm fairly convinced that it is either "one of the goods" or heavily involved in how "the single good," whatever that may be, is constituted. What I like about hedonism is that it is completely solid on the trip from axiom to realized truth. "Wellbeing" is vague. Desire satisfaction is fraught with contradiction. But hedonism makes sense.

I appreciate your defense of hedonism, because I agree that it is often misunderstood and erroneously associated with a variety of behaviors that most hedonists actually argue against, such as drug addiction. I reject all axioms as unproven, though. I don't believe in "self-evident" truths, with the possible exception of empirical observation, and even there I am agnostic about the truth values of observations and merely use them as premises for lack of anything else to apply logic to. I think all truths ultimately derive from reason, which is a method.

As such, I have yet to see someone successfully derive an "ought" without starting from some asserted "ought." Even the analytical method, where saying that something is a clock implies that it "ought to do what a clock does," I find to be unsupported.
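Here's a toy way to picture the gap. This is purely my own illustrative sketch (the predicate names and premises are made up, not anyone's actual argument); it just shows that a formal system only yields an "ought" if an "ought" is assumed somewhere:

```lean
-- Illustrative sketch only: a hypothetical "is" predicate, a hypothetical
-- "ought" predicate, and the bridge premise that smuggles in the normativity.
axiom Action : Type
axiom maximizesPleasure : Action → Prop   -- descriptive: an "is"
axiom ought : Action → Prop               -- normative: an "ought"

-- The asserted "ought" doing all the work:
axiom bridge : ∀ a : Action, maximizesPleasure a → ought a

-- With the bridge premise, the derivation goes through...
theorem derivedOught (a : Action) (h : maximizesPleasure a) : ought a :=
  bridge a h

-- ...but remove `bridge` and no proof of `ought a` can be built from
-- `maximizesPleasure a` alone. The "ought" out came from the "ought" in.
```

The clock example has the same shape: "this is a clock" only yields "it ought to tell time" once you assume "artifacts ought to do what they were made to do."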

I wanted to reply to a lot of the other things you said. I like rationalism a great deal too, for instance. But I got to talking about hedonism and I've probably gone on long enough.

I am interested in your further input.
 

vulcanlogician

Well-Known Member
I think that question has no valid answer, because whether something is "ethical" is not truth-apt. It's an evaluative claim. If I were a sadist, I would probably want everyone to get the pin prick.

Idk, there are reasons for anyone to reject a moral system. There is nothing written in the sky saying "you have to accept this ethical view." No matter what moral determination you make, there will always be opponents. If I say "stealing is wrong," of course thieves are prone to disagree.

I'm not sure what the objectively correct moral system is, or even if such a system exists. But I know this: if an objective moral system DID exist, sadists would probably disagree with it. As I write this, I'm reminded that "egoism" is a value system that sadists might be prone to agree with... but still... you get my point. We needn't give sadists veto power over our moral hypotheses.

I think some masochists and ascetics might disagree with you there, but I also reject any argument based on this line of thought. I don't care if everyone agrees that something is an intrinsic good; that doesn't necessarily mean it is an intrinsic good in some objective, real sense. It just means that everyone says it is, which is hardly evidence of anything.

Disagreement doesn't prove anything. Nor does any ethical system rest on the foundation that "everyone agrees with this." Plenty of people think the Earth is flat. Is that a reason to think there is no real truth about the Earth's shape?

Masochists are quintessential hedonists. Often, they don't even like pain at all. What they like is entering into a dominant/submissive power structure. Such power structures are enacted through the dominant participant causing pain to the submissive one. The "masochist" derives great pleasure from participating in such a power dynamic, so... to the masochist, pain is an "instrumental good" that leads to the ultimate good which, as hedonists agree, is pleasure. Even if there were a hypothetical masochist out there who simply "liked pain," we'd be left to wonder... "Do they like pain because they ultimately find it... pleasurable?" If so: the hedonists win again.

In the case of ascetics, I'm tempted to bring the whole "instrumental good" argument in once again. It's hard to make sweeping generalizations about ascetics, because there is some variety among them, but a chief concern of theirs seems to be to escape the bondage of physical habit to (perhaps) achieve communion with something divine. There's nothing indicating that they prefer pain to pleasure generally or that they would institute the pin prick for everyone. Even though they engage in self-denial, they may still think it is ethically correct to choose the strawberry for everyone in the world. They may even choose the strawberry for themselves.

It could serve as a linguistic definition for "good" in reference to that social construct, but at that point we have left the realm of discussing reality and we begin a debate about semantics, which is not something that I have much interest in.

Neither do I. I would not be satisfied with an ethics that is only true due to a social construct. A good example is believing something is wrong simply because God forbade it. To me, THAT is an ethics that depends on a social construct (in this case, a religious one). What I'm interested in is whether something in ethics can be determined to be objectively true independent of religion or culture. That is one of the big questions in ethics.

In metaethics, the three main theories are realism, nihilism, and relativism. I think that relativism can be ruled out. So, in my mind, the real argument is between realism and nihilism.

There are ways of understanding ethics that allow one to counter Hume's "is/ought" problem. One is the "moral nonnaturalism" championed by G.E. Moore. Moore's thinking is very interesting, as he is pretty much a utilitarian. But the "goodness" that Moore wants to maximize is not pleasure, nor is it welfare. To Moore, goodness is a simple concept: "good" indubitably contains things like pleasure and welfare... but is not reducible to any of these definitions. Several logical errors have been unearthed in Moore's line of reasoning, and they are genuine mistakes. But I still think Moore makes a compelling argument. If anything, I appreciate Moore taking us back to Plato by insisting that good is an irreducible concept. It is very much reminiscent of Plato's "form of the good" or "goodness itself."

We can talk about moral naturalism (which must answer Hume) and moral nonnaturalism (which avoids Hume) if you wish. I'm not sure how into metaethics you are. But I'm curious which metaethical theory you think is most sound (realism, relativism, nihilism, or something else). If you want to discuss whether moral nonnaturalism truly evades Hume, we can do that too. The issue is far from settled.

I am interested in your further input.

I mean, yeah. I love rationalism. I like Plato very much. My avatar is one of the greatest rationalists in history. Even Descartes I like very much... though I don't agree with many of his conclusions. Noam Chomsky has advanced the notion that we must have a set of innate ideas from which we develop our further ideas. (His theory is based on the fact that language seems to require a set of such innate ideas.) So that seems to speak in favor of one of rationalism's central tenets. (But, tbh, I'm not sure that innate ideas, as understood by humans, are required to argue in favor of rationalism. But that's a long story.)

We must remember, though, that the difference between rationalism and empiricism was established after the fact, as a historical retrospective.

While Locke and Hume argued against the tenets of "Rationalism," they didn't think that's what they were doing. They thought they were merely disagreeing with Descartes on some of his propositions. And furthermore, I think it's incorrect to say that Plato, Descartes, and Spinoza were not empiricists in the plain meaning of the word (i.e., that the senses deliver correct information about reality). I think they all acknowledged this.

But consider Plato's metaphor of the boat carried by its sails upon the winds: when the wind finally dies down, you must complete the voyage by the power of your crew as they row with their oars. To Plato, empirical observation is but one part of the journey. The second part is where you put your mind to work (using logic) to understand what you have observed.

If this counts as rationalism... and I think it does... then we must concede that rationalism is just as much a part of science as empiricism.

I guess I neglected to mention Leibniz in all that. I'm not a big fan based on what I've read thus far. But, I'm not all that familiar with Leibniz. So I can't really say all that much.
 

Ella S.

*temp banned*
Disagreement doesn't prove anything. Nor does any ethical system rest on the foundation that "everyone agrees with this." Plenty of people think the Earth is flat. Is that a reason to think there is no real truth about the Earth's shape?

Disagreement does disprove your statement that "anyone who has experienced both pleasure and pain will attest" that the "plain experience of suffering is not good." That's all I was referring to.

Even if there were a hypothetical masochist out there who simply "liked pain," we'd be left to wonder... "Do they like pain because they ultimately find it... pleasurable?" If so: the hedonists win again.

If.

In the case of ascetics, I'm tempted to bring the whole "instrumental good" argument in once again. It's hard to make sweeping generalizations about ascetics, because there is some variety among them, but a chief concern of theirs seems to be to escape the bondage of physical habit to (perhaps) achieve communion with something divine. There's nothing indicating that they prefer pain to pleasure generally or that they would institute the pin prick for everyone. Even though they engage in self-denial, they may still think it is ethically correct to choose the strawberry for everyone in the world. They may even choose the strawberry for themselves.

Yet their pain isn't instrumental to further pleasure, but to divinity. You could redefine that as a pleasure, but I think that's stretching "pleasure" a bit too far. If we can say that ascetics are hedonists, I question whether the term "hedonism" really means anything at all.

Neither do I. I would not be satisfied with an ethics that is only true due to a social construct. A good example is believing something is wrong simply because God forbade it. To me, THAT is an ethics that depends on a social construct (in this case, a religious one). What I'm interested in is whether something in ethics can be determined to be objectively true independent of religion or culture. That is one of the big questions in ethics.

In metaethics, the three main theories are realism, nihilism, and relativism. I think that relativism can be ruled out. So, in my mind, the real argument is between realism and nihilism.

There are ways of understanding ethics that allow one to counter Hume's "is/ought" problem. One is the "moral nonnaturalism" championed by G.E. Moore. Moore's thinking is very interesting, as he is pretty much a utilitarian. But the "goodness" that Moore wants to maximize is not pleasure, nor is it welfare. To Moore, goodness is a simple concept: "good" indubitably contains things like pleasure and welfare... but is not reducible to any of these definitions. Several logical errors have been unearthed in Moore's line of reasoning, and they are genuine mistakes. But I still think Moore makes a compelling argument. If anything, I appreciate Moore taking us back to Plato by insisting that good is an irreducible concept. It is very much reminiscent of Plato's "form of the good" or "goodness itself."

We can talk about moral naturalism (which must answer Hume) and moral nonnaturalism (which avoids Hume) if you wish. I'm not sure how into metaethics you are. But I'm curious which metaethical theory you think is most sound (realism, relativism, nihilism, or something else). If you want to discuss whether moral nonnaturalism truly evades Hume, we can do that too. The issue is far from settled.

I'm an emotivist. I think people express what they find meaningful or what they approve of as "good," and then they become so emotionally attached to these values that they demand that other people agree with them. I think that's really all ethics is. It's the same kind of psychosocial dynamic that we see in people who are hardcore fans of a TV show or movie they really like.

This is especially demonstrated by the pervasive reliance on "self-evident truths" and "moral intuitionism," which I think are merely thin disguises for appeals to emotion.

As such, I wouldn't say that I'm a realist, relativist, or nihilist. I just think that ethical statements aren't truth-apt at all, because they're merely social expressions of private feelings. We know that much of our morality is informed by pro-social emotions like empathy, compassion, disgust, and shame; this has been demonstrated in a variety of studies in moral psychology.
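If it helps, here's how I'd picture the difference in types. This is a toy sketch of my own (the names are made up, nothing official), just to illustrate what "not truth-apt" means:

```lean
-- Illustrative sketch: a cognitivist treats a moral sentence as a proposition;
-- an emotivist treats it as an expressed attitude.
inductive Attitude where
  | approval
  | disapproval

-- Cognitivist reading: something apt to be true or false.
axiom stealingIsWrong : Prop

-- Emotivist reading: "Boo, stealing!" is an attitude, not a proposition.
def booStealing : Attitude := Attitude.disapproval

-- It is truth-apt to report THAT an attitude was expressed...
example : booStealing = Attitude.disapproval := rfl
-- ...but the attitude itself has no truth value to evaluate;
-- asking whether `booStealing` is "true" is a type error.
```

On my view, moral talk lives on the `Attitude` side, which is why asking whether it is "true" is a category mistake.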

I realize that this is a niche concept in the field of ethics, but outside of ethics I think this realization is quite well-established. As such, I don't really see what the point of talking about ethics is unless we agree upon a shared set of values, but values are so idiosyncratic that I'm not sure this will ever happen.

It does lead me to something resembling existentialism, where meaning is created by the individual and doesn't exist as some sort of transcendent principle, but existentialists also seem to not fully grasp this. They make a lot of unfounded assertions about the restrictions on these meanings, about the necessity of living authentically, etc., that make me feel like they've somehow missed the point.

But I don't agree with the nihilists that reject the notion of meaning itself. I don't think the people who feel that their lives are meaningful are deluded; I think they just feel that their lives are meaningful. And if so much of ethical philosophy is based around understanding that sort of transcendent meaning or value, and it turns out to be so idiosyncratic, then I think ethics should be done away with entirely as an outdated field.

Ethics seems to me to be a form of superstition that has somehow gained a foothold even within secular spaces that I'd otherwise expect to know better. I think the idea that one has some sort of right to force others to follow one's will is intoxicating, and I really think that's all ethics boils down to. Even when ethicists value freedom, they demand that everyone else value freedom too, and in a particular way.

To which I have to say, I don't care what they feel I should do with my life. It's my life. I'm going to live it how I want to.
 

vulcanlogician

Well-Known Member
As such, I wouldn't say that I'm a realist, relativist, or nihilist. I just think that ethical statements aren't truth-apt at all, because they're merely social expressions of private feelings. We know that much of our morality is informed by pro-social emotions like empathy, compassion, disgust, and shame; this has been demonstrated in a variety of studies in moral psychology.

[Image: metaethics flowchart]


There is an even better theory flowchart out there, but it comes out tiny when I try to post it here. If you follow this link and scroll down a page, you'll see it. I'm working from a theoretical framework that categorizes emotivism as a sort of moral nihilism. (Although I've started calling it "antirealism" recently, because 1) nihilism is a charged word, and 2) it can be confused with a rejection of meaningfulness, as with you.) I often shorten the ethical theory "moral nihilism" to "nihilism" within discussions of ethics... but I never mean "general nihilism" unless the topic shifts to the question of beliefs and meaningfulness.

According to the theoretical framework I was educated within, error theory and all non-cognitivist theories (including emotivism) fall under the umbrella of moral nihilism. [In the flowchart above, everything not under realism is moral nihilism.] I've since come to find that not all ethicists work within this exact theoretical framework, so we can categorize emotivism any way you wish... after all, it's ultimately arbitrary what categorizations we use... but at the very least, this clears up what I mean when I say "nihilism" or "moral nihilism."

Anyway, I'm delighted to discuss the issue with an emotivist. I've always thought that the real issue is between nihilism and realism. After having been a moral relativist for 15 years of my life and eventually rejecting it, the only options left on the table for me are: realism, error theory, and non-cognitivism (including emotivism as the best non-cognitivist candidate).

(But, as I said before, I'm working with a specific theoretical framework... the one described by the flowchart in the link I provided above.) I've learned that a different framework classifies emotivism and error theory as forms of relativism, and only pits realism against relativism. The thing I like about the framework I use is that it seems more specific. For instance, the first order of contention between you and me is whether moral statements are beliefs or not. It's one specific issue that we could potentially resolve through argument. I tend to think moral statements do qualify as beliefs about reality (like the belief that "the earth revolves around the sun" or the belief that "the sun revolves around the earth"). To me, moral statements are factual claims about reality. As such, we could potentially determine whether they are correct or incorrect. Even the error theorists think that all moral statements are beliefs... even though they add that none of these beliefs can be true.

This is especially demonstrated by the pervasive reliance on "self-evident truths" and "moral intuitionism," which I think are merely thin disguises for appeals to emotion.

You have to have a basis of axioms upon which you build your theories. Mathematicians and physicists require the same. We start with an axiom like "the shortest distance between two points is a straight line." We see what logically follows from it and what doesn't. How else can one proceed? Isn't that a process that tries to avoid biases?
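Just to illustrate what I mean by "seeing what follows," here is a toy sketch of my own, with made-up names; it's the shape of the method, not a serious formalization of ethics:

```lean
-- Posit the candidate axiom and check what does and doesn't follow from it.
axiom Experience : Type
axiom good : Experience → Prop
axiom pleasure : Experience
axiom pain : Experience

-- The axiom under discussion: pleasure is good.
axiom pleasureIsGood : good pleasure

-- One thing that follows: at least one experience is good.
theorem someExperienceIsGood : ∃ e : Experience, good e :=
  ⟨pleasure, pleasureIsGood⟩

-- And note what does NOT follow: nothing here proves `good pain`.
-- That is the whole exercise: axiom in, consequences out, biases checked.
```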

It's true that even a cold logical process, carried from axiom to theory, can be affected by biases. But I do insist that we be specific about where exactly the moral realist process derails. It's unfair to say that looking for an axiomatic foundation is itself biased. It isn't. Tons of unbiased ventures do the same.

I am a human being with biases. I admit as much. My biases may affect my conclusions. I don't deny it. I welcome all criticism, because without it, the debate can't be honest.

When I say, "moral statements are capable of being true"... what I really mean is: "If we agree to certain things and accept those things as objectively true- AND- moral determinations can follow from those objectively true statements, then some moral statements can be objectively true."

THAT is my thesis. Even if I'm correct, I still have all my work ahead of me to demonstrate which moral beliefs are facts and which are falsehoods.

I have more to say, but I'll leave it there for now.
 

Ella S.

*temp banned*
You have to have a basis of axioms upon which you build your theories. Mathematicians and physicists require the same. We start with an axiom like "the shortest distance between two points is a straight line." We see what logically follows from it and what doesn't. How else can one proceed? Isn't that a process that tries to avoid biases?

It's true that even a cold logical process, carried from axiom to theory, can be affected by biases. But I do insist that we be specific about where exactly the moral realist process derails. It's unfair to say that looking for an axiomatic foundation is itself biased. It isn't. Tons of unbiased ventures do the same.

I think there's a difference. In mathematics, physics, logic, and linguistics, axioms are used as a method of communication. They aren't said to have a concrete, real truth value of their own so much as they are a standard convention for abstracting reality itself.

In essence, axioms in these fields help inform methodology. The issue I see with aesthetic and ethical axioms is that they aren't just being used to develop standards of language, but are used to assert truths in and of themselves. I think this is less than ideal. In fact, I would say that it carries the same issue that "Reformed Epistemology" does, to wrap back around to the start of this thread.

I would also say that it's better, in general, to reduce the number of axioms one relies on as much as possible, so even if axioms are a mainstay in some fields, there can be approaches in others that reduce the axioms further. This is to avoid making too many assumptions, because building on top of assumptions can taint a whole line of reasoning.

The reason I say the axioms in ethics are particularly biased is that they have no widespread agreement. This is actually an issue with philosophy as a whole. I see a lot of philosophers making speculative assertions, and then using rhetoric to defend them and attack those who disagree with them. It seems standard for the field, but I try to hold myself to a higher standard of truth than that.

I am a human being with biases. I admit as much. My biases may affect my conclusions. I don't deny it. I welcome all criticism, because without it, the debate can't be honest.

When I say, "moral statements are capable of being true"... what I really mean is: "If we agree to certain things and accept those things as objectively true- AND- moral determinations can follow from those objectively true statements, then some moral statements can be objectively true."

THAT is my thesis. Even if I'm correct, I still have all my work ahead of me to demonstrate which moral beliefs are facts and which are falsehoods.

I have more to say, but I'll leave it there for now.

I am also quite biased, as I think this discussion likely shows, since I favor an epistemological form of process reliabilism that focuses on logic.

Under logic, moral statements are not capable of being true without additional assumptions, unless we are engaging in descriptive morality rather than normative ethics. That's the real crux of my issue. We have no way of actually proving the axioms that ethics relies on, so most of ethical philosophy seems like it's built on pillars of sand to me.
 

vulcanlogician

Well-Known Member
I think there's a difference. In mathematics, physics, logic, and linguistics, axioms are used as a method of communication. They aren't said to have a concrete, real truth value of their own so much as they are a standard convention for abstracting reality itself.

In essence, axioms in these fields help inform methodology. The issue I see with aesthetic and ethical axioms is that they aren't just being used to develop standards of language, but are used to assert truths in and of themselves. I think this is less than ideal. In fact, I would say that it carries the same issue that "Reformed Epistemology" does, to wrap back around to the start of this thread.

I think we need to be clear about what we mean by "axiom." To me, an axiom should be obvious from the get-go. So obvious that we feel comfortable assuming it is true.

Sure, a "self-evident" truth may turn out to be false (ie. the sun revolves around the Earth). But, at the same time, it is acceptable to assume the painfully apparent thing in the absence of contrary evidence. I don't blame Aristotle for assuming the sun circled around the Earth. Seeing that the sun -apparently- revolves around the Earth is the first step to understanding that the opposite is true. Even if an axiom turns out to be false, I think we absolutely must assume that obvious things are as they appear ... at least at first... at least until there is an adequate challenge to the idea. An ancient Greek who believes that the Earth revolves around the sun is only correct if he can explain why. Otherwise, he is just guessing.

Axioms are not something that I may insist that other people believe, simply because I think they're obvious. They aren't a shortcut to knowledge or a "get out of jail free card" when it comes to what is required of me concerning the justification of my claims. They're more of a starting point in an investigation: the obvious assumption that we begin with.

"Pleasure is good and pain is bad." If we take a sec and imagine the sensation of pleasure... and then imagine the sensation of pain, it's obvious that one has an almost inherent quality of "this is good"... while the other has the quality of "this is bad." Wouldn't you agree? Or, and I suppose this is the more important question, "Would you deem a person who accepted such an axiom (that pleasure is good and pain is bad) irrational?"
 

Ella S.

*temp banned*
I think we need to be clear about what we mean by "axiom." To me, an axiom should be obvious from the get-go. So obvious that we feel comfortable assuming it is true.

Sure, a "self-evident" truth may turn out to be false (ie. the sun revolves around the Earth). But, at the same time, it is acceptable to assume the painfully apparent thing in the absence of contrary evidence. I don't blame Aristotle for assuming the sun circled around the Earth. Seeing that the sun -apparently- revolves around the Earth is the first step to understanding that the opposite is true. Even if an axiom turns out to be false, I think we absolutely must assume that obvious things are as they appear ... at least at first... at least until there is an adequate challenge to the idea. An ancient Greek who believes that the Earth revolves around the sun is only correct if he can explain why. Otherwise, he is just guessing.

Axioms are not something that I may insist that other people believe, simply because I think they're obvious. They aren't a shortcut to knowledge or a "get out of jail free card" when it comes to what is required of me concerning the justification of my claims. They're more of a starting point in an investigation: the obvious assumption that we begin with.

"Pleasure is good and pain is bad." If we take a sec and imagine the sensation of pleasure... and then imagine the sensation of pain, it's obvious that one has an almost inherent quality of "this is good"... while the other has the quality of "this is bad." Wouldn't you agree? Or, and I suppose this is the more important question, "Would you deem a person who accepted such an axiom (that pleasure is good and pain is bad) irrational?"

That's quite fair. Yes, I would.

Here's why: to me, pleasure (hedone) is a vice and a form of pathos, because I'm a Stoic. "Rational" for me is closely tied to the Socratic rationalism on which Stoic virtue is based, along with influence from Cynicism.

Granted, many modern forms of hedonism have a much broader concept of what "pleasure" means, but I'm skeptical of this expansion of the term. To me, it seems as if "pleasure" has been conflated with "eudaimonia" and "ataraxia," but I think these are distinct concepts.

The focus on pleasure was a large source of contention between the Stoics and the Epicureans, and I think it's an argument that's still being rehashed to this day in various forms. I think calling them both hedonist philosophies paints with too broad a brush and obscures more than it reveals.

Although I think I probably would be a hedonist, or at least have hedonistic tendencies, based on the way you've defined it in this discussion.
 