I'm not sure how this analysis will go over in this forum, but here goes.
First, some background. Mathematics is proof based. The original example
of how math is done is Euclid's Elements, the first geometry 'textbook'.
What Euclid did was give a number of axioms and postulates: basic assumptions
of how things work that he considered to be 'intuitively obvious'. He then
proceeded to use those assumptions to prove various things about geometry,
numbers, etc.
But there was one postulate, the parallel postulate, that just didn't seem to
be as 'intuitively obvious' as the rest of Euclid's assumptions. Many people
over many centuries tried to prove this postulate from the other assumptions,
but failed to do so. So, while nobody actually believed the parallel postulate
to be false, nobody was able to prove it either.
Then, about 200 years ago, people started working with a system of geometry
that assumed the parallel postulate was false. And, what they eventually found
is that geometry with the parallel postulate and geometry with its negation
are *equally* consistent internally. In fact, using either, you can construct
a 'model' where the assumptions of the other hold.
This shook the foundations of mathematics. Math had always been held as the
most certain area of study. It was used by Plato as a crucial example showing the
existence of his 'Forms'. It had inspired Descartes to re-examine his assumptions,
leading to his break with Aristotelian philosophy.
But now, there were *two* equally consistent theories of geometry. Neither could
be discarded solely on the basis of logic. They were equally consistent internally.
One aspect of this revolution was that mathematicians started looking much closer
at the basic assumptions of *all* of math, even those underlying basic arithmetic.
One goal was to find a collection of assumptions that is consistent--where it could
be *proved* that no inconsistencies would ever be derived. It was also a goal that
it could be proved that these assumptions could answer any (mathematical) question,
this being called 'completeness'. It was felt that, even if we didn't have all the
'correct' axioms, there should be a collection of axioms that is both consistent
and complete, and finding that collection of assumptions became a central goal for
many mathematicians in the early 1900's.
Then, a mathematician named Kurt Gödel came along. He *proved* that any system of
axioms that is unambiguous (in a well-defined sense) could NOT be both consistent
and complete if it was strong enough to deal with ordinary arithmetic.
So, the problem that had been raised by the parallel postulate in geometry was shown
to exist even in ordinary, elementary school arithmetic: it is simply not possible
to even prove arithmetic is consistent without making assumptions stronger than
arithmetic itself.
Moreover, any system of axioms strong enough to deal with arithmetic *must* have
questions it cannot answer. If consistent, it simply cannot be complete (and unambiguous
in the sense above). There MUST be questions that are unanswerable without making
additional assumptions. We say that such questions are independent of our axioms.
And we know of many independent questions in mathematics: questions that simply
cannot be answered one way or the other from the axioms we use for modern math.
Some have since been adopted as additional assumptions (the Axiom of Choice). Others are
still hotly debated (the Continuum Hypothesis). Others are obscure even to most
mathematicians (Martin's Axiom).
For ALL of these, we get different theories of *mathematics* by either assuming them
OR assuming them to be false. And *both* ways are equally consistent internally.
In a sense, these questions have no actual truth value, even though they are meaningful
and some are even important. You can use either the statement or its negation equally
well and have a mathematics that is equally consistent either way.
Bummer. Well, sort of.
Now, I have been pondering. We all make assumptions about the 'real world'. Among those
is that we are not the only conscious being, that there is an external world that our
senses can perceive, however imperfectly, that our memories are reliable to at least
some extent, etc.
We use these assumptions to move around in the world, learn how that world works, etc.
We develop science and technology, etc. We make art, music, and love others.
Then comes the issue of religion.
People have many different views of what things are true in their religions. There are
multitudes of different religions (especially through time) with huge variations on what
was believed, thought important, etc.
What if the existence of a supernatural is simply inconsistent?
What if the existence of a supernatural is neither provable nor disprovable from our other
assumptions? What if the question really doesn't have a truth value at all?
Now, if this is the case, what are the advantages and disadvantages of the assumptions either way?
Mathematics is actually less proven than you think. Yes, granted, like science it is intended to be proven. But in the last century or so, we have had New Math, Common Core, and branches of math that clearly haven't been tested. Of these, Order of Operations is particularly galling. I'm gonna skip geometry and show how math has gotten imprecise.
You see, Order of Operations is a nice theory, but seldom works in actual practice. If you are a school teacher and need to average grades, your best tool is not in fact an expensive calculator, but a cheap solar calculator. Same for library work.
Why? Because Order of Operations turns a simple average, say...
84+86+85/3
into 84+86+(85/3) or in other words, that becomes a fraction.
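For what it's worth, any programming language with standard operator precedence reproduces exactly the split described here. A quick Python sketch using the grades from the example above:

```python
# Standard precedence: division binds tighter than addition,
# so the expression is parsed as 84 + 86 + (85 / 3).
standard = 84 + 86 + 85 / 3

# The average the teacher actually wants requires explicit grouping.
intended = (84 + 86 + 85) / 3

print(standard)   # roughly 198.33, not an average at all
print(intended)   # 85.0
```

A cheap solar calculator gives 85 because it applies each key press immediately, left to right, with no precedence rules at all.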
Suppose I'm doing a word problem, and they tell me that I have 2 apples and another grocer sells me 4 more, and by the end of the day, I sold half. How many apples are left?
Using basic logic, math should be consistent, meaning I have six apples, sold half, and now have three (3) apples.
Using elementary school math, 2+4/2 is 3.
Using parentheses CORRECTLY, (2+4)/2 is 3.
Using Order of Operations, they expect you to believe that 2+4/2 is four. Any grocer who is looking at the remaining apples is like BULL****.
Effectively, Order of Operations doesn't work for averages, even if it does work for most basic math. It is the equivalent of defaulting to 2+(4/2), which nobody asked you to do.
And so, I'm going to offer a revised theory: the PEARL (named after my niece) Method of Mathematical Order.
Parentheses
Exponents
And
Right from
Left
(It should be left to right but I rephrased it to make it work as an acronym)
Basically, you solve inside the parentheses, then you square, cube, root, etc. the parentheses, then you work left to right, adding parentheses as you go to avoid sloppy association. Like the comma in "Let's eat, Grandma", parentheses prevent ambiguity, and there is no excuse to skip them.
Using my theory, 2+rt9/2 is easily solved:
Exponents first: rt9 is 3.
Then left to right: (2+3)/2 is 2.5.
Unless that last bit is a fraction, in which case you would associate
2 + (rt9/2)
or 2 + (3/2)
or 2 + (1.5)
or 3.5
But unless noted as a fraction, it's unclear.
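The left-to-right step of PEARL is easy enough to sketch in code. This is my own illustration, not anything standard: the function name and the flat token-list format are invented here, and parentheses/exponents are assumed to have been resolved already (e.g. rt9 becomes math.sqrt(9)):

```python
import math

def pearl_eval(tokens):
    """Evaluate a flat list like [2, '+', 3, '/', 2] strictly left to
    right, ignoring the usual precedence rules, as the PEARL method
    reads an unparenthesized run of operations."""
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b,
           '/': lambda a, b: a / b}
    result = tokens[0]
    # Walk the operator/operand pairs in order, folding as we go.
    for op, value in zip(tokens[1::2], tokens[2::2]):
        result = ops[op](result, value)
    return result

# 2 + rt9 / 2 read left to right: (2 + 3) / 2
print(pearl_eval([2, '+', math.sqrt(9), '/', 2]))   # 2.5

# Standard Order of Operations gives the "fraction" reading instead:
print(2 + math.sqrt(9) / 2)                          # 3.5
```

The two print statements make the disagreement concrete: same symbols, two different answers depending on which convention you adopt, which is really the whole point of the thread above.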