
How much faith do you have in science?

youknowme

Whatever you want me to be.
Poor practice is catching up with science [1-3], manifesting in part in the failure of results to be reproducible and replicable [4-7]. Various causes have been posited [1,8], but we believe that poor statistical education and practice are symptoms of and contributors to problems in science as a whole.

The problem is one of cargo-cult statistics – the ritualistic miming of statistics rather than conscientious practice. This has become the norm in many disciplines, reinforced and abetted by statistical education, statistical software, and editorial policies.

At the risk of oversimplifying a complex historical process, we think the strongest force pushing science (and statistics) in the wrong direction is existential: science has become a career, rather than a calling, while quality control mechanisms have not kept pace [9].

Some, such as historian and sociologist of science Steven Shapin, still argue that science survives thanks to the ethical commitment of scientists [10], but others, such as philosopher of science Jerome Ravetz, find this a charitable perspective [11,12]. Much of what is currently called “science” may be viewed as mechanical application of particular technologies, including statistical calculations [13], rather than adherence to shared moral norms [14,15].

We believe the root of the problem lies in the mid-twentieth century.

Cargo-cult statistics and scientific crisis | Significance magazine

Here is the ASA's (American Statistical Association) statement on p-values:

https://amstat.tandfonline.com/doi/abs/10.1080/00031305.2016.1154108#.XFefHlVKiCh


Here is a very simple short video on the p-value to give you a basic idea what a p-value is:



Some consider science to be in a state of crisis, in part because researchers have been using the p-value in a way it was never meant to be used: to get published, they push for "statistical significance" regardless of whether it actually means anything worthwhile.
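
For anyone who can't view the video, here is a minimal sketch of the same idea in Python (assuming SciPy is available; the coin-flip numbers are hypothetical). The p-value is the probability of seeing data at least this extreme if the null hypothesis were true:

```python
# Minimal sketch: computing a p-value for a coin-flip experiment.
# Null hypothesis: the coin is fair (p = 0.5).
from scipy.stats import binomtest

heads, flips = 62, 100  # hypothetical data: 62 heads in 100 flips
result = binomtest(heads, flips, p=0.5, alternative='two-sided')

# The p-value is the probability of data at least this extreme
# *if the null were true* -- it is NOT the probability that the
# null hypothesis is true.
print(f"p-value: {result.pvalue:.4f}")  # ~0.02
```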

Think of all the studies you see in the news day in and day out: Do you really think science happens that fast? I see several people on these forums chucking statistics at each other as if they are facts, and a few of them have even insisted that statistics are facts. So my question is: How much faith do you have in science?
 

Polymath257

Think & Care
Staff member
Premium Member
It always amazes me that *any* scientist uses a p-value as large as .05. Let's face it, a 1/20 chance of the result being due to chance is way too high for any real confidence in the result.

Instead, what would happen if we started using, say, a maximum p value of .0005?

Now, instead of a 1 in 20 chance of the result being due to chance, we have a 1 in 2,000 chance. Still, given the thousands of tests being done, there will be some 'false positives' that get through, but many, many fewer than is now the case.

That would greatly increase the overall confidence in our results and, most likely, improve the overall image of science among the lay public.

The main downside? Some scientists would have to work a lot harder to get publishable results. I bet that would be a good thing.

In particular, way too many results in the medical journals use a p<.05 standard. This, in my mind, is almost criminal. It is also a standard in many psychological journals.
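
To put rough numbers on the argument above, here is a quick simulation (a sketch, assuming NumPy) of what the two thresholds do when every tested hypothesis is actually null:

```python
# Sketch: false positives at alpha = .05 versus alpha = .0005
# when every null hypothesis is actually true.
import numpy as np

rng = np.random.default_rng(0)
n_tests = 100_000
# Under a true null, p-values are uniformly distributed on [0, 1].
p_values = rng.uniform(0, 1, n_tests)

for alpha in (0.05, 0.0005):
    false_positives = int((p_values < alpha).sum())
    print(f"alpha={alpha}: {false_positives} false positives out of {n_tests}")
# Expect roughly 5,000 at .05 but only about 50 at .0005.
```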
 

Brickjectivity

wind and rain touch not this brain
Staff member
Premium Member
Think of all the studies you see in the news day in and day out: Do you really think science happens that fast? I see several people on these forums chucking statistics at each other as if they are facts, and a few of them have even insisted that statistics are facts. So my question is: How much faith do you have in science?
Those few people who understand what a p-value is understand that it takes multiple studies, and reputations on the line, for a result to amount to anything worth paying attention to. That being said, a study is not without value, but it's not news and shouldn't be put forward as news in the Daily Mail etc. People love to gossip though, and that's why we like to talk about interesting statistics.
 

youknowme

Whatever you want me to be.
It always amazes me that *any* scientist uses a p-value as large as .05. Let's face it, a 1/20 chance of the result being due to chance is way too high for any real confidence in the result.

Instead, what would happen if we started using, say, a maximum p value of .0005?

Now, instead of a 1 in 20 chance of the result being due to chance, we have a 1 in 2,000 chance. Still, given the thousands of tests being done, there will be some 'false positives' that get through, but many, many fewer than is now the case.

That would greatly increase the overall confidence in our results and, most likely, improve the overall image of science among the lay public.

The main downside? Some scientists would have to work a lot harder to get publishable results. I bet that would be a good thing.

In particular, way too many results in the medical journals use a p<.05 standard. This, in my mind, is almost criminal. It is also a standard in many psychological journals.

Most statisticians are aware of these problems; it is the non-statistician scientists who rely heavily on the .05, but then that is what they are taught to do. They are taught the hows of doing the tests, but not the whys behind them.

But there has been a big push to drop "significance" testing and move to strength-of-evidence statements, where a p-value does not have an arbitrary cutoff for rejecting the null; instead, we consider the evidence against the null stronger the smaller the p-value gets. Schools are also pushing for even non-math STEM students to understand what the p-value actually means.

I suppose a strength-of-evidence scale is something of an improvement, but I still have my criticisms of it. It is still arbitrary, and personally I am not sure a statistician should be drawing conclusions from a p-value unless they also happen to be a subject matter expert in what they are studying. Also, strength-of-evidence statements may be good for basic hypothesis testing, but there are other statistical tests where they would become awkward to interpret, like multiple comparison tests.
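
To illustrate the multiple-comparison point, here is a sketch (plain Python; the p-values are hypothetical) of the commonly taught Bonferroni correction, where the per-test threshold shrinks with the number of tests and so resists any simple graded-evidence reading:

```python
# Sketch: Bonferroni correction for multiple comparisons.
# With m tests, each is judged at alpha/m to hold the
# family-wise error rate at alpha.
alpha = 0.05
p_values = [0.001, 0.013, 0.04, 0.2, 0.5]  # hypothetical results
m = len(p_values)

for p in p_values:
    verdict = "reject null" if p < alpha / m else "fail to reject"
    print(f"p={p}: {verdict} (threshold {alpha / m:.3f})")
# Only p=0.001 survives; 0.013 and 0.04 would have "passed" on their own.
```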
 

beenherebeforeagain

Rogue Animist
Premium Member
The physical sciences (physics, chemistry, etc.) can expect to find p-values many digits to the right of the decimal, but in the social sciences (psychology, sociology, etc.) and medicine, it's difficult to eliminate enough variables to have much of a chance of being as accurate as the physical sciences.
 

youknowme

Whatever you want me to be.
The physical sciences (physics, chemistry, etc.) can expect to find p-values many digits to the right of the decimal, but in the social sciences (psychology, sociology, etc.) and medicine, it's difficult to eliminate enough variables to have much of a chance of being as accurate as the physical sciences.

I would not bank on that. The world is a very complicated place, and trying to model it with statistics can get very messy.
 

youknowme

Whatever you want me to be.
Here is a dilemma.

Statistics is used to estimate unknown parameters; we estimate them because we don't have the means to find the true values. So if I build a regression model to explain the relationship between many different variables, how can I ever know if that model is true? The fact is I can't. There is no way for me to know, for sure, if that model is correct. With multiple studies I would be able to gain more confidence in my model if I kept finding similar results, but we push science so fast these days that this doesn't always happen.
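
A toy illustration of the dilemma (a sketch, assuming NumPy; the data are simulated): two structurally different models can fit the same sample almost equally well, and the fit alone cannot tell us which, if either, is the true one.

```python
# Sketch: two different models, nearly identical fit on one sample.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(1, 5, 40)
y = 2.0 * x + rng.normal(0, 1.5, x.size)  # truly linear data plus noise

line = np.polyfit(x, y, 1)  # Model A: straight line
quad = np.polyfit(x, y, 2)  # Model B: quadratic

for name, coef in (("linear", line), ("quadratic", quad)):
    residuals = y - np.polyval(coef, x)
    print(f"{name}: residual sum of squares = {residuals @ residuals:.1f}")
# The quadratic always fits at least as well in-sample, yet the extra
# term tells us nothing about the true data-generating process.
```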
 

youknowme

Whatever you want me to be.
It always amazes me that *any* scientist uses a p-value as large as .05. Let's face it, a 1/20 chance of the result being due to chance is way too high for any real confidence in the result.

Instead, what would happen if we started using, say, a maximum p value of .0005?

Now, instead of a 1 in 20 chance of the result being due to chance, we have a 1 in 2,000 chance. Still, given the thousands of tests being done, there will be some 'false positives' that get through, but many, many fewer than is now the case.

That would greatly increase the overall confidence in our results and, most likely, improve the overall image of science among the lay public.

The main downside? Some scientists would have to work a lot harder to get publishable results. I bet that would be a good thing.

In particular, way too many results in the medical journals use a p<.05 standard. This, in my mind, is almost criminal. It is also a standard in many psychological journals.

There is actually another issue associated with such an approach. You could use a significance level of 0.0005, which would reduce your chance of rejecting a true null hypothesis (Type I error), but you would also increase your chance of failing to reject a false null hypothesis (Type II error). So you can't just use a smaller significance level; at the moment, using a strength-of-evidence statement seems to be the route to take.
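
To make the trade-off concrete, here is a sketch (assuming SciPy; the effect size and sample size are hypothetical) of how the power of a one-sided z-test falls as the significance level is tightened:

```python
# Sketch: lowering alpha raises the Type II error rate (lowers power).
import math
from scipy.stats import norm

effect, sigma, n = 0.3, 1.0, 50        # hypothetical study parameters
shift = effect * math.sqrt(n) / sigma  # standardized mean shift

for alpha in (0.05, 0.0005):
    z_crit = norm.ppf(1 - alpha)       # one-sided critical value
    power = 1 - norm.cdf(z_crit - shift)
    print(f"alpha={alpha}: power={power:.2f}, Type II error={1 - power:.2f}")
# Here power falls from about 0.68 at .05 to about 0.12 at .0005.
```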
 

Ellen Brown

Well-Known Member
Cargo-cult statistics and scientific crisis | Significance magazine

Here is the ASA's (American Statistical Association) statement on p-values:

https://amstat.tandfonline.com/doi/abs/10.1080/00031305.2016.1154108#.XFefHlVKiCh


Here is a very simple short video on the p-value to give you a basic idea what a p-value is:



Some consider science to be in a state of crisis, in part because researchers have been using the p-value in a way it was never meant to be used: to get published, they push for "statistical significance" regardless of whether it actually means anything worthwhile.

Think of all the studies you see in the news day in and day out: Do you really think science happens that fast? I see several people on these forums chucking statistics at each other as if they are facts, and a few of them have even insisted that statistics are facts. So my question is: How much faith do you have in science?

What makes you think that scientists screw up less than religious folk? We are all deeply flawed.
 
It always amazes me that *any* scientist uses a p-value as large as .05. Let's face it, a 1/20 chance of the result being due to chance is way too high for any real confidence in the result.

At the macro level it is even more worrying. If 5% of all studies yield false positives, that could translate into maybe 20-60% of published results being such false positives.

Only a tiny fraction of studies get published, and publication is biased towards positive results, novelty, the unexpected, etc. Researchers need results to keep their jobs or funding and progress their careers, and whenever people have such motivations, some will undoubtedly look to game the system, and others will be credulous towards results that are beneficial to their future prospects rather than scientifically rigorous. Others will believe they are doing everything by the book, but simply lack the methodological or statistical knowledge to understand the flaws in their processes. Corporations, especially medical ones, will throw thousands of darts at the board, then promote all of the random hits as if they were meaningful. As a result, false results will always be disproportionately represented in scientific publication.
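
Base-rate arithmetic of the kind sketched below (plain Python; the prior and power are hypothetical) shows how a 5% threshold can plausibly produce figures in that 20-60% range:

```python
# Sketch: why a 5% false-positive rate can mean a far larger share
# of *positive* findings are false.
alpha = 0.05       # Type I error rate
power = 0.5        # chance of detecting a real effect
prior_true = 0.1   # hypothetical: 10% of tested hypotheses are real

true_positives = prior_true * power          # 0.050
false_positives = (1 - prior_true) * alpha   # 0.045

share_false = false_positives / (true_positives + false_positives)
print(f"Share of positive results that are false: {share_false:.0%}")  # ~47%
# Publication bias toward positive results then skews the mix further.
```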

I wonder at what level of error a discipline becomes pretty much functionally useless?
 

xir

New Member
It always amazes me that *any* scientist uses a p-value as large as .05. Let's face it, a 1/20 chance of the result being due to chance is way too high for any real confidence in the result.

Instead, what would happen if we started using, say, a maximum p value of .0005?

Now, instead of a 1 in 20 chance of the result being due to chance, we have a 1 in 2,000 chance. Still, given the thousands of tests being done, there will be some 'false positives' that get through, but many, many fewer than is now the case.

That would greatly increase the overall confidence in our results and, most likely, improve the overall image of science among the lay public.

The main downside? Some scientists would have to work a lot harder to get publishable results. I bet that would be a good thing.

In particular, way too many results in the medical journals use a p<.05 standard. This, in my mind, is almost criminal. It is also a standard in many psychological journals.

While I have to agree that 1/20 is quite a lot, I see some issues with using a much lower p-value as the cutoff.

Selecting a significance level as high as 5% has to do with the power of the test. The lower the significance level used, the lower the chance of identifying an effect that does exist. I'm sure you see how this can lead to erroneous conclusions as well. Using a significance level much lower than the current standard could make it easier for, e.g., pharmaceutical companies to deny that their medicines have dangerous side effects.

It also depends on the field and setting how large the effect is, and what the signal-to-noise ratio is. Some data types require tests that are less sensitive than the standard t-tests or z-tests.
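
As a sketch of that point (assuming NumPy and SciPy; the heavy-tailed data are simulated), a rank-based test and a t-test can disagree noticeably on the same sample:

```python
# Sketch: on heavy-tailed data, a t-test and a rank-based test
# can give quite different answers for the same two groups.
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(2)
a = rng.standard_cauchy(30) + 0.5  # heavy-tailed group, shifted up
b = rng.standard_cauchy(30)        # heavy-tailed group, no shift

print("t-test p-value:      ", ttest_ind(a, b).pvalue)
print("Mann-Whitney p-value:", mannwhitneyu(a, b).pvalue)
# The t-test's assumptions are badly violated by Cauchy data, so the
# rank-based test is usually the more trustworthy of the two here.
```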

My conclusion is that we need to create incentives for scientists to perform replication studies! No journal wants to publish something we "already know", which is why we have this publication bias in the first place. This really needs to change.
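
A back-of-the-envelope sketch (plain Python; the values are hypothetical) of why replication is such an effective filter: a false positive has to clear the threshold twice by luck, while a real effect replicates at the rate of the study's power.

```python
# Sketch: replication sharply filters out false positives.
alpha, power = 0.05, 0.8  # hypothetical per-study error rate and power

# A spurious "effect" must pass both the original study and an
# independent replication by chance alone:
print(f"false finding survives replication: {alpha * alpha:.4f}")  # 0.0025

# A real effect is detected both times with probability power^2:
print(f"real finding survives replication:  {power * power:.2f}")  # 0.64
```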
 
How much faith do you have in science?

Depends on the field.

Much social science is about as trustworthy as astrology; much of the physical sciences is incredibly reliable.

Folk should also be very sceptical about new findings in the medical sciences, particularly medicines. Also anything that involves complex systems: the body, the environment, genetic modification, etc. as we are nowhere near smart enough to consistently understand how changing specific variables in such systems will affect the whole.

For all their successes, the sciences are still one of the biggest sources of false information we have, and such false information is often disproportionately enduring 'because science'. They are also potentially great sources of harm, often driven by hubristic belief in our own rationality and intelligence.

Too many people treat science according to its normative aims, rather than as a real world human endeavour resplendent with all of the failings of any human activity.

In general, people have far too much faith in science, often driven by ideology.
 

youknowme

Whatever you want me to be.
Depends on the field.

Much social science is about as trustworthy as astrology; much of the physical sciences is incredibly reliable.

[. . .]

In a thread about science, I must ask: Do you have any evidence to support that?

I don't know what the difference between the two is, but even the physical sciences are suffering from this "crisis", because the problem is inherent in the methodology. If the social sciences suffer more, then it would be due to the difficulty of obtaining random samples and doing randomized experiments when working with humans. However, the problem of significance testing is a problem for any of the sciences that use such methods (these days almost all of science has been mathematized). Some sciences seem to be more keenly aware of the inherent problem in significance testing than others; ecology, for example, seems to be more keyed in.
 

Thief

Rogue Theologian
Cargo-cult statistics and scientific crisis | Significance magazine

Here is the ASA's (American Statistical Association) statement on p-values:

https://amstat.tandfonline.com/doi/abs/10.1080/00031305.2016.1154108#.XFefHlVKiCh


Here is a very simple short video on the p-value to give you a basic idea what a p-value is:



Some consider science to be in a state of crisis, in part because researchers have been using the p-value in a way it was never meant to be used: to get published, they push for "statistical significance" regardless of whether it actually means anything worthwhile.

Think of all the studies you see in the news day in and day out: Do you really think science happens that fast? I see several people on these forums chucking statistics at each other as if they are facts, and a few of them have even insisted that statistics are facts. So my question is: How much faith do you have in science?
so.....if you throw some numbers at the idea....
and then over think the situation.....

you could believe anything

see my recent thread title......This is a Test
 

Quintessence

Consults with Trees
Staff member
Premium Member
The physical sciences (physics, chemistry, etc.) can expect to find p-values many digits to the right of the decimal, but in the social sciences (psychology, sociology, etc.) and medicine, it's difficult to eliminate enough variables to have much of a chance of being as accurate as the physical sciences.

It works this way in biological sciences too, in many cases.


It's likely that different fields have different standards. There really wasn't a push for "statistical significance" in the field I trained in. That was something that my mentors were insistent upon dispelling, including the statistician I was working with at the time. In conservation biology, whether or not something is statistically significant is often irrelevant when we're dealing with topics that have both practical and policy implications. That's not to say the stats aren't important, but they are often not the driving factor in how you write up your research.
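
One way to see why statistical significance can be irrelevant to practical questions (a sketch, assuming NumPy and SciPy; the data are simulated): with a large enough sample, a negligible effect still clears the .05 bar.

```python
# Sketch: statistical significance without practical significance.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
n = 200_000
a = rng.normal(0.00, 1.0, n)
b = rng.normal(0.01, 1.0, n)  # true difference: 0.01 standard deviations

result = ttest_ind(a, b)
print(f"p-value: {result.pvalue:.4f}")  # likely well below 0.05
# "Significant", yet a 0.01-SD difference rarely matters for policy.
```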

Also, I'd be remiss not to post up one of my favorite webcomics again, because it's relevant to the issue (the problem is not so much the science itself as the interpretation thereof):

[webcomic image]
 

LuisDantas

Aura of atheification
Premium Member
Cargo-cult statistics and scientific crisis | Significance magazine

Here is the ASA's (American Statistical Association) statement on p-values:

https://amstat.tandfonline.com/doi/abs/10.1080/00031305.2016.1154108#.XFefHlVKiCh


Here is a very simple short video on the p-value to give you a basic idea what a p-value is:



Some consider science to be in a state of crisis, in part because researchers have been using the p-value in a way it was never meant to be used: to get published, they push for "statistical significance" regardless of whether it actually means anything worthwhile.

Think of all the studies you see in the news day in and day out: Do you really think science happens that fast? I see several people on these forums chucking statistics at each other as if they are facts, and a few of them have even insisted that statistics are facts. So my question is: How much faith do you have in science?
Statistics are facts. How those facts relate to claims of cause and effect is something else altogether.

It seems to me that you are identifying a very real problem, but entirely misattributing its causes and conceivable solutions. We are not supposed to have "faith in science". If it requires faith, then it is probably not science at all.

The article that you are quoting is superb, by the way.

But the matter at hand here is not even close to an excess of "faith in science", but rather one of neglecting the need for rigor of method and expression. To a very large extent, that is not even a flaw of the scientific community, but rather of the social and political environment, which goes out of its way to misunderstand and misrepresent scientific findings.

There is a reason why Feynman called that "Cargo Cult Science".
 

LuisDantas

Aura of atheification
Premium Member
Here is a dilemma.

Statistics is used to estimate unknown parameters; we estimate them because we don't have the means to find the true values. So if I build a regression model to explain the relationship between many different variables, how can I ever know if that model is true? The fact is I can't. There is no way for me to know, for sure, if that model is correct. With multiple studies I would be able to gain more confidence in my model if I kept finding similar results, but we push science so fast these days that this doesn't always happen.
Indeed, you cannot conclude that any model is true from statistical tests alone. You can become pretty certain, but even that is unlikely and in practice almost unheard of.

Then again, one of the reasons why it is so rare is because it is not meant to happen in the first place. Statistical models are great investigative tools, but almost demonstrably unsuitable for showing actual causation or causal relation.

For that you have to build an actual theoretical model and test it. You have to attain falsifiability. Your model should be capable of supporting claims that may be objectively tested for accuracy of prediction.

Sure, that can be challenging at the best of times, and arguably impossible in human and biological fields. We definitely should remind ourselves often that statistical correlation does not imply causation at all. Above all, we should watch for and rein in our natural tendency to pursue certainty at the flimsiest of excuses.
 
In a thread about science, I must ask: Do you have any evidence to support that?

If many of the findings of chemistry were not accurate, we'd notice when lots of areas of industry stopped working.

If aspects of physics were not accurate (enough) then we couldn't land probes on comets.

If the social sciences suffer more, then it would be due to the difficulty of obtaining random samples and doing randomized experiments when working with humans.

Social sciences suffer more because you often can't isolate variables very easily; it's hard to test things in 'natural' conditions; they often relate to things which are hard to quantify in a meaningful way; and they are more susceptible to ideology, bias, and poor experiment design.
 

youknowme

Whatever you want me to be.
If many of the findings of chemistry were not accurate, we'd notice when lots of areas of industry stopped working.

If aspects of physics were not accurate (enough) then we couldn't land probes on comets.



Social sciences suffer more because you often can't isolate variables very easily; it's hard to test things in 'natural' conditions; they often relate to things which are hard to quantify in a meaningful way; and they are more susceptible to ideology, bias, and poor experiment design.

That is an anecdotal argument. Also, understand that I am talking about a certain context. I don't know how much of physics and chemistry relies on statistical methods; those are two of our oldest branches of science with well-developed foundations, and they may not need to rely on probability theory as much. However, trying to pin this on social science only reveals a lack of understanding concerning regression modeling and experimental design. We are talking about biology, medicine, geology, ecology, climate science, almost every branch of natural science, and so on. This is not a problem isolated to social science, and it is important to recognize that fact if we are to address and correct these issues.
 