
How much can we trust science?

I don't believe science is harmful or beneficial in that regard, because that is not science. It's the use of what science has revealed in other applications, such as the military. Discovering atomic energy is nothing more than part of learning how atoms work, and atoms worked the same way before and after we made those discoveries. Science isn't magic; it explains magic. Science explains electricity. The electric chair is not inherently a part of that, or of science. That is people taking the findings of science and applying them to destructive and lethal ends.

Fair enough. Can't say I agree with treating science as anything other than a human activity with real-world consequences, but we can agree to disagree on that.
 

LegionOnomaMoi

What works in particle physics does not necessarily work for everything else.
It doesn't really work in particle physics either. It's just that the inherent problems with significance testing (plus some additional ones) were more readily apparent in HEP, and as an unfortunate consequence the experimentalists and consulting statisticians changed little of what they had already borrowed from the social and behavioral sciences (i.e., p-values and the atheoretical, horribly mangled mixture of Fisherian significance testing with contradictory and incompatible notions from Neyman-Pearson hypothesis testing). In the main, they did the minimum required to combat the issues now facing standard statistical methods applied to so-called Big Data (e.g., the issues associated with large n), the look-elsewhere effect, and the related issue of multiple testing: they simply lowered the significance level. Thankfully, because almost the entirety of experimental methods and findings in HEP are rooted in QFT and the standard model (the latter, or some variant, often serving as a null), experimentalists mostly have to figure out not what to discover or even how, but how to ensure that when they inevitably find what is predicted they can do so without egg on a whole lot of faces. What is lost, unfortunately, is a great deal of time and money, as well as the possibility of new discoveries, as a result of particle physics adopting bad methods from other sciences that were criticized even there by their very founders.
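
To make the multiple-testing/look-elsewhere point concrete, here is a rough simulated sketch (illustrative numbers only, nothing tied to any actual HEP analysis): when an analysis gets to look in many independent places and there is no real effect anywhere, a p < 0.05 "discovery" is all but guaranteed, whereas a 5-sigma threshold mostly suppresses the false alarms, which is essentially all that lowering the cutoff buys you.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_analyses = 10_000   # simulated analyses, each searching many independent places
n_channels = 100      # independent tests per analysis; no real effect anywhere

# z-scores of pure-noise tests and their two-sided p-values under the null
z = rng.standard_normal((n_analyses, n_channels))
p = 2 * stats.norm.sf(np.abs(z))

# How often does at least one test in an analysis clear a given threshold?
five_sigma = 2 * stats.norm.sf(5)   # two-sided "5 sigma" p-value, ~5.7e-7
for alpha, label in [(0.05, "p < 0.05"), (five_sigma, "5 sigma")]:
    frac = (p < alpha).any(axis=1).mean()
    print(f"{label:>8}: at least one 'discovery' in {frac:.2%} of pure-noise analyses")
```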

And for some distributions being considered if you used a .0005 cutoff you'd start racking up type two errors instead of type one errors
This is more of the unfortunate, confused approach that has been part of standard indoctrination for decades now: combining the Fisherian significance-cutoff approach with the conceptually, formally/mathematically, and philosophically incompatible Neyman-Pearson hypothesis-testing approach. Type I and type II errors are only statistically meaningful in the pre-data N-P approach, in which the parameter space in question is partitioned into mutually exclusive regions so that one can apply methods to optimize the probability of making a decision error (which is why Wald later reformulated their approach more formally as decision theory). The trade-off makes sense from the pre-data perspective, because one can minimize the probability of long-run procedural errors in the frequentist interpretation inherent to the N-P approach. It doesn't make sense in terms of a Fisherian p-value interpreted as the probability of obtaining results at least as extreme as the data one has (a post-data perspective), under the assumption of a null and no alternative.
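
As a toy illustration of that pre-data trade-off (made-up numbers; a simple one-sided z-test with known variance and a fixed alternative): lowering alpha mechanically raises beta, the type II error rate, for any given alternative and sample size.

```python
import numpy as np
from scipy import stats

# Toy pre-data Neyman-Pearson setup (illustrative numbers only):
# one-sided z-test of H0: mu = 0 against a fixed alternative H1: mu = 0.3,
# known sigma = 1, n = 50. Lowering alpha raises beta, the type II error rate.
mu1, sigma, n = 0.3, 1.0, 50
se = sigma / np.sqrt(n)

for alpha in (0.05, 0.005, 0.0005):
    z_crit = stats.norm.isf(alpha)            # rejection cutoff under H0
    beta = stats.norm.cdf(z_crit - mu1 / se)  # P(fail to reject | H1 is true)
    print(f"alpha = {alpha:<7} -> beta = {beta:.3f}")
```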
There are methods for trying to find an optimal alpha level if you must use one, but really you don't need a significance level at all. You can simply interpret the p-value for what it is, a conditional probability.
This is completely false and absolutely, horribly mistaken. It is a common but no less destructive misconception. The p-value is ABSOLUTELY NOT a conditional probability. This is why it is important to avoid defining it (in words) as something like "the probability of obtaining data X, given that the null is true." The "given" suggests a conditional probability, but in the frequentist framework the null hypothesis is absolutely NOT a random variable, so it is meaningless to speak of p(data|null hypothesis is true) as a conditional probability, despite unfortunately frequent abuses of notation; the p-value is computed under the assumption that the null is true, not conditional on the event that it is. For Bayesians, hypotheses and parameters can carry probabilities, but by adopting this approach one has to involve priors and compute posteriors, and the entire approach is conceptually, philosophically, computationally, and mathematically quite different.
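
For a rough sense of how different the two computations are, here is an illustrative sketch only (the 50/50 prior odds and the unit-variance prior on the alternative are arbitrary choices, not anything canonical): the same data can sit exactly at p = 0.05 while the Bayesian posterior still heavily favours the point null, the Jeffreys-Lindley effect.

```python
import numpy as np
from scipy import stats

# Illustrative contrast only; the 50/50 prior and the N(0, 1) prior on mu under
# the alternative are arbitrary choices, not anything canonical.
n, sigma, tau = 10_000, 1.0, 1.0
xbar = 1.96 * sigma / np.sqrt(n)         # a sample mean sitting exactly at z = 1.96

z = xbar / (sigma / np.sqrt(n))
p_value = 2 * stats.norm.sf(abs(z))      # the usual two-sided tail area under H0

# Marginal densities of the observed mean under H0 (point null) and under H1
m0 = stats.norm.pdf(xbar, loc=0.0, scale=sigma / np.sqrt(n))
m1 = stats.norm.pdf(xbar, loc=0.0, scale=np.sqrt(tau**2 + sigma**2 / n))
posterior_h0 = m0 / (m0 + m1)            # posterior P(H0 | data) with 50/50 prior odds

print(f"p-value = {p_value:.3f},  posterior P(H0 | data) = {posterior_h0:.3f}")
```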
 

LegionOnomaMoi

A smaller p value means that your conclusions are more likely to be valid. But a p<.05 means that *way* too many studies will draw false conclusions.
It doesn't. It usually means that you've computed something misleading and largely meaningless. P-values can easily and readily be made as significant as desired by, e.g., enlarging the sample size or simply choosing a different model, as all they can possibly ever do is relate to the probability that some particular results (or more extreme ones) would be found under the assumption that a particular probability distribution has a specified parameter. Thus, all that follows immediately from extreme findings is that (at best) either the parameter or the model/distribution is unlikely to be the random data-generating mechanism behind the results (something we almost always know to begin with). Things go downhill from there, which is why the current approach to significance testing has been wholly and soundly criticized since before it existed (the current approach being an illogical, incoherent mixture of two incompatible founding camps of classical statistical methods).
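
A quick sketch of the sample-size point (simulated data with a deliberately negligible true effect of 0.01 standard deviations; the numbers are illustrative only): the p-value can be pushed as low as you like simply by collecting more data, without the effect becoming any less trivial.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Illustrative only: one-sample t-tests against mu = 0 when the true mean is a
# practically negligible 0.01 standard deviations. Nothing about the effect
# changes; only n does.
for n in (100, 10_000, 1_000_000, 10_000_000):
    x = rng.normal(loc=0.01, scale=1.0, size=n)
    result = stats.ttest_1samp(x, popmean=0.0)
    print(f"n = {n:>10,}   p = {result.pvalue:.2e}")
```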

I recommend the following as a tiny start on almost a century of criticisms on the single worst thing to happen to the sciences since the dawn of the 20th century or probably earlier:
Gigerenzer, G. (2004). Mindless statistics. The Journal of Socio-Economics, 33(5), 587-606.
Kennedy-Shaffer, L. (2019). Before p < 0.05 to beyond p < 0.05: Using history to contextualize p-values and significance testing. The American Statistician, 73(sup1), 82-90.
McShane, B. B., Gal, D., Gelman, A., Robert, C., & Tackett, J. L. (2019). Abandon statistical significance. The American Statistician, 73(sup1), 235-245.
See also the ASA's official statement. The literature on this goes back at least as far as Fisher's criticisms of Neyman-Pearson, if not earlier (i.e., the '30s). That was before the illogical mess was introduced into standard indoctrination (and the historical context carefully removed in order to give the appearance of absolute authority rather than the truly controversial reality). Requiring lower p-values will just ensure more p-hacking or things like it (both unintentional and intentional) whilst hiding worthwhile results because they don't meet some meaningless, misunderstood threshold.
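
And a sketch of how easily "significance" turns up under flexible analysis, even with pure noise and no dishonesty anywhere (illustrative numbers: 20 candidate outcomes, one arbitrary subgroup split, report the smallest p):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Pure noise: 40 subjects, 20 candidate outcome measures, one arbitrary subgroup
# split. Analyze flexibly, keep the smallest p ("the garden of forking paths").
n_subjects, n_outcomes = 40, 20
data = rng.standard_normal((n_subjects, n_outcomes))
group = rng.integers(0, 2, size=n_subjects).astype(bool)

best_p = min(
    stats.ttest_ind(data[group, j], data[~group, j]).pvalue
    for j in range(n_outcomes)
)
print(f"smallest of {n_outcomes} null comparisons: p = {best_p:.3f}")
```

Run it a few times with different seeds and the smallest of the twenty null comparisons dips below 0.05 more often than not.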
 

paarsurrey

If you are unaware, science is considered to be in a state of crisis. Replication is one of the foundations of science, but scientists are having problems reproducing results.

Here is an article about it, but you can also Google the replication crisis if you want more information.

In this survey of 1,500 scientists, they found:

1,500 scientists lift the lid on reproducibility
There have been a number of studies on this replication crisis, so feel free to investigate more if you like.

Given the current state of the replication crisis, how much can we trust science?

We can trust Science to the extent that it accurately tallies with the Nature that has been created by G-d, I understand. Right, please?

Regards
 