
Scientific articles in peer-reviewed journals

LegionOnomaMoi

Veteran Member
Premium Member
Nonsense.
Evidence-based ideas will always have a better probability of being right than non-evidence-based ideas.
And any evidence is better than no evidence. Always.

To say otherwise would not only be illogical, but reveals a blindness to reality that is disastrous.
Actually, it's taken for granted in most theories of probability, reasoning, scientific methodology and/or hypothesis testing, and so forth that having no evidence can indeed be better than having some. In fact, for certain approaches to epistemic justification or intelligent systems (particularly those using Bayesian models) this is a central area of concern. But even in your average social science intro stats book, there will almost always be a section on the problems with certain evidence. Outliers, systematic error, measurement error, and numerous other problems can replace ignorance with misinformation. Personally, I don't think that knowing little or even nothing about a topic is worse than "knowing" things that aren't true because the evidence I have access to warps reality.
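
To make the Bayesian point concrete, here is a minimal sketch (every number in it is invented for illustration): a flat prior left alone stays honestly uncertain, while a large amount of systematically biased data leaves the posterior tightly, and wrongly, confident.

# A minimal sketch (invented numbers) of how systematic measurement error turns
# "some evidence" into confident misinformation, whereas no evidence would have
# left a flat prior honestly uncertain.
import numpy as np

rng = np.random.default_rng(0)

true_rate = 0.50        # the quantity we actually want to know
recording_bias = 0.15   # systematic error: the instrument over-records events

# "No evidence": a flat Beta(1, 1) prior -- wide, but not wrong about anything.
alpha_prior, beta_prior = 1.0, 1.0

# "Some (bad) evidence": 1,000 observations, each distorted by the bias.
n = 1000
observed = rng.random(n) < (true_rate + recording_bias)
a = alpha_prior + observed.sum()
b = beta_prior + n - observed.sum()

post_mean = a / (a + b)
post_sd = (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5

print(f"true rate:      {true_rate:.2f}")
print(f"posterior mean: {post_mean:.3f} (sd {post_sd:.3f})")
# The posterior ends up tightly concentrated near 0.65: ignorance has been
# replaced with misinformation, which is exactly the worry described above.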
 

LegionOnomaMoi

Veteran Member
Premium Member
Are they 100% accurate?

How accurate are they, in percentage terms?
Peer-review is just the start. Depending on the journal in question (some aren't taken seriously) the fact that a paper, article, or study passes peer-review simply means that it becomes a piece of the literature in some topic. It's not taken as gospel by anyone in the field.
 

greentwiga

Active Member
A peer-reviewed journal has passed some sort of scrutiny. Nevertheless, sometimes papers like cold fusion get through. That is why each paper must state its methodology. Others try to duplicate the results. When a scientist successfully duplicates the results, the first paper is treated as much more reliable.

Still, others manipulate the system. Drug companies hire scientists to test their products. If the scientist gets negative results, the secrecy clause is invoked, so the paper doesn't get published. A drug company might hire 5 scientists to run the tests. If only one gets positive results, the one result gets published. As a result, we have many ineffective or worthless drugs on the market.

Therefore, most peer-reviewed papers are highly accurate, but there are a few known exceptions.
 

paarsurrey

Veteran Member
A peer-reviewed journal has passed some sort of scrutiny. Nevertheless, sometimes papers like cold fusion get through. That is why each paper must state its methodology. Others try to duplicate the results. When a scientist successfully duplicates the results, the first paper is treated as much more reliable.

Still, others manipulate the system. Drug companies hire scientists to test their products. If the scientist gets negative results, the secrecy clause is invoked, so the paper doesn't get published. A drug company might hire 5 scientists to run the tests. If only one gets positive results, the one result gets published. As a result, we have many ineffective or worthless drugs on the market.

Therefore, most peer-reviewed papers are highly accurate, but there are a few known exceptions.

So we can safely conclude from the foregoing posts that the peer-review system for scientific articles is far from being a foolproof system; its accuracy always lies somewhere between 0% and, sometimes or often, much less than 100%, contrary to what the atheists/agnostics/skeptics would like us to believe; they just hold these notions out of blind faith, in my opinion.

This is its status in the purely physical and material realm, which is the most relevant and the main realm of science; in the other realms, that is, the ethical, moral and spiritual realms, it is mostly or absolutely defective, as it was never designed for them to start with, in my opinion.
 

jarofthoughts

Empirical Curmudgeon
Actually, it's taken for granted in most theories of probability, reasoning, scientific methodology and/or hypothesis testing, and so forth that having no evidence can indeed be better than having some. In fact, for certain approaches to epistemic justification or intelligent systems (particularly those using Bayesian models) this is a central area of concern. But even in your average social science intro stats book, there will almost always be a section on the problems with certain evidence. Outliers, systematic error, measurement error, and numerous other problems can replace ignorance with misinformation. Personally, I don't think that knowing little or even nothing about a topic is worse than "knowing" things that aren't true because the evidence I have access to warps reality.

Are you familiar with the concept of the Null Hypothesis?
 

jarofthoughts

Empirical Curmudgeon
So we can safely conclude from the foregoing posts that the peer-review system for scientific articles is far from being a foolproof system; its accuracy always lies somewhere between 0% and, sometimes or often, much less than 100%, contrary to what the atheists/agnostics/skeptics would like us to believe; they just hold these notions out of blind faith, in my opinion.

Can you quote any atheists/agnostics/skeptics saying that the peer review system is fool proof or that science makes claims with 100% certainty?
If not, then I am forced to conclude that this is just another baseless straw man.

This is its status in the purely physical and material realm, which is the most relevant and the main realm of science; in the other realms, that is, the ethical, moral and spiritual realms, it is mostly or absolutely defective, as it was never designed for them to start with, in my opinion.

Ethics, and most certainly morality, are of a subjective nature, although once a goal has been set for morality, science provides the best insight as to how to achieve said goal.
Also, I would like to see your evidence for the existence of 'spiritual realms'.
 

LegionOnomaMoi

Veteran Member
Premium Member
Are you familiar with the concept of the Null Hypothesis?
Yes. It's an intro to stats notion that has unfortunately resulted in a great deal of bad research:
The problem with a great deal of social science research comes from an ever-increasing number of advanced statistical and modeling techniques (path analysis, social network analysis, structural equation modeling, Bayesian models, etc.), ever more powerful and easy-to-use software packages (SPSS, MATLAB, STATISTICA, etc.), and researchers with PhDs who took one course in multivariate statistics. In other words, they know just enough to understand that X technique has something to do with what they are interested in, but they do not understand the underlying logic and principles behind the techniques. Time was, the biggest concern for journal reviewers, instructors, and researchers themselves was collinearity. Now, there are almost more ways to get a statistically significant result with any data set from any experiment than there are ways to fail to reach the desired alpha level. Finally, too often the choice of test has more to do with trends than with applicability or validity.
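
For what it's worth, the "more ways to reach significance than to miss it" problem is easy to see by simulation. This is my own rough sketch, with invented settings: both groups are pure noise, yet trying a handful of defensible-looking analyses frequently produces at least one p-value below .05.

# A rough sketch (invented settings): pure-noise data analysed several
# "reasonable" ways, counting how often at least one analysis clears p < .05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, hits = 2000, 0

for _ in range(n_sims):
    a = rng.normal(size=40)   # "treatment" group: pure noise
    b = rng.normal(size=40)   # "control" group: pure noise
    pvals = [
        stats.ttest_ind(a, b).pvalue,                               # the planned t-test
        stats.ttest_ind(a[:20], b[:20]).pvalue,                     # an "early look" at half the data
        stats.mannwhitneyu(a, b, alternative="two-sided").pvalue,   # swap in a rank test
        stats.ttest_ind(a[np.abs(a) < 2], b[np.abs(b) < 2]).pvalue, # quietly drop "outliers"
    ]
    hits += min(pvals) < 0.05

print(f"At least one 'significant' result in {hits / n_sims:.0%} of pure-noise data sets")
# Noticeably above the nominal 5%, even though nothing is going on in the data.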

Not that this is true across the board. In fact, in addition to a number of books designed to give researchers a better understanding of the techniques they use, the last few decades have seen ever more complaints, criticisms, and warnings about the misuse of statistics within the social sciences. To give just a few examples, we have Rein Taagepera's Making Social Sciences More Scientific: The Need for Predictive Models (Oxford University Press, 2008), Peter Fayer's "Alphas, betas and skewy distributions: two ways of getting the wrong answer" (Advances in Health Science Education, vol. 16), the edited volume Measurement in the Social Sciences: Theories and Strategies (1974), Gerd Gigerenzer's "Mindless statistics" (The Journal of Socio-Economics vol. 33), Taagepera's "Adding meaning to regression" (European Political Science 10), and on and on.

The concept of the "null hypothesis" (especially as it is normally formulated) is built around preconceptions of the "normal curve" and arbitrary alpha levels. As the author of an unusually useful textbook on research methods and statistics put it, "As pointed out in hundreds of journal articles, for many applied problems the use of the normal curve can be disastrous. Even under arbitrarily small departures from normality, important discoveries are lost by assuming that observations follow a normal curve." (Fundamentals of Modern Statistical Methods: Substantially Improving Power and Accuracy by R. R. Wilcox; Springer 2010).
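
Wilcox's point is easy to reproduce. Here is an illustrative simulation of my own (sample size, effect size, and the contamination level are all invented): the same real difference between groups is detected far less often by the normal-theory t-test once a small fraction of observations comes from a heavy-tailed component.

# An illustrative simulation (invented parameters) of the point quoted above:
# a small heavy-tailed contamination sharply cuts the t-test's power to detect
# a real difference between two groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def power(contaminated, n_sims=2000, n=40, shift=0.6):
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(size=n)
        b = rng.normal(size=n) + shift            # a genuine effect
        if contaminated:
            k = n // 10                           # replace 10% of each sample
            a[:k] = rng.normal(scale=10, size=k)  # with heavy-tailed draws
            b[:k] = rng.normal(scale=10, size=k) + shift
        rejections += stats.ttest_ind(a, b).pvalue < 0.05
    return rejections / n_sims

print(f"power with exactly normal data:        {power(False):.0%}")
print(f"power with a 10% heavy-tailed mixture: {power(True):.0%}")
# Same true effect in both cases; in the second, most of the "discoveries" are lost.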

Apart from the methodology employed and the techniques employed to analyze the gathered data, there is the issue of why one accepts the null at all. Again, it is almost always because of some arbitrary alpha level. But it's not just arbitrary, it's also based on some statistical test which makes assumptions about the distribution of the gathered data. Finally, even granted that we're dealing with some normal distribution and an adequate method and measure to test whatever it is we are, there's a reason even the most elementary research methods textbooks (including intro stats) discuss Type I & II errors.

Garbage in, garbage out. If one values accuracy, knowledge, and good models, it's much better to know nothing about some topic or phenomenon than to follow some theory or view which resulted from bad methods, data, or analysis. In the former case, the only issue is ignorance. In the latter, ignorance is compounded with a misplaced faith in some analysis of some data.
 

Photonic

Ad astra!
Yes. It's an intro to stats notion that has unfortunately resulted in a great deal of bad research:


The concept of the "null hypothesis" (especially as it is normally formulated) is built around preconceptions of the "normal curve" and arbitrary alpha levels. As the author of an unusually useful textbook on research methods and statistics put it, "As pointed out in hundreds of journal articles, for many applied problems the use of the normal curve can be disastrous. Even under arbitrarily small departures from normality, important discoveries are lost by assuming that observations follow a normal curve." (Fundamentals of Modern Statistical Methods: Substantially Improving Power and Accuracy by R. R. Wilcox; Springer 2010).

Apart from the methodology employed and the techniques employed to analyze the gathered data, there is the issue of why one accepts the null at all. Again, it is almost always because of some arbitrary alpha level. But it's not just arbitrary, it's also based on some statistical test which makes assumptions about the distribution of the gathered data. Finally, even granted that we're dealing with some normal distribution and an adequate method and measure to test whatever it is we are, there's a reason even the most elementary research methods textbooks (including intro stats) discuss Type I & II errors.

Garbage in, garbage out. If one values accuracy, knowledge, and good models, it's much better to know nothing about some topic or phenomenon than to follow some theory or view which resulted from bad methods, data, or analysis. In the former case, the only issue is ignorance. In the latter, ignorance is compounded with a misplaced faith in some analysis of some data.

Hence why I take issue with some of my fellows' rabid adherence to SST (superstring theory).
 

jarofthoughts

Empirical Curmudgeon
Yes. It's an intro to stats notion that has unfortunately resulted in a great deal of bad research:


The concept of the "null hypothesis" (especially as it is normally formulated) is built around preconceptions of the "normal curve" and arbitrary alpha levels. As the author of an unusually useful textbook on research methods and statistics put it, "As pointed out in hundreds of journal articles, for many applied problems the use of the normal curve can be disastrous. Even under arbitrarily small departures from normality, important discoveries are lost by assuming that observations follow a normal curve." (Fundamentals of Modern Statistical Methods: Substantially Improving Power and Accuracy by R. R. Wilcox; Springer 2010).

Apart from the methodology employed and the techniques employed to analyze the gathered data, there is the issue of why one accepts the null at all. Again, it is almost always because of some arbitrary alpha level. But it's not just arbitrary, it's also based on some statistical test which makes assumptions about the distribution of the gathered data. Finally, even granted that we're dealing with some normal distribution and an adequate method and measure to test whatever it is we are, there's a reason even the most elementary research methods textbooks (including intro stats) discuss Type I & II errors.

Garbage in, garbage out. If one values accuracy, knowledge, and good models, it's much better to know nothing about some topic or phenomenon than to follow some theory or view which resulted from bad methods, data, or analysis. In the former case, the only issue is ignorance. In the latter, ignorance is compounded with a misplaced faith in some analysis of some data.

I'm not saying that your point is invalid, and indeed, results that have only weak evidential support are generally not accepted as true, but rather as interesting findings that require more research.
The understanding I'm referring to regarding the Null Hypothesis is the practical notion that all claims are false until shown not to be, thereby falsifying the Null Hypothesis.
Such a view is useful in both science and other topics of discussion, such as religion, because until a claim ("This medicine will cure disease X" or "There is a god") has enough evidential backing to falsify the Null Hypothesis, the claim should be considered false.
And it is in this context I'm saying that a claim with at least some evidence behind it is better than a claim with no evidence behind it, not because either claim must be false, but because one is on its way to falsifying the Null Hypothesis and has a chance of being considered correct, whereas the other should be dismissed out of hand until someone can back it up empirically.
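
As a throwaway illustration of what I mean (the baseline rate and the trial counts below are made up): the claim is treated as false until the observed results would be sufficiently improbable under that assumption.

# A minimal sketch of "consider the claim false until the data say otherwise".
# The baseline cure rate and the trial counts are invented for illustration.
from scipy.stats import binomtest

baseline_rate = 0.30      # null hypothesis: the medicine does no better than
                          # the assumed spontaneous recovery rate
cured, treated = 46, 100  # hypothetical trial outcome

result = binomtest(cured, treated, p=baseline_rate, alternative="greater")
print(f"observed cure rate {cured / treated:.0%}, p = {result.pvalue:.4f}")

if result.pvalue < 0.05:
    print("Null rejected: the evidence is strong enough to take the claim seriously.")
else:
    print("Null stands: the claim stays in the 'considered false' bin for now.")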
 

LegionOnomaMoi

Veteran Member
Premium Member
I'm not saying that your point is invalid, and indeed, results that have only weak evidential support are generally not accepted as true, but rather as interesting findings that require more research.
The understanding I'm referring to regarding the Null Hypothesis is the practical notion that all claims are false until shown not to be, thereby falsifying the Null Hypothesis.
Such a view is useful in both science and other topics of discussion, such as religion, because until a claim ("This medicine will cure disease X" or "There is a god") has enough evidential backing to falsify the Null Hypothesis, the claim should be considered false.
And it is in this context I'm saying that a claim with at least some evidence behind it is better than a claim with no evidence behind it, not because either claim must be false, but because one is on its way to falsifying the Null Hypothesis and has a chance of being considered correct, whereas the other should be dismissed out of hand until someone can back it up empirically.
I get the context, and in most cases of hypothesis testing at the very least one should be trying to show they are wrong (whatever researchers actually do in practice). However, the framework of "null hypothesis" testing is unnecessarily restrictive. For example, if I'm trying to locate regions of the brain involved in the processing of "action words"/verbs but not "things/objects/nouns", I don't really have a clearly defined hypothesis or null hypothesis. I'm assuming that the brain is responsible for processing, sure. But I may not have much in the way of predictions about which regions are used when it comes to verbs. Also, it may be that I can't find distinct regions used for verbs compared to nouns, but this isn't a "null hypothesis".

Then there is the issue of evidence vs. hypothesis. Let's say I believe that processing even abstract concepts relies on sensorimotor regions of the brain (embodied cognition). So I run a bunch of subjects through some neuroimaging experimental paradigm involving words, pictures, or both. I notice that when subjects hear words like "mountain", "shark", "movie", etc., neural regions in the visual cortex "light up", and when subjects hear words like "hammer", "fork", "screwdriver", "baseball", etc., as well as words like "walking", "waving", "kicking", "pointing", etc., not only do I find activity in the motor cortex, but I find that words associated with the activity of particular body parts (e.g., leg or arm) differentially activate the motor cortex somatotopically. Basically, regions which are more involved with leg movements are far more active when the subject hears words like "kicking" or "walking" (or sees images of these actions) relative not only to other action words but also to abstract nouns and pseudowords.

People have been doing this kind of research basically since neuroimaging became possible. The problem is what to make of the data. A common interpretation is that the reason sensorimotor regions are significantly differentially activated when processing words is that even abstract words rely on more basic sensorimotor experience, because humans learn concepts through their motor and sensory experience in the world. Therefore, the meaning of words is distributed not only across regions associated with higher cognitive processing and with memory, but also across sensorimotor regions.

Another interpretation, however, is that the observed activation has nothing to do with semantic processing (i.e., the meaning of the words). There are various explanations for the observed activation in sensorimotor regions, some based on other experimental evidence, but that's not really important here.

The important thing is that the problem isn't a matter of falsification or null hypotheses. It has to do with the adequacy of the experimental paradigms, methods, and instruments, but not usually the data analyses (i.e., statistical techniques). In principle, embodied cognition is falsifiable in the same way classical models of abstract semantic processing are. However, as the disagreement is not about the data (and often not even about the experimental paradigm), but is mostly over how the methods used were flawed and/or whether the interpretation of the results was problematic, falsification is pragmatically impossible.

To make it even simpler, we can have two teams carry out identical experiments and get (for all practical purposes) identical results, and have totally different findings because the results are interpreted according to different theoretical frameworks. This makes null hypothesis testing pretty useless, because reaching the desired alpha level is meaningless without the interpretation of exactly what is significant.

And that's without getting into the unbelievable number of ways to misuse statistical models (due to a lack of understanding) and get a result which allows the rejection of the null.

So why is it that supporting data is better than no data, if in reality the "supporting data" is simply misleading? If those involved in neuroscience, cognitive neuropsychology, and cognitive science follow the embodied cognition model, but in reality all the evidence for it has resulted from misinterpreting data, poor experimental methods, and similar faults, and their model is wrong, then why is this better than not having all those data? What we have instead is an ever growing theoretical framework built on nothing but air, which will continue to skew perceptions of research results, inspire poorly designed experiments, and in general not only increase ignorance, but mask it with the illusion of increased knowledge.

That's just one example. There are any number of ways to get a similar result. Online surveys for research are becoming increasingly popular, but often enough researchers do not adequately ensure that 1) the same person doesn't submit multiple results, 2) the information about the person, such as age, gender, occupation, etc., is accurate, and 3) the resulting surveys represent an adequate sample of the desired population.
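
To make points 1) through 3) concrete, here is an entirely hypothetical sketch of the minimum screening such surveys call for; the file name, the column names, and the "census" proportions are all invented for the example.

# A hypothetical sketch of the screening that points 1-3 above call for.
# The file, the column names, and the census proportions are all invented.
import pandas as pd
from scipy.stats import chisquare

responses = pd.read_csv("survey_responses.csv")   # hypothetical survey export

# 1) Same person submitting multiple results: keep one row per respondent key.
deduped = responses.drop_duplicates(subset=["ip_hash", "browser_fingerprint"])

# 2) Implausible self-reports: drop impossible ages instead of trusting them.
screened = deduped[(deduped["age"] >= 18) & (deduped["age"] <= 99)]

# 3) Representativeness: compare the achieved age mix with a reference
#    distribution (made-up census shares) using a goodness-of-fit test.
census_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
labels = list(census_share)
counts = pd.cut(screened["age"], bins=[17, 34, 54, 120], labels=labels).value_counts().reindex(labels)
expected = [census_share[g] * counts.sum() for g in labels]

stat, p = chisquare(counts, f_exp=expected)
print(f"kept {len(screened)} of {len(responses)} rows; representativeness p = {p:.3f}")
# A very small p would mean the sample's age mix differs markedly from the
# population's -- the sort of problem the studies discussed next ran into.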

Again, this is not a thought experiment: published studies have become the center of rather heated debates because of the above issues with internet survey research. And once again, why is "some data" supporting a claim better here than none, when the data is in fact misleading? Two examples which spring to mind concern climate science: one which sought to determine the opinion of climate scientists, and another which sought to understand the skeptical online public (i.e., those who frequent blogs or sites which are skeptical of mainstream climate science). In the first example, the researchers found a large minority of scientists who disagreed with the "mainstream" versions. And a lot of criticism was directed at their methods, which (so it was claimed) led to inadequate data which supported their claims, but only because it was inadequate. In the second example, the researchers surveyed climate blog users and found that believing in conspiracy theories is a predictor of climate skepticism while "acceptance of science" was correlated with the climate consensus. Here too, the researchers were criticized for inadequate sampling and methods, such that their results were (supposedly) spurious.

Both research groups had evidence for their claims which supposedly only supported their claims because it was inadequate in some or multiple ways. If true, then here once more we find, instead of just ignorance, ignorance masked by false knowledge.

I don't see how this is preferable. Working within a theoretical framework built upon insufficient or inadequate evidence means propagating false beliefs, models, and theories. Often enough these frameworks are used in debates of public policy, in constructing laws, for informing public opinion, and even for solutions to social, individual, environmental, and other problems. If we have evidence to support a particular framework, but for some reason (misinterpretation of evidence, poor methods, inadequate data, etc.) the framework is wrong, then this error may be felt throughout society. Just look at the recovered memory movement (or, actually, pretty much the entire history of mental health treatment).
 

Caladan

Agnostic Pantheist
Are they 100% accurate?

How accurate are they, in percentage terms?
You are asking the wrong question. Many articles offer a thesis, a line of thought, or a direction of research. Furthermore, there are countless fields: exact sciences, soft sciences, etc. Many deal with studies meant to identify patterns; they do not even expect to provide 100% absolute accuracy or a final answer that puts a scholarly discussion to rest once and for all.
 

Shadow Wolf

Certified People sTabber
Nothing is 100%. But the purpose of a peer-reviewed article is so that others can offer their own insight, and attempt to replicate what the article is talking about. It's a way to help weed out what is good from what is bad.
 

LegionOnomaMoi

Veteran Member
Premium Member
Nothing is 100%. But the purpose of a peer-reviewed article is so that others can offer their own insight, and attempt to replicate what the article is talking about. It's a way to help weed out what is good from what is bad.
I've never heard of a reviewer doing this. That's what other studies are for. What the reviewer does depends on the field and the journal's editorial board. For example, if one wishes to publish a paper on the use of epexegetic infinitives in Herodotus, the journal will probably contact those who have published on Herodotus multiple times (especially with respect to his use of language, as opposed to historical method or something less related), some big names in Greek philology and perhaps Greek linguistics too, and perhaps some others who work in classics. Their job is, ostensibly, to make sure that the work has no glaring omissions or errors by ensuring the author both adequately addressed previous work and didn't go well beyond the evidence or data in her or his paper.

Mathematics papers work much the same (especially in that like classics or similar fields, there is usually only one or two authors), except that it is usually much easier to know who can check the work, and much easier for those individuals to check it.

The sciences, especially the social sciences, are a whole different ballgame. Like the humanities, there are many journals and reviewers with a very clear stance on many issues, and journals tend to reflect this. If one writes a paper on, say, the validity of the sociocognitive model of mental health (either a study or a review of studies), one doesn't then send the paper to any journal with a biomedical stance on mental disorders (which includes most psychiatric journals). It's a great way not to get your paper published. Additionally, the ever-increasing diversity within any given field has resulted in some pretty tightly-knit communities in which the same small group of people all review each others' papers, because the topic is so specialised only a handful of people in the world are qualified to talk about it.

In none of these cases, however, is anything replicated. Most journals don't even bother to ask for things like data and code which would make replication possible, and when they do it isn't for review but for other researchers. This has been something of a problem, and resulted in a number of scandals (Marc Hauser from Harvard, Diederik Stapel from Tilburg University, Dirk Smeesters from Erasmus University Rotterdam, not to mention the infamous Hockey-stick papers, just to name a few).

Outright fraud isn't as big of a problem as the complexity behind many of the analyses performed for studies, which often can't be checked by reviewers because they don't have the code (MATLAB, R, whatever) or the data. And as humans can't do the math behind most multivariate techniques by hand, there is no way to check the results. All a reviewer can do is basically guess whether or not the analysis, model(s), and their results were sound, which amounts to matching a general description of the data and the methods with the statistical techniques employed. Hopefully, this is enough to determine that the appropriate statistical models were used, and used correctly.

Which is why peer-review is just the beginning. The issue is that in some fields, it becomes increasingly easier to get bad studies past review. But in all of them, the idea (or hope) is that the review process ensures some work will become part of academic debate. The greater reputation the journal has, the more publication will ensure others pay attention to it. That's when (ideally) others try to replicate the findings, or critique the methods, interpretations, etc., in the study.

It's less about weeding what's good from what's bad, and more about 1) making sure that the review/paper/study didn't miss anything the reviewers find important and 2) giving people in the field places to go apart from the equivalent of wikipedia. There's a lot of bad peer-reviewed research. There are peer-reviewed journals that almost no one in the field pays any attention to or cites because it is pretty much assumed that everything published is bogus. And there are plenty of journals which will almost never publish certain types of studies or papers because they represent some view within a theoretical framework that the journal is opposed to.
 

LegionOnomaMoi

Veteran Member
Premium Member
And I've never heard of pygmies eating hamburgers. So what does all this have to do with the price of sardines?
The pygmies eat sardines. Demand drives up the price. There was a recent study about it.

And that's what Shadow Wolf is talking about: OTHERS. :facepalm:
The point is that these others are not the reviewers. I don't think Shadow Wolf meant that they were, but it is a very important distinction.

More important, though, is that often enough "peer-review" simply ensures that the paper or study subscribes to the theoretical framework, models, approach, etc., that the journal's editorial board does. It says little about the quality of the work. Again, this is less true in areas which do not attempt analyses of highly complex systems or phenomena and/or in areas where subspecialties are not so isolated, but it remains an issue.

In particular, there are no "others" in any meaningful way too much of the time. If you subscribe to X model or framework, you publish in Y set of journals, series, etc. If you subscribe to B, however, then you use set C. Which means that the work one group does may be published and accepted and replicated by those who follow that framework, while the rest will either ignore that work or simply treat it as flawed from the outset.

If you look at, for example, many journals in psychology, sociology, linguistics, and related fields, there are a number of areas in which researchers disagree on fundamental issues. I discussed one a few posts back (embodied cognition). There are numerous others. The end result is diverging bodies of published findings, each continually accumulating supporting evidence despite contrary findings from the other camp. I could right now cite an enormous number of academic works which passed peer-review within the past decade and which support the idea that there is a "language faculty" or "organ" or "module" in the brain. I can cite an enormous number which come to the opposite conclusion. And often enough, that's what peer-review ends up being: researchers citing one group of other researchers who share their basic approach and framework, and others in the same field doing the same for a different approach and contradictory results.

What, then, does peer-review actually do?
 

paarsurrey

Veteran Member
I've never heard of a reviewer doing this. That's what other studies are for. What the reviewer does depends on the field and the journal's editorial board. For example, if one wishes to publish a paper on the use of epexegetic infinitives in Herodotus, the journal will probably contact those who have published on Herodotus multiple times (especially with respect to his use of language, as opposed to historical method or something less related), some big names in Greek philology and perhaps Greek linguistics too, and perhaps some others who work in classics. Their job is, ostensibly, to make sure that the work has no glaring omissions or errors by ensuring the author both adequately addressed previous work and didn't go well beyond the evidence or data in her or his paper.

Mathematics papers work much the same (especially in that like classics or similar fields, there is usually only one or two authors), except that it is usually much easier to know who can check the work, and much easier for those individuals to check it.

The sciences, especially the social sciences, are a whole different ballgame. Like the humanities, there are many journals and reviewers with a very clear stance on many issues, and journals tend to reflect this. If one writes a paper on, say, the validity of the sociocognitive model of mental health (either a study or a review of studies), one doesn't then send the paper to any journal with a biomedical stance on mental disorders (which includes most psychiatric journals). It's a great way not to get your paper published. Additionally, the ever-increasing diversity within any given field has resulted in some pretty tightly-knit communities in which the same small group of people all review each others' papers, because the topic is so specialised only a handful of people in the world are qualified to talk about it.

In none of these cases, however, is anything replicated. Most journals don't even bother to ask for things like data and code which would make replication possible, and when they do it isn't for review but for other researchers. This has been something of a problem, and resulted in a number of scandals (Marc Hauser from Harvard, Diederik Stapel from Tilburg University, Dirk Smeesters from Erasmus University Rotterdam, not to mention the infamous Hockey-stick papers, just to name a few).

Outright fraud isn't as big of a problem as the complexity behind many of the analyses performed for studies, which often can't be checked by reviewers because they don't have the code (MATLAB, R, whatever) or the data. And as humans can't do the math behind most multivariate techniques by hand, there is no way to check the results. All a reviewer can do is basically guess whether or not the analysis, model(s), and their results were sound, which amounts to matching a general description of the data and the methods with the statistical techniques employed. Hopefully, this is enough to determine that the appropriate statistical models were used, and used correctly.

Which is why peer-review is just the beginning. The issue is that in some fields, it becomes increasingly easier to get bad studies past review. But in all of them, the idea (or hope) is that the review process ensures some work will become part of academic debate. The greater reputation the journal has, the more publication will ensure others pay attention to it. That's when (ideally) others try to replicate the findings, or critique the methods, interpretations, etc., in the study.

It's less about weeding what's good from what's bad, and more about 1) making sure that the review/paper/study didn't miss anything the reviewers find important and 2) giving people in the field places to go apart from the equivalent of wikipedia. There's a lot of bad peer-reviewed research. There are peer-reviewed journals that almost no one in the field pays any attention to or cites because it is pretty much assumed that everything published is bogus. And there are plenty of journals which will almost never publish certain types of studies or papers because they represent some view within a theoretical framework that the journal is opposed to.

Very informative post; thanks
 

paarsurrey

Veteran Member
So peer review is just an interpretation of the natural phenomena that exist in nature irrespective of whether science exists or not. Science has no role in creating anything that exists naturally.

Science's interpretation of nature may be correct within a range from 0% to, sometimes/often/always, much less than 100%; it is a constant endeavour to improve its interpretation towards perfection without ever attaining it; it is a tool to understand and benefit from nature in daily life, physically.

It is just like a person who reads the revealed Word and interprets it, to understand it correctly and to benefit from the Word in daily life ethically, morally or spiritually.
 

jarofthoughts

Empirical Curmudgeon
So peer review is just an interpretation of the natural phenomena that exist in nature irrespective of whether science exists or not. Science has no role in creating anything that exists naturally.

Science's interpretation of nature may be correct within a range from 0% to, sometimes/often/always, much less than 100%; it is a constant endeavour to improve its interpretation towards perfection without ever attaining it; it is a tool to understand and benefit from nature in daily life, physically.

It is just like a person who reads the revealed Word and interprets it, to understand it correctly and to benefit from the Word in daily life ethically, morally or spiritually.

Except that there is no reason to think that the 'Word revealed' is in any way revealed.
In fact, there is little reason to think that it is anything but a collection of made up stuff with a few semi-historical anecdotes thrown in.
The reason science is such a powerful method is exactly the fact that it continually tests our understanding against the best available evidence, and should our understanding collide with the evidence, we have to change our understanding.
If you doubt the effectiveness of the scientific method I suggest you take a look at the society around you and see how much of it is there as a result of science (hint: almost all of it).
 

ImmortalFlame

Woke gremlin
So peer review is just an interpretation of the natural phenomena that exist in nature irrespective of whether science exists or not. Science has no role in creating anything that exists naturally.
Correct. Science's job is not to "create" nature. It observes and explains nature, then puts that understanding to practical use.

Science's interpretation of nature may be correct within a range from 0% to, sometimes/often/always, much less than 100%; it is a constant endeavour to improve its interpretation towards perfection without ever attaining it; it is a tool to understand and benefit from nature in daily life, physically.
Correct. Science is an ever-developing field of human endeavour that will likely never cease.

It is just like a person who reads the revealed Word and interprets it, to understand it correctly and to benefit from the Word in daily life ethically, morally or spiritually.
Incorrect. We have reasons to trust science since it produces tangible results. If a scientific hypothesis cannot be demonstrated, then it is disregarded. This is not so in the world of religious scripture, where you are (often) expected to take their claims at face value and are not expected to investigate them, yet must believe they are true. That is the exact opposite of science.
 

paarsurrey

Veteran Member
Except that there is no reason to think that the 'Word revealed' is in any way revealed.
In fact, there is little reason to think that it is anything but a collection of made up stuff with a few semi-historical anecdotes thrown in.
The reason science is such a powerful method is exactly the fact that it continually tests our understanding against the best available evidence, and should our understanding collide with the evidence, we have to change our understanding.
If you doubt the effectiveness of the scientific method I suggest you take a look at the society around you and see how much of it is there as a result of science (hint: almost all of it).

None of the people around us has been created by science; nor has any animal, tree or inanimate thing been created by science; there is no such claim made, or reasons given, by science in this connection, in my opinion.
 