Religious Education Forum  

Religious Education Forum / Religious Topics / Religious Debates / Science and Religion
  #21
Old 09-13-2012, 11:44 PM
LegionOnomaMoi
Religion: Agnostic

Quote:
Originally Posted by jarofthoughts View Post
Nonsense.
Evidence based ideas will always have a better probability of being right than non-evidence based ideas.
And any evidence is better than no evidence. Always.

To say otherwise would not only be illogical, but reveals a blindness to reality that is disastrous.
Actually, it's taken for granted in most theories of probability, reasoning, scientific methodology and/or hypothesis testing, and so forth that no evidence can indeed be better than some. In fact, for certain approaches to epistemic justification or intelligent systems (particularly those using Bayesian models) this is a central area of concern. But even your average social science intro stats book will almost always have a section on the problems with certain evidence. Outliers, systematic error, measurement error, and numerous other problems can replace ignorance with misinformation. Personally, I don't think that knowing little or even nothing about a topic is worse than "knowing" things that aren't true because the evidence I have access to warps reality.
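The point about evidence warping reality can be made concrete with a tiny sketch (all numbers are invented for illustration, standard library only): a single mis-keyed measurement is enough to turn an unremarkable sample into apparently dramatic "evidence".

```python
# Hypothetical measurements of some quantity whose true value is ~5.
clean = [5.1, 4.8, 5.3, 4.9, 5.0, 5.2, 4.7, 5.1]

# The same data with one recording error: 5.0 entered as 50.0.
contaminated = clean + [50.0]

def mean(xs):
    return sum(xs) / len(xs)

m_clean = mean(clean)         # ~5.01: unremarkable, close to the truth
m_bad = mean(contaminated)    # ~10.0: the estimate has doubled
```

With no data at all, one is merely ignorant of the true value; with the contaminated data, one "knows" a value that is off by a factor of two, which is arguably worse.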
__________________
I would welcome that insanity
That looks upon humanity
And earth and its banality
Finding hope despite reality.

-Thanks to all for making my experience here such a valuable one.
  #22
Old 09-13-2012, 11:46 PM
LegionOnomaMoi

Quote:
Originally Posted by paarsurrey View Post
Are they 100% accurate?

How accurate, in percentage terms?
Peer review is just the start. Depending on the journal in question (some aren't taken seriously), the fact that a paper, article, or study passes peer review simply means that it becomes a piece of the literature on some topic. It's not taken as gospel by anyone in the field.
  #23
Old 09-14-2012, 08:26 AM
greentwiga

A paper in a peer-reviewed journal has passed some sort of scrutiny. Nevertheless, sometimes papers like the cold fusion one get through. That is why each paper must state its methodology. Others then try to duplicate the results. When another scientist successfully duplicates the results, the first paper is treated as much more reliable.

Still, others manipulate the system. Drug companies hire scientists to test their products. If the scientist gets negative results, the secrecy clause is invoked, so the paper doesn't get published. A drug company might hire 5 scientists to run the tests. If only one gets positive results, the one result gets published. As a result, we have many ineffective or worthless drugs on the market.

Therefore, most peer-reviewed papers are highly accurate, but there are a few known exceptions.
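The five-trials scenario above is easy to quantify with a short simulation (a hedged illustration; the choice of 5 trials and the 5% significance level are assumptions, not figures from any actual study):

```python
import random

def false_positive_somewhere(n_trials, alpha, n_sims=20000, seed=0):
    """Estimate the chance that at least one of n_trials independent
    tests of a truly ineffective drug reaches significance at `alpha`.
    Under the null, each trial yields p < alpha with probability alpha."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        if any(rng.random() < alpha for _ in range(n_trials)):
            hits += 1
    return hits / n_sims

# Analytically: 1 - (1 - 0.05)**5 ≈ 0.23, i.e. roughly a 1-in-4 chance
# that a useless drug produces at least one publishable "positive" trial.
rate = false_positive_somewhere(n_trials=5, alpha=0.05)
```

So even before any deliberate manipulation, running five trials and publishing only the positive one converts a nominal 5% error rate into roughly a 23% one.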
  #24
Old 09-14-2012, 01:06 PM
paarsurrey

Quote:
Originally Posted by greentwiga View Post
A paper in a peer-reviewed journal has passed some sort of scrutiny. Nevertheless, sometimes papers like the cold fusion one get through. That is why each paper must state its methodology. Others then try to duplicate the results. When another scientist successfully duplicates the results, the first paper is treated as much more reliable.

Still, others manipulate the system. Drug companies hire scientists to test their products. If the scientist gets negative results, the secrecy clause is invoked, so the paper doesn't get published. A drug company might hire 5 scientists to run the tests. If only one gets positive results, the one result gets published. As a result, we have many ineffective or worthless drugs on the market.

Therefore, most peer-reviewed papers are highly accurate, but there are a few known exceptions.
So, we can safely conclude from the foregoing posts that the peer review system for scientific articles is far from being a foolproof system; its accuracy always falls somewhere between 0% and, sometimes or often, much less than 100%; contrary to what the atheists/agnostics/skeptics would like us to believe; they just hold these notions out of blind faith, in my opinion.

This is the status in the purely physical and material realm, which is the most relevant and the main realm of science; in the other realms, that is, the ethical, moral, and spiritual realms, it is mostly or absolutely defective, as it was never designed for them to start with, in my opinion.
  #25
Old 09-14-2012, 04:57 PM
jarofthoughts
Religion: None/Empiricist

Quote:
Originally Posted by LegionOnomaMoi View Post
Actually, it's taken for granted in most theories of probability, reasoning, scientific methodology and/or hypothesis testing, and so forth that no evidence can indeed be better than some. In fact, for certain approaches to epistemic justification or intelligent systems (particularly those using Bayesian models) this is a central area of concern. But even your average social science intro stats book will almost always have a section on the problems with certain evidence. Outliers, systematic error, measurement error, and numerous other problems can replace ignorance with misinformation. Personally, I don't think that knowing little or even nothing about a topic is worse than "knowing" things that aren't true because the evidence I have access to warps reality.
Are you familiar with the concept of the Null Hypothesis?
__________________
"I'd much rather be a rising ape than a falling angel."
- Terry Pratchett

http://jarofthoughts.livejournal.com/
  #26
Old 09-14-2012, 05:00 PM
jarofthoughts

Quote:
Originally Posted by paarsurrey View Post
So, we can safely conclude from the foregoing posts that the peer review system for scientific articles is far from being a foolproof system; its accuracy always falls somewhere between 0% and, sometimes or often, much less than 100%; contrary to what the atheists/agnostics/skeptics would like us to believe; they just hold these notions out of blind faith, in my opinion.
Can you quote any atheists/agnostics/skeptics saying that the peer review system is foolproof or that science makes claims with 100% certainty?
If not, then I am forced to conclude that this is just another baseless straw man.

Quote:
Originally Posted by paarsurrey View Post
This is the status in the purely physical and material realm, which is the most relevant and the main realm of science; in the other realms, that is, the ethical, moral, and spiritual realms, it is mostly or absolutely defective, as it was never designed for them to start with, in my opinion.
Ethics, and most certainly morality, are of a subjective nature, although once a goal has been set for morality, science provides the best insight as to how to achieve said goal.
Also, I would like to see your evidence for the existence of 'spiritual realms'.
  #27
Old 09-14-2012, 11:07 PM
LegionOnomaMoi

Quote:
Originally Posted by jarofthoughts View Post
Are you familiar with the concept of the Null Hypothesis?
Yes. It's an intro to stats notion that has unfortunately resulted in a great deal of bad research:
Quote:
Originally Posted by LegionOnomaMoi View Post
The problem with a great deal of social science research comes from an ever-increasing number of advanced statistical and modeling techniques (path analysis, social network analysis, structural equation modeling, Bayesian models, etc.), ever more powerful and easy-to-use software packages (SPSS, MATLAB, STATISTICA, etc.), and researchers with PhDs who took one course in multivariate statistics. In other words, they know just enough to understand that X technique has something to do with what they are interested in, but they do not understand the underlying logic and principles behind the techniques. Time was, the biggest concern for journal reviewers, instructors, and researchers themselves was collinearity. Now, there are almost more ways to get a statistically significant result with any data set from any experiment than there are ways to fail to reach the desired alpha level. Finally, too often the choice of test has more to do with trends than applicability or validity.

Not that this is true across the board. In fact, in addition to a number of books designed to give researchers a better understanding of the techniques they use, the last few decades have seen ever more complaints, criticisms, and warnings about the misuse of statistics within the social sciences. To give just a few examples, we have Rein Taagepera's Making Social Sciences More Scientific: The Need for Predictive Models (Oxford University Press, 2008), Peter Fayers' "Alphas, betas and skewy distributions: two ways of getting the wrong answer" (Advances in Health Science Education, vol. 16), the edited volume Measurement in the Social Sciences: Theories and Strategies (1974), Gerd Gigerenzer's "Mindless statistics" (The Journal of Socio-Economics vol. 33), Taagepera's "Adding meaning to regression" (European Political Science 10), and on and on.
The concept of the "null hypothesis" (especially as it is normally formulated) is built around preconceptions of the "normal curve" and arbitrary alpha levels. As the author of an unusually useful textbook on research methods and statistics put it, "As pointed out in hundreds of journal articles, for many applied problems the use of the normal curve can be disastrous. Even under arbitrarily small departures from normality, important discoveries are lost by assuming that observations follow a normal curve." (Fundamentals of Modern Statistical Methods: Substantially Improving Power and Accuracy by R. R. Wilcox; Springer 2010).

Apart from the methodology employed and the techniques employed to analyze the gathered data, there is the issue of why one accepts the null at all. Again, it is almost always because of some arbitrary alpha level. But it's not just arbitrary; it's also based on some statistical test which makes assumptions about the distribution of the gathered data. Finally, even granted that we're dealing with some normal distribution and an adequate method and measure to test whatever it is we're testing, there's a reason even the most elementary research methods textbooks (including intro stats) discuss Type I & II errors.
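The point about small departures from normality can be illustrated with a minimal simulation (an illustrative sketch, not Wilcox's own example; the 10%-contamination mixture and all numbers are assumptions): the same mean shift that a t-style test detects reliably under a normal distribution goes largely undetected when a small fraction of observations come from a much wider distribution.

```python
import random
import statistics

def detect_rate(shift, contaminate, n=30, n_sims=4000, seed=1):
    """Fraction of simulated samples of size n whose t statistic exceeds
    ~2.05 (the two-sided 5% cutoff near n=30) when the true mean is
    `shift`. If `contaminate`, each observation has a 10% chance of
    coming from a ten-times-wider normal (a 'contaminated normal')."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        xs = []
        for _ in range(n):
            sd = 10.0 if (contaminate and rng.random() < 0.1) else 1.0
            xs.append(rng.gauss(shift, sd))
        t = statistics.mean(xs) / (statistics.stdev(xs) / n ** 0.5)
        if abs(t) > 2.05:
            hits += 1
    return hits / n_sims

power_normal = detect_rate(0.5, contaminate=False)  # detects the shift most of the time
power_heavy = detect_rate(0.5, contaminate=True)    # detection rate collapses
```

The contamination barely changes how a histogram looks, yet it inflates the sample standard deviation enough that a real effect is routinely missed, which is exactly the "important discoveries are lost" failure mode.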

Garbage in, garbage out. If one values accuracy, knowledge, and good models, it's much better to know nothing about some topic or phenomenon than to follow some theory or view which resulted from bad methods, data, or analysis. In the former case, the only issue is ignorance. In the latter, ignorance is compounded with a misplaced faith in some analysis of some data.
Last edited by LegionOnomaMoi; 09-14-2012 at 11:37 PM..
  #28
Old 09-15-2012, 01:10 AM
Photonic
Religion: Scientific Pantheist

Quote:
Originally Posted by LegionOnomaMoi View Post
Yes. It's an intro to stats notion that has unfortunately resulted in a great deal of bad research:


The concept of the "null hypothesis" (especially as it is normally formulated) is built around preconceptions of the "normal curve" and arbitrary alpha levels. As the author of an unusually useful textbook on research methods and statistics put it, "As pointed out in hundreds of journal articles, for many applied problems the use of the normal curve can be disastrous. Even under arbitrarily small departures from normality, important discoveries are lost by assuming that observations follow a normal curve." (Fundamentals of Modern Statistical Methods: Substantially Improving Power and Accuracy by R. R. Wilcox; Springer 2010).

Apart from the methodology employed and the techniques employed to analyze the gathered data, there is the issue of why one accepts the null at all. Again, it is almost always because of some arbitrary alpha level. But it's not just arbitrary, it's also based on some statistical test which makes assumptions about the distribution of the gathered data. Finally, even granted that we're dealing with some normal distribution and an adequate method and measure to test whatever it is we are, there's a reason even the most elementary research methods textbooks (including intro stats) discuss Type I & II errors.

Garbage in, garbage out. If one values accuracy, knowledge, and good models, it's much better to know nothing about some topic or phenomenon than to follow some theory or view which resulted from bad methods, data, or analysis. In the former case, the only issue is ignorance. In the latter, ignorance is compounded with a misplaced faith in some analysis of some data.
Hence why I take issue with some of my fellows' rabid adherence to SST (superstring theory).
__________________
"Ford," he said, "you're turning into a penguin. Stop it."But that's not the point!" raged Ford "The point is that I am now a perfectly safe penguin, and my colleague here is rapidly running out of limbs!"

-Hitchikers Guide to the Galaxy
  #29
Old 09-15-2012, 03:28 AM
jarofthoughts

Quote:
Originally Posted by LegionOnomaMoi View Post
Yes. It's an intro to stats notion that has unfortunately resulted in a great deal of bad research:


The concept of the "null hypothesis" (especially as it is normally formulated) is built around preconceptions of the "normal curve" and arbitrary alpha levels. As the author of an unusually useful textbook on research methods and statistics put it, "As pointed out in hundreds of journal articles, for many applied problems the use of the normal curve can be disastrous. Even under arbitrarily small departures from normality, important discoveries are lost by assuming that observations follow a normal curve." (Fundamentals of Modern Statistical Methods: Substantially Improving Power and Accuracy by R. R. Wilcox; Springer 2010).

Apart from the methodology employed and the techniques employed to analyze the gathered data, there is the issue of why one accepts the null at all. Again, it is almost always because of some arbitrary alpha level. But it's not just arbitrary, it's also based on some statistical test which makes assumptions about the distribution of the gathered data. Finally, even granted that we're dealing with some normal distribution and an adequate method and measure to test whatever it is we are, there's a reason even the most elementary research methods textbooks (including intro stats) discuss Type I & II errors.

Garbage in, garbage out. If one values accuracy, knowledge, and good models, it's much better to know nothing about some topic or phenomenon than to follow some theory or view which resulted from bad methods, data, or analysis. In the former case, the only issue is ignorance. In the latter, ignorance is compounded with a misplaced faith in some analysis of some data.
I'm not saying that your point is invalid, and indeed, results that have only poor support from weak evidence are generally not accepted as true, but rather regarded as interesting and requiring more research.
The understanding I'm referring to regarding the Null Hypothesis is the practical notion that all claims are false until shown not to be, thereby falsifying the Null Hypothesis.
Such a view is useful in both science and other topics of discussion, such as religion, because until a claim ("This medicine will cure disease X" or "There is a god") has enough evidential backing to falsify the Null Hypothesis, the claim should be considered false.
And it is in this context I'm saying that a claim with at least some evidence behind it is better than a claim with no evidence behind it, not because either claim must be false, but because one is on its way to falsifying the Null Hypothesis and has a chance of being considered correct, whereas the other should be dismissed out of hand until someone can back it up empirically.
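As a sketch of that accumulation-of-evidence idea (the 14-of-20 figures and the medical framing are invented for illustration), an exact binomial test shows how the same observed proportion moves from "not yet enough to reject the null" to overwhelming as evidence accumulates:

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of a result at
    least this extreme if the null ('no effect', p = 0.5) were true."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical trial: 14 of 20 patients improve on a treatment whose
# null (chance) improvement rate is 50%.
p_small = binom_tail(20, 14)    # ≈ 0.058: suggestive, not yet below 0.05
p_large = binom_tail(200, 140)  # same 70% proportion, ten times the data: vanishingly small
```

The claim's status changes not because the effect changed but because the evidence did, which is the practical content of treating the null as the default until it is falsified.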
  #30
Old 09-15-2012, 01:06 PM
LegionOnomaMoi

Quote:
Originally Posted by jarofthoughts View Post
I'm not saying that your point is invalid, and indeed, results that have only poor support from weak evidence are generally not accepted as true, but rather regarded as interesting and requiring more research.
The understanding I'm referring to regarding the Null Hypothesis is the practical notion that all claims are false until shown not to be, thereby falsifying the Null Hypothesis.
Such a view is useful in both science and other topics of discussion, such as religion, because until a claim ("This medicine will cure disease X" or "There is a god") has enough evidential backing to falsify the Null Hypothesis, the claim should be considered false.
And it is in this context I'm saying that a claim with at least some evidence behind it is better than a claim with no evidence behind it, not because either claim must be false, but because one is on its way to falsifying the Null Hypothesis and has a chance of being considered correct, whereas the other should be dismissed out of hand until someone can back it up empirically.
I get the context, and in most cases of hypothesis testing at the very least one should be trying to show they are wrong (whatever researchers actually do in practice). However, the framework of "null hypothesis" testing is unnecessarily restrictive. For example, if I'm trying to locate regions of the brain involved in the processing of "action words"/verbs but not "things/objects/nouns", I don't really have a clearly defined hypothesis or null hypothesis. I'm assuming that the brain is responsible for processing, sure. But I may not have much in the way of predictions about which regions are used when it comes to verbs. Also, it may be that I can't find distinct regions used for verbs compared to nouns, but this isn't a "null hypothesis".

Then there is the issue of evidence vs. hypothesis. Let's say I believe that processing even abstract concepts relies on sensorimotor regions of the brain (embodied cognition). So I run a bunch of subjects through some neuroimaging experimental paradigm involving words, pictures, or both. I notice that when subjects hear words like "mountain", "shark", "movie", etc., neural regions in the visual cortex "light up", and when subjects hear words like "hammer", "fork", "screwdriver", "baseball", etc., as well as words like "walking", "waving", "kicking", "pointing", etc., not only do I find activity in the motor cortex, but I find that words associated with the activity of particular body-parts (e.g., leg or arm) are differentially activated in the motor cortex somatotopically. Basically, regions which are more involved with leg movements are far more active when the subject hears words like "kicking" or "walking" (or sees images of these actions) relative not only to other action words but also to abstract nouns and pseudowords.

People have been doing this kind of research basically since neuroimaging became possible. The problem is what to make of the data. A common interpretation is that the reason sensorimotor regions are significantly differentially activated when processing words is that even abstract words rely on more basic sensorimotor experience, because humans learn concepts through their motor and sensory experience in the world. Therefore, the meaning of words is distributed not only across regions associated with higher cognitive processing and with memory, but also across sensorimotor regions.

Another interpretation, however, is that the observed activation has nothing to do with semantic processing (i.e., the meaning of the words). There are various explanations for the observed activation in sensorimotor regions, some based on other experimental evidence, but that's not really important here.

The important thing is that the problem isn't a matter of falsification or null hypotheses. It has to do with the adequacy of the experimental paradigms, methods, instruments, but not usually data analyses (i.e., statistical techniques). In principle, embodied cognition is falsifiable in the same way classical models of abstract semantic processing are. However, as the disagreement is not about the data (and often not even about the experimental paradigm), but is mostly over how the methods used were flawed and/or the interpretation of the results were problematic, falsification is pragmatically impossible.

To make it even simpler, we can have two teams carry out identical experiments and get (for all practical purposes) identical results, and have totally different findings because the results are interpreted according to different theoretical frameworks. This makes null hypothesis testing pretty useless, because reaching the desired alpha level is meaningless without the interpretation of exactly what is significant.

And that's without getting into the unbelievable number of ways to misuse statistical models (due to a lack of understanding) and get a result which allows the rejection of the null.
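One concrete way to reach the desired alpha level from pure noise (a standard illustration of the problem, not tied to any study mentioned here) is optional stopping: testing repeatedly as data accumulate and stopping the moment the statistic crosses the threshold.

```python
import random
import statistics

def optional_stopping_rate(max_n=100, start=10, step=5, n_sims=2000, seed=2):
    """On pure noise (the null is true), add `step` observations at a
    time and run a t-style test after each batch, stopping as soon as
    |t| > 1.96 (the large-sample 5% cutoff, which is slightly too lax
    at small n). Returns how often this 'peeking' procedure declares
    significance; a single fixed-n test would do so ~5% of the time."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        xs = [rng.gauss(0, 1) for _ in range(start)]
        while len(xs) <= max_n:
            t = statistics.mean(xs) / (statistics.stdev(xs) / len(xs) ** 0.5)
            if abs(t) > 1.96:
                hits += 1
                break
            xs.extend(rng.gauss(0, 1) for _ in range(step))
    return hits / n_sims

rate = optional_stopping_rate()  # well above the nominal 5%
```

Nothing in the final write-up of such a study need mention the peeking, which is why the resulting "rejection of the null" can be pure artifact.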

So why is it that supporting data is better than no data, if in reality the "supporting data" is simply misleading? If those involved in neuroscience, cognitive neuropsychology, and cognitive science follow the embodied cognition model, but in reality all the evidence for it has resulted from misinterpreting data, poor experimental methods, and similar faults, and their model is wrong, then why is this better than not having all those data? What we have instead is an ever growing theoretical framework built on nothing but air, which will continue to skew perceptions of research results, inspire poorly designed experiments, and in general not only increase ignorance, but mask it with the illusion of increased knowledge.

That's just one example. There are any number of ways to get a similar result. Online surveys for research are becoming increasingly popular, but often enough researchers do not adequately ensure that 1) the same person doesn't submit multiple results, 2) the information about the person, such as age, gender, occupation, etc., is accurate, and 3) the resulting surveys represent an adequate sample of the desired population.
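The first two of those checks can at least be mechanized; a minimal screening pass might look like this (all field names, thresholds, and rows are invented for illustration; point 3, representative sampling, cannot be fixed by filtering at all):

```python
# Hypothetical raw survey submissions.
rows = [
    {"ip": "1.2.3.4", "age": 34, "minutes": 7.2},
    {"ip": "1.2.3.4", "age": 34, "minutes": 6.9},   # likely the same person twice
    {"ip": "5.6.7.8", "age": 213, "minutes": 5.0},  # impossible demographic value
    {"ip": "9.9.9.9", "age": 41, "minutes": 0.3},   # completed too fast to be genuine
]

seen = set()
kept = []
for r in rows:
    key = (r["ip"], r["age"])
    if key in seen:
        continue                       # (1) drop repeat submissions
    if not 13 <= r["age"] <= 110:
        continue                       # (2) drop implausible self-reports
    if r["minutes"] < 1.0:
        continue                       #     drop rushed, low-effort responses
    seen.add(key)
    kept.append(r)
# kept now holds the single plausible response
```

Even with such screening, a self-selected pool of blog readers remains a convenience sample, so the deeper sampling objections raised below survive any amount of data cleaning.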

Again, this is not a thought experiment: published studies have become the center of rather heated debates because of the above issues with internet survey research. And once again, why is "some data" supporting a claim better here than none, when the data is in fact misleading? Two examples which spring to mind concern climate science: one which sought to determine the opinion of climate scientists, and another which sought to understand the skeptical online public (i.e., those who frequent blogs or sites which are skeptical of mainstream climate science). In the first example, the researchers found a large minority of scientists who disagreed with the "mainstream" version, and a lot of criticism was directed at their methods, which (so it was claimed) led to inadequate data that supported their claims only because it was inadequate. In the second example, the researchers surveyed climate blog users and found that belief in conspiracy theories was a predictor of climate skepticism, while "acceptance of science" was correlated with acceptance of the climate consensus. Here too, the researchers were criticized for inadequate sampling and methods, such that their results were (supposedly) spurious.

Both research groups had evidence for their claims which supposedly only supported their claims because it was inadequate in some or multiple ways. If true, then here once more we find, instead of just ignorance, ignorance masked by false knowledge.

I don't see how this is preferable. Working within a theoretical framework built upon insufficient or inadequate evidence means propagating false beliefs, models, and theories. Often enough these frameworks are used in debates of public policy, in constructing laws, for informing public opinion, and even for solutions to social, individual, environmental, and other problems. If we have evidence to support a particular framework, but for some reason (misinterpretation of evidence, poor methods, inadequate data, etc.) the framework is wrong, then this error may be felt throughout society. Just look at the recovered memory movement (or, actually, pretty much the entire history of mental health treatment).
Last edited by LegionOnomaMoi; 09-15-2012 at 01:09 PM..

Tags
accuracy, articles, peer review, science, scientific journals



Copyright 2014 Advameg, Inc.
