I'm not saying that your point is invalid, and indeed, results with only weak evidential support are generally not accepted as true but rather treated as interesting findings that require more research.
The understanding I'm referring to regarding the Null Hypothesis is the practical notion that all claims are treated as false until shown otherwise, i.e., until the evidence warrants rejecting the Null Hypothesis.
Such a view is useful both in science and in other domains of discussion, such as religion, because until a claim ("This medicine will cure disease X" or "There is a god") has enough evidential backing to reject the Null Hypothesis, the claim should be considered false.
And it is in this context that I'm saying a claim with at least some evidence behind it is better than a claim with no evidence behind it: not because either claim must be false, but because one is on its way to rejecting the Null Hypothesis and has a chance of being considered correct, whereas the other should be dismissed out of hand until someone can back it up empirically.
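To make that framing concrete, here is a minimal sketch (with entirely made-up numbers) of what "rejecting the Null Hypothesis" looks like for a claim like "this medicine will cure disease X": the claim starts in the "considered false" column and only moves out of it if the data are sufficiently improbable under the null.

```python
from scipy.stats import binomtest

# Hypothetical trial: 48 of 120 patients recover on the drug,
# tested against an assumed 30% baseline spontaneous recovery rate.
result = binomtest(k=48, n=120, p=0.30, alternative="greater")

# Until p falls below the chosen alpha, the claim remains in the
# "considered false" column described above.
if result.pvalue < 0.05:
    print(f"p = {result.pvalue:.4f}: reject the null; claim gains provisional support")
else:
    print(f"p = {result.pvalue:.4f}: insufficient evidence; claim stays dismissed")
```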
I get the context, and in most cases of hypothesis testing one should, at the very least, be trying to show that one is wrong (whatever researchers actually do in practice). However, the framework of "null hypothesis" testing is unnecessarily restrictive. For example, if I'm trying to locate regions of the brain involved in processing "action words"/verbs but not "things/objects/nouns", I don't really have a clearly defined hypothesis or null hypothesis. I'm assuming that the brain is responsible for processing, sure. But I may not have much in the way of predictions about which regions are used when it comes to verbs. Also, it may be that I can't find distinct regions used for verbs compared to nouns, but this isn't a "null hypothesis".
Then there is the issue of evidence vs. hypothesis. Let's say I believe that processing even abstract concepts relies on sensorimotor regions of the brain (embodied cognition). So I run a bunch of subjects through some neuroimaging experimental paradigm involving words, pictures, or both. I notice that when subjects hear words like "mountain", "shark", "movie", etc., neural regions in the visual cortex "light up", and when subjects hear words like "hammer", "fork", "screwdriver", "baseball", etc., as well as words like "walking", "waving", "kicking", "pointing", etc., not only do I find activity in the motor cortex, but I find that words associated with the activity of particular body parts (e.g., leg or arm) differentially activate the motor cortex somatotopically. Basically, regions more involved with leg movements are far more active when the subject hears words like "kicking" or "walking" (or sees images of these actions) relative not only to other action words but also to abstract nouns and pseudowords.
People have been doing this kind of research basically since neuroimaging became possible. The problem is what to make of the data. A common interpretation is that sensorimotor regions show significant differential activation during word processing because even abstract words rely on more basic sensorimotor experience: humans learn concepts through their motor and sensory experience in the world. On this view, the meaning of words is distributed not only across regions associated with higher cognitive processing and memory, but also across sensorimotor regions.
Another interpretation, however, is that the observed activation has nothing to do with semantic processing (i.e., the meaning of the words). There are various alternative explanations for the activation in sensorimotor regions, some based on other experimental evidence, but the details aren't important here.
The important thing is that the problem isn't a matter of falsification or null hypotheses. It has to do with the adequacy of the experimental paradigms, methods, and instruments, but usually not the data analyses (i.e., statistical techniques). In principle, embodied cognition is falsifiable in the same way classical models of abstract semantic processing are. However, as the disagreement is not about the data (and often not even about the experimental paradigm), but mostly over whether the methods used were flawed and/or the interpretation of the results was problematic, falsification is pragmatically impossible.
To make it even simpler, we can have two teams carry out identical experiments and get (for all practical purposes) identical results, yet report totally different findings, because the results are interpreted according to different theoretical frameworks. This makes null hypothesis testing pretty useless, because reaching the desired alpha level is meaningless without an interpretation of exactly what is significant.
And that's without getting into the unbelievable number of ways to misuse statistical models (due to a lack of understanding) and get a result which allows the rejection of the null.
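To see how cheaply such rejections come, here is a toy simulation (my own sketch, not drawn from any particular study): run many uncorrected tests on pure noise, and some of them will reliably cross the significance threshold.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_tests = 1000   # e.g., 1000 voxels or regions, all pure noise by construction
alpha = 0.05

false_rejections = 0
for _ in range(n_tests):
    # Both "conditions" are drawn from the same distribution,
    # so every null hypothesis here is true.
    a = rng.normal(0.0, 1.0, 30)
    b = rng.normal(0.0, 1.0, 30)
    if ttest_ind(a, b).pvalue < alpha:
        false_rejections += 1

print(f"{false_rejections}/{n_tests} true nulls rejected "
      f"(roughly {100 * alpha:.0f}%, exactly as the alpha level promises)")
```

At around 5% false rejections per test, a whole-brain analysis over thousands of voxels is guaranteed some spurious "significant" activations unless multiple-comparisons corrections are applied, and that is before any of the interpretive problems above even enter the picture.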
So why is it that supporting data is better than no data, if in reality the "supporting data" is simply misleading? If those involved in neuroscience, cognitive neuropsychology, and cognitive science follow the embodied cognition model, but all the evidence for it has resulted from misinterpreted data, poor experimental methods, and similar faults, and the model is wrong, then why is this better than not having those data at all? What we have instead is an ever-growing theoretical framework built on nothing but air, one which will continue to skew perceptions of research results, inspire poorly designed experiments, and, in general, not only increase ignorance but mask it with the illusion of increased knowledge.
That's just one example. There are any number of ways to get a similar result. Online surveys are an increasingly popular research tool, but often enough researchers do not adequately ensure that 1) the same person doesn't submit multiple responses, 2) the information respondents give about themselves (age, gender, occupation, etc.) is accurate, and 3) the resulting responses represent an adequate sample of the desired population.
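The frustrating part is that the crudest of these problems are cheap to screen for. As a rough sketch (the file, column names, and thresholds are all my own assumptions, not taken from any actual study), the three checks above might look like:

```python
import pandas as pd

# Hypothetical export of an online survey; all column names are assumptions.
df = pd.read_csv("survey_responses.csv")

# 1) Multiple submissions: drop repeated respondent IDs
#    (a weak proxy at best; determined duplicates can evade it).
df = df.drop_duplicates(subset="respondent_id")

# 2) Self-reported accuracy: at minimum, filter implausible values
#    rather than trusting every field as given.
df = df[df["age"].between(13, 100)]

# 3) Sample adequacy: inspect the sample's demographic margins
#    against the target population before generalizing.
print(df["gender"].value_counts(normalize=True))
print(df["occupation"].value_counts(normalize=True).head())
```

None of this touches self-selection, which is the deeper problem with blog-recruited samples; it only catches the crudest errors, which is rather the point: even the easy checks are frequently absent.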
Again, this is not a thought experiment: published studies have become the center of rather heated debates because of the above issues with internet survey research. And once again, why is "some data" supporting a claim better here than none, when the data is in fact misleading? Two examples which spring to mind concern climate science: one which sought to determine the opinion of climate scientists, and another which sought to understand the skeptical online public (i.e., those who frequent blogs or sites which are skeptical of mainstream climate science). In the first example, the researchers found a large minority of scientists who disagreed with the "mainstream" position, and much criticism was directed at their methods, which (so it was claimed) led to inadequate data that supported their claims only because it was inadequate. In the second example, the researchers surveyed climate blog users and found that belief in conspiracy theories predicted climate skepticism, while "acceptance of science" correlated with acceptance of the climate consensus. Here too, the researchers were criticized for inadequate sampling and methods, such that their results were (supposedly) spurious.
Both research groups had evidence for their claims, evidence which (so the critics argued) supported those claims only because it was inadequate in one or more ways. If true, then here once more we find, instead of simple ignorance, ignorance masked by false knowledge.
I don't see how this is preferable. Working within a theoretical framework built upon insufficient or inadequate evidence means propagating false beliefs, models, and theories. Often enough these frameworks are used in public policy debates, in drafting laws, in informing public opinion, and even in devising solutions to social, individual, environmental, and other problems. If we have evidence to support a particular framework, but for some reason (misinterpretation of evidence, poor methods, inadequate data, etc.) the framework is wrong, then the error may be felt throughout society. Just look at the recovered memory movement (or, really, pretty much the entire history of mental health treatment).