
Conservatives Big on Fear, Brain Study Finds

The Sum of Awe

Brought to you by the moment that spacetime began.
Man, I should know better than to look at threads on this side of the forum - there's NEVER agreement :D
 

Shadow Wolf

Certified People sTabber
Uhhh, not seeing any belittling in the article. Just facts. Not my problem you're blowing it completely out of proportion. You sound fearful when you shouldn't be.
Not my problem you can't see the problems with something like "Given that their brains are so different, it is hardly surprising that liberals and conservatives should spend so much time talking across each other and never achieving real dialog or consensus." And really, it is the amount of exposure to differing groups that is a far greater predictor of inter-group dynamics and relations. It may be an experiment with fancy equipment, but this thing called the "real world" suggests otherwise: people with different brains do have real dialogue and do accomplish things. And when you start saying that these conservatives (yet we still do not know whether fiscal, social, European, or what type of Conservative exactly) are more fearful than Liberals, you have really achieved nothing except demonizing your enemy, when in reality they shouldn't even be an enemy. And while we are at it, what sort of Liberals seemed to have less fear?
And of course a lab is not an ideal place to measure fear, as no one can predict how they themselves or anyone else will react in a high-stress/fearful situation unless they have been there. Just having a larger amygdala may make one more prone to fear, but it does not mean that individual has not learned to control that fear.
And after all, the oldest and strongest emotion of humans is fear, and the oldest and strongest type of fear is fear of the unknown. Liberal or Conservative, humans are an easily frightened bunch.
 

Sunstone

De Diablo Del Fora
Premium Member
In terms of survival, being especially fearful has both advantages and disadvantages. One can see why a conservative disposition would have evolved.
 

Alex_G

Enlightner of the Senses
Interesting findings. Seems these research studies continue proving themselves correct. I see this in the comments here and elsewhere.

http://www.psychologytoday.com/blog/the-human-beast/201104/conservatives-big-fear-brain-study-finds

The amygdala isn't synonymous with the 'fear bit of the brain'. It's involved in emotional memory, imprinting, and conditioning; it's far more sophisticated than a 'fear centre', and that's within the context of the limited knowledge we currently have of the brain's inner workings.

Additionally, anatomical size doesn't necessarily correlate with functionality. A larger amygdala doesn't automatically mean a more highly functioning one.

Also, I'm wary of interpretations of study results; this is often the stage where bias and error come in. If you're looking for a certain result, or for confirmation of an already held belief, you will likely find it.
 

YmirGF

Bodhisattva in Recovery
The amygdala isn't synonymous with the 'fear bit of the brain'. It's involved in emotional memory, imprinting, and conditioning; it's far more sophisticated than a 'fear centre', and that's within the context of the limited knowledge we currently have of the brain's inner workings.

Additionally, anatomical size doesn't necessarily correlate with functionality. A larger amygdala doesn't automatically mean a more highly functioning one.

Also, I'm wary of interpretations of study results; this is often the stage where bias and error come in. If you're looking for a certain result, or for confirmation of an already held belief, you will likely find it.
:clap:clap:clap:clap:clap:clap:clap:clap

From the BBC article on Colin Firth's "contribution"
New York University's Professor John Jost, one of the world's leading authorities in political psychology, told The Psychologist magazine:
"It is a useful contribution because it builds on and extends previous work."
"It will probably be several years before we understand the full meaning of these results."
 

Revoltingest

Pragmatic Libertarian
Premium Member
Today's word is "tendentious".
Expressing or intending to promote a particular cause or point of view, esp. a controversial one: "a tendentious reading of history".
 

LegionOnomaMoi

Veteran Member
Premium Member
Interesting findings. Seems these research studies continue proving themselves correct. I see this in the comments here and elsewhere.

These findings are completely meaningless for a number of reasons. I'll try to give a point-by-point non-technical summary and then a more detailed one for those interested.

1) The researchers determined political orientation primarily using a five point scale (very liberal to very conservative). Out of their 90 participants, none chose "very conservative", so they simply changed the scale after getting responses on a different scale (they manipulated their data).
2) The amygdala (and, additionally, the anterior cingulate cortex (ACC), which is also what the researchers looked at) is implicated in lots of things. The ACC, in fact, is implicated in Resolving Emotional Conflicts (that's a 2012 study from Neuron, one of the most respected and important journals in neuroscience, unlike Current Biology, which is where the OP's study was published).
3) Emotional regulation, "fear", and their relationship to the various parts of the cortex and amygdala are also poorly understood, even when it comes to fear and emotional regulation in general (see e.g., full-text studies here & here).
4) The basis for identifying the amygdala with fear comes from studies on rats and classical conditioning. When neuroscience enabled us to understand how poorly neural structures (particularly massive ones like the amygdala) can be categorized as "responsible for fear response" and so on, the focus shifted to finding correlates and interactions. That's why the authors of the study looked at the ACC: it's known that emotional regulation (including fear) involves an interaction between cortical regions like the ACC, PFC, etc. What is not well known is how this works.
5) The authors "characterized the extent to which these correlations between gray matter volume and political attitudes might permit us to determine the political attitudes of a single individual based on their structural MRI scan" using a mathematical tool called a "support vector machine algorithm". Usually, when researchers use a statistical technique more complicated than the most basic bivariate correlation measures or linear regression models (and sometimes even then), they cite a source or sources to indicate that the technique has been used like this before. Here, the authors cited a 1998 book, Statistical Learning Theory, which contains an entire section of several chapters on various types of support vector methods and their uses, so it is unclear which algorithm the authors used.
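For anyone who wants to see what point 5 is describing, here is a minimal sketch of that kind of analysis. This is emphatically not the authors' code (the paper never specifies which algorithm was used); the data, the two regions, and the linear kernel are all hypothetical stand-ins.

```python
# Minimal sketch of the kind of analysis described in point 5 -- NOT the
# authors' code. Data, features, and kernel choice are hypothetical.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_subjects = 90

# Hypothetical predictors: gray-matter volumes in two regions of interest.
X = rng.normal(loc=[1.5, 2.0], scale=0.2, size=(n_subjects, 2))
# Hypothetical labels: 0 = liberal, 1 = conservative.
y = rng.integers(0, 2, size=n_subjects)

# Leave-one-out cross-validation: hold out each subject in turn and ask
# whether their orientation can be predicted from their scan alone.
clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"Classification accuracy: {scores.mean():.2f}")
```

On random data like this, accuracy should hover around chance (~0.5); the study's claim amounts to saying theirs was reliably above that, using some unspecified variant of this procedure.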


Ok, now the more detailed parts.

About the scale. The authors clearly wanted to use the standard Likert scale running from "very this" to "very that". However, nobody responded with "very conservative". So they created a new scale which had "conservative", "middle of the road", "liberal", and "very liberal". First, relying on any single measure to determine political orientation is almost never done, because
1) In places (like universities and the UK) where people tend to negatively judge those who identify themselves as conservative, participants lie. We've known this for decades, and there are whole subfields in various behavioral and social sciences devoted to participants lying (even in anonymous, online surveys). We also know that participants tend to respond in ways that they think the researchers either want or would approve of. One reason why psychologists and sociologists who study complex things like personality traits use multiple measures and often long surveys is that it is harder for the participants to consciously or unconsciously report things that aren't true. For example, in long questionnaires, different questions can be used to measure the same thing (like conservative orientation), so that these multiple responses can be evaluated as a composite measure which tends to wash out inaccurate and/or untruthful responses and gives a much better indication of whatever is being measured (the first sketch after this list shows the effect in miniature).
2) The researchers didn't just use one simplistic measure known to be problematic. They didn't get any responses for one of the categories, so they altered the scale. Basically, they eliminated "very conservative". The problem here is that now they have weighted the entire sample pool towards "liberal", meaning they increased how significant the response "conservative" was. To see this, think about the original scale. There are two responses for liberal, and two for conservative. In the new scale, there are still two for liberal, but only one for conservative. Even without factoring in the phenomenon of clustering on Likert scales (people tend to go with the middle option more often than any other), you've now increased how much "conservative" stands out from the other responses (the second sketch after this list puts numbers on this).
3) Likert scales are used for categorical or nominal data, which is what political orientation is. Correlation is determined using statistical techniques on numbers. What researchers frequently do is simply convert the responses to numbers, as they did here (1, 2, 3, & 4). The problem is that regression/correlation tests, like the one the authors used, are not valid for nominal/categorical data. If you have a survey with lots of questions that use a Likert scale, the invalidity of the transformation from categories to numbers may be washed out (although determining this is difficult). Using a single question and then performing a multiple regression analysis (which is what the authors did) is incredible. It is such an invalid use of multiple regression the mind boggles (the third sketch below shows how arbitrary the number-coding is).
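On point 1, here is a toy demonstration (hypothetical numbers, nothing from the study) of why composites from long questionnaires track a trait better than any single question does:

```python
# Toy illustration (hypothetical data): averaging many noisy items tracks
# the underlying trait far better than any single item does.
import numpy as np

rng = np.random.default_rng(2)
n = 500
trait = rng.normal(size=n)                                    # the thing being measured
items = trait[:, None] + rng.normal(scale=2.0, size=(n, 10))  # 10 noisy questions

single = items[:, 0]
composite = items.mean(axis=1)

print(np.corrcoef(trait, single)[0, 1])     # weak (~0.45 here)
print(np.corrcoef(trait, composite)[0, 1])  # much stronger (~0.85 here)
```

With ten items whose noise is independent, inaccurate or untruthful individual answers get washed out in the average, which is exactly what a single-question measure cannot do.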
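On point 2, here are toy numbers on the scale change. The response counts below are hypothetical; the point is only that identical responses sit noticeably closer to the conservative pole once "very conservative" is dropped:

```python
# Toy illustration (hypothetical response counts, not the study's data) of
# how dropping "very conservative" re-weights the scale: "conservative"
# becomes the lone extreme on its side of the collapsed 4-point scale.
import numpy as np

# Hypothetical counts: [very lib, lib, middle, cons, very cons]
counts5 = np.array([20, 30, 25, 15, 0])
codes5 = np.array([1, 2, 3, 4, 5])

# Collapsed scale: [very lib, lib, middle, cons]
counts4 = counts5[:4]
codes4 = np.array([1, 2, 3, 4])

def scaled_mean(codes, counts):
    """Mean response expressed as a fraction of the scale's range."""
    m = (codes * counts).sum() / counts.sum()
    return (m - codes.min()) / (codes.max() - codes.min())

print(scaled_mean(codes5, counts5))  # ~0.35 on the 5-point scale
print(scaled_mean(codes4, counts4))  # ~0.46: same people, 4-point scale
```

The same 90-odd responses move from about 35% to about 46% of the way toward the conservative end, purely because the scale was collapsed after the fact.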
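On point 3, a toy demonstration (again hypothetical data, not the study's) that the numbers assigned to ordered categories are arbitrary, so correlations and regressions computed on them are partly an artifact of the coding:

```python
# Toy demonstration: correlations computed on ordinal category codes depend
# on an arbitrary choice of numbers. Both codings below preserve the order
# of the categories, so neither is "more correct", yet the Pearson
# correlation with the outcome changes.
import numpy as np

rng = np.random.default_rng(1)
category = rng.integers(0, 4, size=90)             # 4 ordered categories
outcome = category + rng.normal(0, 1.5, size=90)   # hypothetical gray-matter measure

coding_a = np.array([1, 2, 3, 4])[category]        # evenly spaced codes
coding_b = np.array([1, 2, 3, 10])[category]       # also order-preserving

print(np.corrcoef(coding_a, outcome)[0, 1])
print(np.corrcoef(coding_b, outcome)[0, 1])
```

Both codings respect the category order, so neither is more defensible, yet they yield different correlations; with a single four-category predictor there is nothing to wash this arbitrariness out.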

Which brings up the researchers' knowledge of statistics and math. The fact that it is seriously deficient is also indicated by the citation of that textbook. Usually, citations like this are to a specific study, often to the study in which the mathematical tool was developed. The reasons for these citations are, first, to show what exactly the researchers did, and, second, to show why it is an appropriate method. They cited a textbook. No page number. No indication of what algorithm they used or why it is appropriate. This defeats the entire point of such citations, but it does indicate that the researchers didn't really understand what it was they did. They used MATLAB, in which such methods can be run by anybody, as all you need is numbers; you enter some commands and WHAM, out pops your value. So, they went to a textbook, found some classification method they figured was probably OK (or maybe tried a bunch and settled on the one that gave the right result, which is something I've seen done), and used it. But as they didn't really know what they were doing, and had to indicate why it was they used this method, they cited the entire textbook. They hid what it was they did.


Finally, as noted earlier, the relationship between fear regulation and these neural structures is very contentious. For one thing, the same regions that are involved in expressing fear are involved with suppressing fear, along with a host of other emotional and cognitive processes. For another, it is well-known that the relationship between fear and the amygdala and cortex involves a lot of different parts of both brain areas, including many the authors didn't even look at other than in their "whole brain" analysis, which didn't find anything significant.

Had the researchers used a valid measure of political orientation and valid statistical measures, and still gotten the results they did, one could just as easily say that being conservative tends to be associated with resolving emotional conflict. In fact, two of the three studies they cite on the role of the ACC have to do with "optimal decision making" and "conflict monitoring", and other studies (such as the one linked to on resolution of emotional conflict) show that this area is more involved in controlling, monitoring, and inhibiting emotional responses, as well as in cognitive processes (e.g., decision making). Given that, the study actually provides more evidence that conservatives are more likely to be able to resolve emotional conflicts and to mediate automated/instinctive reactions to stimuli (i.e., the stuff that involves the amygdala and emotion) using higher-level cognitive processing than it does for what the authors state.
 

tytlyf

Not Religious
These findings are completely meaningless for a number of reasons. I'll try to give a point-by-point non-technical summary and then a more detailed one for those interested.

1) The researchers determined political orientation primarily using a five point scale (very liberal to very conservative). Out of their 90 participants, none chose "very conservative", so they simply changed the scale after getting responses on a different scale (they manipulated their data).
Where are you finding this information regarding the OP?

EDIT: Found it. You claim they manipulated the data out of spite?
 

LegionOnomaMoi

Veteran Member
Premium Member
Where are you finding this information regarding the OP?

EDIT: Found it. You claim they manipulated the data out of spite?
No, not really. Also, I don't know which manipulation you are referring to. I am not sure which things they did because they didn't know any better, and which they did knowing that it was bad research but wanting to get certain results. After all, it is not typical for a neuroscience study to have one author who is an actor and another who is a journalist. Also, Kanai has worked almost exclusively on vision and the visual system. It's what he did his doctoral thesis on, and even before that, after earning a B.Sc., he published a number of papers on the subject. Rees is a bit of an oddball. He's one of the neuroscientists who came into the field from another one, much like a number of physicists. At any rate, nobody in the group had any experience with this kind of research.

But that doesn't matter. The head of my lab put out an article on fuzzy sets in the 70s, and when I asked him about it and whether he'd done any more work on fuzzy set theory or fuzzy systems, he said it was just one of the things he'd dabbled in.

The problems with the study are those I already noted. A central issue in the sciences (mostly the social, behavioral, and to a lesser extent life sciences) is the increasing ability to perform incredibly sophisticated mathematical analyses without knowing any real math, thanks to statistical packages like SPSS or more general (and more powerful) statistical & mathematical modelling software like MATLAB.

There are a couple of "hot spots" when it comes to neuroscience, because 1) it's a relatively new field and 2) it involves really expensive equipment, so only a few places have a long history (as neuroscience goes) of working in the field. Most are those who come out of areas which were "hot spots" for cognitive science. Namely, labs at Harvard, MIT, UC Berkeley, etc. Actually, in the US the big centers are usually on the East coast or the West. I worked at one such lab. As a result, I happen to know not only some of the things which are done during neuroimaging experiments like fMRI studies, but also the level of mathematical training/knowledge that a number of students even at graduate and doctoral level (and even at such prestigious places) have. For many, it's almost none. Too many graduate programs require only one course in multivariate statistics, which would be great if the students who attend had a sufficient grasp of multivariate mathematics (esp. linear algebra and multivariate calculus) to understand the actual concepts behind the statistical methods covered. They don't. It's a problem that has been addressed many times over the years:
In fact, in addition to a number of books designed to give researchers a better understanding of the techniques they use, the last few decades have seen ever more complaints, criticisms, and warnings about the misuse of statistics within the social sciences. To give just a few examples, we have Rein Taagepera's Making Social Sciences More Scientific: The Need for Predictive Models (Oxford University Press, 2008), Peter Fayers' "Alphas, betas and skewy distributions: two ways of getting the wrong answer" (Advances in Health Science Education, vol. 16), the edited volume Measurement in the Social Sciences: Theories and Strategies (1974), Gerd Gigerenzer's "Mindless statistics" (The Journal of Socio-Economics, vol. 33), Taagepera's "Adding meaning to regression" (European Political Science 10), and on and on.

The problem is that most of the people who seem to read or take such works seriously are the same people who are already aware of the problems. And that's without getting into the lack of instruction on the underlying philosophy, epistemology, and justification for standard methodological approaches.

Then there is the additional problem of neuroimaging.
In one fMRI study I was involved in, we did look at scans of people who had to make response judgments about religious and other groups or categories of people. At best, you can argue at the end that there is some consistent way in which members of some category (e.g., "Christians" or "religious people") are processed by the same parts of the brain that also process categories (or other members of the same category) which are perceived as similar. Thus, for example, I might be able to say that religious people tend to use some PFC region to a statistically significantly greater extent than non-religious people, but even that would be contentious. As for interpreting what that means, it would be useless.
Let's say I believe that processing even abstract concepts relies on sensorimotor regions of the brain (embodied cognition). So I run a bunch of subjects through some neuroimaging experimental paradigm involving words, pictures, or both. I notice that when subjects hear words like "mountain", "shark", "movie", etc., neural regions in the visual cortex "light up", and when subjects hear words like "hammer", "fork", "screwdriver", "baseball", etc., as well as words like "walking", "waving", "kicking", "pointing", etc., not only do I find activity in the motor cortex, but I find that words associated with the activity of particular body-parts (e.g., leg or arm) are differentially activated in the motor cortex somatotopically. Basically, regions which are more involved with leg movements are far more active when the subject hears words like "kicking" or "walking" (or sees images of these actions) relative not only to other action words but also to abstract nouns and pseudowords.

People have been doing this kind of research basically since neuroimaging became possible. The problem is what to make of the data. A common interpretation is that the reason sensorimotor regions are significantly differentially activated when processing words is that even abstract words rely on more basic sensorimotor experience, because humans learn concepts through their motor and sensory experience in the world. Therefore, the meaning of words is distributed not only across regions associated with higher cognitive processing and with memory, but also across sensorimotor regions.

Another interpretation, however, is that the observed activation has nothing to do with semantic processing (i.e., the meaning of the words). There are various explanations for the observed activation in sensorimotor regions, some based on other experimental evidence, but that's not really important here.

The important thing is that the problem isn't a matter of falsification or null hypotheses. It has to do with the adequacy of the experimental paradigms, methods, instruments, but not usually data analyses (i.e., statistical techniques). In principle, embodied cognition is falsifiable in the same way classical models of abstract semantic processing are. However, as the disagreement is not about the data (and often not even about the experimental paradigm), but is mostly over how the methods used were flawed and/or the interpretation of the results were problematic, falsification is pragmatically impossible.

To make it even simpler, we can have two teams carry out identical experiments and get (for all practical purposes) identical results, and have totally different findings because the results are interpreted according to different theoretical frameworks. This makes null hypothesis testing pretty useless, because reaching the desired alpha level is meaningless without the interpretation of exactly what is significant.

And that's without getting into the unbelievable number of ways to misuse statistical models (due to a lack of understanding) and get a result which allows the rejection of the null hypothesis.

Neuroimaging research has come under attack for being shoddy more than once. In fact, this particular study has, not just in popular literature but in peer-reviewed literature like "Of BOLD Claims and Excessive Fears: A Call for Caution and Patience Regarding Political Neuroscience" Political Psychology, 33(1), 27-43. or more explicitly in "Political ideology as motivated social cognition: Behavioral and neuroscientific evidence." Motivation and Emotion, 36(1), 55-64. (a study almost as bad). There really isn't any part of fMRI practice, from ROI selection methods to voxel size to signal processing to experimental design which hasn't been criticized. But as the public eats these studies up, and seeing brainscans is just so cool, the bad science just doesn't stop.
 

YmirGF

Bodhisattva in Recovery
---- Brevity snip ----
But as the public eats these studies up, and seeing brainscans is just so cool, the bad science just doesn't stop.
---- End Brevity snip ----
I really owe you a big fat pack of frubals for your contributions. Awesome stuff.
 

tytlyf

Not Religious
No, not really. Also, I don't know which manipulation you are referring to. I am not sure which things they did because they didn't know any better, and which they did knowing that it was bad research but wanting to get certain results.
So your point is that the study referenced in the OP uses fuzzy math and is really pointing to this part of the brain labelled as fear, when in reality it could mean fear and other attributes?
 

LegionOnomaMoi

Veteran Member
Premium Member
So your point is that the study referenced in the OP uses fuzzy math and is really pointing to this part of the brain labelled as fear, when in reality it could mean fear and other attributes?
No, sorry. The reference to fuzzy sets was to show that just because neither of the scientists had worked on this kind of study before doesn't mean that they didn't have any clue. The cognitive sciences are extremely broad in terms of the number of different fields they involve. So even though someone might specialize in the visual system, that doesn't per se prevent them from being capable of talking about a large number of other topics. On the one hand, the cognitive sciences include philosophers (of language, of mind, of science, of logic, etc.), linguists, even anthropologists. On the other, they include engineers, computer scientists, mathematicians, physicists, biologists, etc. Clearly, most biologists don't work in the cognitive sciences, nor do most of the others listed. This is because it is an interdisciplinary field.

Before going into the study again, some context might help.

To try to illustrate this (what being a specialist in an interdisciplinary field like cognitive science means), I'll give a few examples of the type of research taking place at certain places and who is involved.

For example, if you just visit UC Berkeley's Institute for Cognitive and Brain Sciences, there is a nice little image showing how some fields intersect. If you click on the link "Research", you can see some diversity in the kinds of research done. This group is similar to the center I worked at (I still say things like "head of my lab" as I only left because I had to move for financial reasons, and am in denial about it, so please forgive the changes in tense). There was a social cognition lab that my lab partnered up with (all the labs actually do that; in fact, the main collaborator my lab had was a university in Italy). But there were also the labs of Steven Pinker and Caramazza and those like them: old-school mainstream cognitive psychologists.

Contrast that with the cognitive science work at MIT or at Johns Hopkins. If you go to the research page at MIT here, and click on the "cognitive science" link, you'll get a feel for what I'm talking about. You still have the "mainstream" centers like "Brain and Cognitive Sciences", but you also have the "Nonlinear Systems Laboratory" and the "Computer Science and Artificial Intelligence Laboratory (CSAIL)". Go to Johns Hopkins and it's hard to find mainstream cognitive psychologists. In fact, if you go to the Applied Physics Lab site, it's not even obvious at first that they've done a lot of work in this area in various ways, such as their "Research Program in Applied Neuroscience".

Finally, we have academic societies that are not affiliated with any one university but which (among other things) hold conferences to bring together people working in the field to network, give presentations, and assess the state of research; usually the presentations chosen get the opportunity to be published in the peer-reviewed conference proceedings.

I'll give only one example, as this is already lengthy. Every year since 1984 an annual Human-Computer Interaction (HCI) conference has been held, and for the past five years it has been held jointly with a number of other conferences. Basically, it's really all one thing, but the conferences are talked about as if they were separate because, among other things, new groups form and want to join. At HCI 2013, for example, two new conferences will be there for the first time. Again, it's all the same place and is more one conference than many, but another reason for distinguishing the various conferences is how the papers get published in the peer-reviewed conference proceedings volumes. At the 2011 gathering, "a total of 4,039 individuals from academia, research institutes, industry and governmental agencies from 67 countries submitted contributions, and 1,318 papers that were judged to be of high scientific quality were included in the program". The conference proceedings were published in 23 volumes, so having different conference names helps break down the scheduling of presentations as well as how accepted papers are organized into the various volumes.

At that conference was a Russian psychologist who studied preschool "gamers", a researcher from the U.S. Army Research Institute who works on unmanned vehicles, a group of Japanese researchers who studied the effect of brightness on distractions which result in car accidents, and another group who studied the relationship between internet anxiety and human behaviors. There were lots more, of course, but that sample is meant to give an idea of the range of topics we're dealing with.

Basically, one can work in cognitive science and be a mathematical physicist or a sociologist. However, the reason that people from so many fields can work in the cognitive sciences is because so many things relate to what is really the core of the field: brains and cognition/thinking (in animals, humans, computers, or robots).

Take Kanai, one of the two scientists in the study. His work has been almost entirely on the visual system. Now, I can say that about someone in cog. sci. and it could mean that they are an engineer working on robots or unmanned vehicles, or it could mean that they primarily study attention/perception in humans. Kanai's work has almost all been related to how the eyes interact with the brain. In particular, his work has concentrated on the unconscious, automatic nature of reactions in the visual system to the presentation of various types of stimuli. In other words, most of his work has concerned things like shape and color, the visual pathways, instinctive eye movements, and so on.

It also means that almost none of his work before teaming up with Rees dealt with how humans actually think about the world (conceptual representation, conceptual categorization, judgment, language processing, etc.).
 

Quagmire

Imaginary talking monkey
Staff member
Premium Member
It's pretty much splitting hairs to say that one category of people is more fearful than another category of people. All of us humans are basically scared little creatures scurrying around, randomly bumping into things, and then pretending like we meant to bump into them all along.

And usually accusing them of having bumped into us. :D
 

LegionOnomaMoi

Veteran Member
Premium Member
So, why is it relevant that the cognitive sciences are so broad? Because although so many areas overlap, there are also lots of places where they don't. Kanai, for example, knows lots of things about the visual system that I don't know. There are terms in some of his work that I'm barely familiar with. Same with Rees, who has also worked a lot on the visual system. But does that mean that either one is familiar with experimental work (particularly neuroimaging studies) on how abstract categories like political orientation are represented in the brain? Or with how conceptual processing and similar abstract cognitive processes relate to neural regions which deal primarily or exclusively with automated responses, unconscious regulation of the peripheral nervous system, and so on? Does it mean that either has a background in mathematics? Is either familiar with the literature (which is extremely technical) on the problematic aspects of neuroimaging on the image-processing side? For example, in their study the authors write:

"T1-weighted MR images were first segmented for grey matter and white matter using the segmentation tools in Statistical Parametric Mapping 8 (SPM8, SPM - Statistical Parametric Mapping). Subsequently, we performed diffeomorphic anatomical registration through exponentiated lie algebra in SPM8 for intersubject registration of the grey matter images [26]. To ensure that the total amount of gray matter was conserved after spatial transformation, we modulated the transformed images by the Jacobian determinants of the deformation field .The registered images were then smoothed with a Gaussian kernel of 12 mm full-width half-maximum and were then transformed to Montreal Neurological Institute stereotactic space using affine and nonlinear spatial normalization implemented in SPM8."

Let's break this down a bit. First, they "segmented" the images into white and grey matter and did so using SPM8. What does this mean? Well, SPM8 is a software tool developed as an add-on to MATLAB so that people like Kanai and Rees can press a button and hey-presto, segmentation complete. They don't have to know anything about what's involved. In fact, it is set up so that they won't (most of the manual, the help files, etc., available on the site they linked to do not explain what's going on or what the processes mean, only how to do them).

Next, they get really fancy (it seems). They "performed diffeomorphic anatomical registration through exponentiated lie algebra". Wow.

Well, actually, although this sounds really fancy, it really comes from the acronym DARTEL, which is part of the SPM8 software package. What does the SPM8 manual say about this acronym? "DARTEL stands for "Diffeomorphic Anatomical Registration Through Exponentiated Lie algebra". It may not use a true Lie Algebra, but the acronym is a nice one."

To the normal human, "Lie algebra" sounds like a program you might use to cheat on a high school math test. In reality, it has to do with a set of related mathematical notions (fields, vector spaces, commutators, groups, etc.) which might be grouped under the name "abstract algebra".
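For the curious, here is roughly what DARTEL actually computes (this follows Ashburner's 2007 DARTEL paper, not anything spelled out in the study under discussion): it builds a deformation by exponentiating a single flow field, approximated by repeated composition ("scaling and squaring"):

```latex
% phi: the deformation mapping one brain image onto another
% u:   a single, time-constant flow (velocity) field
\varphi = \exp(u) \approx
  \underbrace{\varphi^{(1/2^N)} \circ \cdots \circ \varphi^{(1/2^N)}}_{2^N \text{ compositions}},
\qquad
\varphi^{(1/2^N)} = \mathrm{id} + \frac{u}{2^N}
```

The "exponentiated Lie algebra" in the acronym refers to this exponential map; as the manual's own caveat quoted above admits, the flow fields don't strictly form a Lie algebra.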

What's important is that while it sounds like the researchers did something really fancy, what they actually did (though they don't even know it) was say something which isn't true. Because the people who designed SPM8, and therefore DARTEL, decided they liked the acronym even though it isn't accurate, and because the researchers decided not to use the acronym but to describe what they did as if they had actually done some really complicated mathematics instead of following some instructions from a software manual, they said they did something they actually did not do. They did not perform any "diffeomorphic anatomical registration through exponentiated lie algebra", because the program they used isn't, strictly speaking, using a Lie algebra. But it doesn't sound impressive to say "we then used SPM8's DARTEL" when you can say instead "we performed diffeomorphic anatomical registration through exponentiated lie algebra", even if this isn't true.

They lied about using Lie algebras. Irony, or really bad pun?

Next we hear about how they "modulated the transformed images by Jacobian determinants of the deformation field", another scary set of jargon terms. What does this mean? It means that they went to the manual and got to this step: "Select whether or not to use Jacobian modulation" (that's a direct quote from the SPM8 manual). And the "Gaussian kernel" bit? Well, right after the manual gets to the step on selecting Jacobian modulation, it covers this too. Basically, once you are running SPM8 and you choose Jacobian modulation, you have a finite number of options to pick from. They picked one.
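To demystify the "Gaussian kernel of 12 mm full-width half-maximum" step, here is a sketch using SciPy instead of SPM8, so it illustrates the operation rather than reproducing the authors' pipeline; the 1.5 mm voxel size and the random image are hypothetical stand-ins.

```python
# Sketch of smoothing with a 12 mm FWHM Gaussian kernel, done with SciPy
# rather than SPM8 -- an illustration of the operation, not the authors'
# pipeline. Voxel size and image data are hypothetical.
import numpy as np
from scipy.ndimage import gaussian_filter

fwhm_mm = 12.0
voxel_mm = 1.5  # hypothetical isotropic voxel size
sigma_mm = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # ~= fwhm / 2.355
sigma_vox = sigma_mm / voxel_mm

rng = np.random.default_rng(3)
gray_matter = rng.random((64, 64, 64))  # stand-in for a modulated GM image
smoothed = gaussian_filter(gray_matter, sigma=sigma_vox)
```

The only substantive content is the FWHM-to-sigma conversion; everything else is a single library call, which is exactly the point being made here.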

All that complicated description makes them sound very, very adept, but really it means that someone showed them how to use a program and follow a simple set of steps.

That's what their entire analysis was. All those mathematical terms, like Gaussian kernels or multiple regression analysis or support vector machine algorithms and so on, were all just them plugging commands into a computer program designed so that they could do very sophisticated, highly technical analyses without having a clue what any of it meant.

And that's just one aspect of the math part of the problems with the study.
 

LegionOnomaMoi

Veteran Member
Premium Member
Last one (for all our sakes, especially the readers who were kind enough to lend support and thanks rather than openly display hatred, disgust, etc., at my lengthy and overly-detailed posts).

This study is bad even as neuroimaging studies which get into political orientation or religious beliefs or whatever go. That is, there are a lot of studies out there which use fMRI techniques to show relationships between certain groups of people and things like intelligence or a propensity for emotional instability; most are bad, and few actually tell us much that is likely accurate, but this study in particular was poorly done even relative to other bad studies.

One of the biggest problems, which is stressed in every undergrad stats class (not to mention research methods classes), is that CORRELATION DOES NOT EQUAL CAUSATION. Here, this problem manifests as a serious one in the field of social neuroscience: causal inference, or the direction of causation. Let's say they had done their study perfectly (as if that were possible), and that the conservatives really did have more grey matter in these areas. Now we have a well-designed study with accurate results. What can we say?

The first problem is why we are only asking about grey matter. The topic is the emotional regulatory system, which is a sort of interface or interaction between brain structures like the amygdala and certain cortical regions like the ACC or PFC. White matter is basically the connective system. The grey (or gray) matter is neural bodies, while the white matter is the connective tissues or fibers which carry neural signals. That's the simple version, anyway.

Certain neurological disorders/diseases, like multiple sclerosis, are caused by damage to white matter. In other words, it's really, really, really, important. Loss of volume of white matter can matter when loss of grey matter cannot. So there is no reason to look only at grey matter, particularly in this case.

Why in this case? Well, because we're dealing with a problematic brain area with a problematic past. Kanai's first published study was on mice. A lot of his work, including this study, is something of a relic. It is behaviorism, the reigning approach within psychology in the first half of the 20th century. Pavlov's dogs, classical conditioning, fear responses, etc., all come from behaviorist studies.

But here comes the problem. The view within behaviorism was that the "mind" was off-limits. You can't see thoughts, and so you study behavior. The extent of their work with actual brains was doing things like cutting portions of rat/mice brains (like the amygdala) and seeing how that changed the way they behaved (such as their fear responses).

Cognitive science developed in opposition to this "we can't look at the mind" position of behaviorism, and in fact is at its heart the science of the mind. Thanks to neuroimaging and other advances, we now know that a lot of the terms used in earlier psychology, from the different classes of memory to terms in emotional regulation study (e.g., "fear responses") are problematic, as are the explanations of their nature. For example, the amygdala has been known to be involved in emotional regulation for a very, very long time. What we have learned only recently, however, is how much different areas do different things (including things which have little to do with emotional regulation), and how much it is not the amygdala but the interaction between the amygdala and particular cortical structures which matter.

For example, there are a host of studies showing that decreased brain activity and/or volume in the areas the researchers looked at is correlated with mood disorders like depression or various anxiety disorders. One could then conduct the experiment the researchers did, only do so "perfectly", and then conclude that conservatives are less likely to suffer from mood disorders. It would still be wrong, but it is equally valid.

It is impossible, usually, to tell how even structural differences in the brain relate causally to certain traits (personality, cognitive, emotional, etc.). We know that the brain is quite plastic, and that a primary way in which it works is through reorganizing itself through experience. So if anxious people have, in general, less brain volume in the amygdala, is it because this lack of brain matter caused the anxiety, or that over time consistent panic attacks, frequent anxious states, etc., changed brain structure? We don't know.

This study, then, involved one of the least understood neural systems (the cortical-amygdala interaction), and one which is known to be involved in an incredibly diverse number of cognitive and emotional processes. It also was extremely limited in what was looked at, without explaining (for the most part) why. ROIs, or regions of interest, are important because fMRI studies almost never involve (at least primarily) scanning the whole brain. There is too much going on all the time. So if we know certain regions are implicated in, say, fear responses, then we focus on those. But the researchers were even more restrictive. They not only ignored various areas involved in "fear responses" and emotional regulation, but ignored half of what makes up the places they did look at (the white matter, or connecting tissue). A single neuron (gray matter) can connect to ten thousand other neurons (via white matter). The number of neurons is nothing compared to the number of connections between them. Yet these were ignored.

So, if we ignore the fact that they altered their scale, which biased the results; ignore the fact that the statistics they used to correlate their imaging results with political orientation were bogus; ignore the fact that they cited a textbook for one of the mathematical techniques, which effectively hides what they did; ignore the fact that two of the authors were media figures, not researchers, and the other two were not exactly experienced in this area; and simply pretend that the results were representative of conservatives and liberals everywhere, we STILL don't get anything much. It's now an interesting finding, but it is impossible to tell what it means. However, as we can't ignore all the mistakes, we can focus on the fact that Colin Firth was excellent in the TV mini-series Pride and Prejudice. And he's not bad looking either.
 

tytlyf

Not Religious
Last one (for all our sakes, especially the readers who were kind enough to lend support and thanks rather than openly display hatred, disgust, etc., at my lengthy and overly-detailed posts).
I will address your posts tomorrow. I want to read them as I did before, although they are lengthy. Don't be mad 'cause I don't instantly reply.
 