What can brain scans of radicalized jihadists tell us about how they react to what they perceive as attacks on their sacred values? In episode 58, we’re joined by Nafees Hamid from Artis International who talks with us about his article “Neuroimaging ‘will to fight’ for sacred values: an empirical case study with supporters of an Al Qaeda associate,” published on June 12, 2019 in the open-access journal Royal Society Open Science.
Websites and other resources
Patrons of Parsing Science gain exclusive access to bonus clips from all our episodes and can also download mp3s of every individual episode.
Hosts / Producers
Doug Leigh & Ryan Watkins
How to Cite
Leigh, D., Watkins, R., & Hamid, N. (2019, September 17). Parsing Science – The Neuroscience of Terrorism. figshare. https://doi.org/10.6084/m9.figshare.9894509
What’s The Angle? by Shane Ivers
Hamid: They were proud to participate in the study: “Yes! I will be a representative of Lashkar-e-Taiba!” … you know, an Al-Qaeda associate supporter. “You can scan my brain!”, you know.
Leigh: This is Parsing Science. The unpublished stories behind the world’s most compelling science, as told by the researchers themselves. I’m Doug Leigh.
Watkins: And I’m Ryan Watkins. Today, in episode 58 of Parsing Science, we’re joined by Nafees Hamid from Artis International and University College London. He’ll discuss what he learned by measuring the brain activity of supporters of a radical Islamist group as they talked about their willingness to fight and die for their values, and whether they were more or less likely to do so if they believed that their peers did or didn’t feel the same way. Here’s Nafees Hamid.
Hamid: Hello, I’m Nafees Hamid. I grew up in the San Francisco Bay area. Started off my career, actually, as a professional actor. I went to theater conservatory. Was acting, doing various regional theater. Lucky; didn’t need any side jobs. Was kind of making a go of it, but of course nothing famous that I did that your listeners would be able to remember me from. And then I decided I wanted to go live in France for a little while, just because it was always a dream of mine to go live in Paris. So I went to Paris to go teach English for a year. Again, thinking I would apply for grad school or something that year. My plan was to try to do a PhD at either UCLA or USC and then continue acting while doing the PhD. Now I look back and realize that was quite ambitious, but … I get to Paris, fall in love with Paris. Don’t get into any of the PhD programs that I had applied for. Happened to find out about this very interesting master’s program they had in cognitive science called the Cog Master. It’s very interdisciplinary, spanning many different labs, many different universities. And so I applied and I got directly into the second year of their master’s degree, so I was able to do it in one year. And then I decided to do my PhD in Security and Crime Sciences at University College London. And I’m just about to finish that PhD.
Leigh: Nafees and his colleagues do research on the influence of “sacred values” on the decision-making of people who hold the kind of radical beliefs that are consistent with those of terrorist organizations. We began our conversation by asking Nafees how he got interested in this line of research.
Interest in sacred values
Hamid: So in my undergrad, I would say the person who influenced me the most as a researcher was a woman named Patricia Churchland, who was a philosopher. She kind of helped develop this field called neurophilosophy, and was looking at how things that we can discover in neuroscience and psychology might have some practical import in a variety of different philosophical topics, but moral psychology – moral philosophy – was one of them. And then when I got to Paris and was doing my master’s degree, I worked with a guy named Hugo Mercier, who, with Dan Sperber, another anthropologist / cognitive scientist, developed this thing called the “argumentative theory of reasoning,” where they basically said that, evolutionarily, we developed reasoning to win arguments, essentially. Not really to problem-solve. And so I was then doing some research with him, mostly online experiments. A lot of people felt like moral reasoning is just post hoc. That people just use it to sort of justify their opinions and even, like, good arguments don’t change people’s minds when it comes to moral issues. So we were looking to see if that’s the case, and basically what we found is: It’s true that it’s very difficult to change people’s minds, even if you can address every single concern they have, if they have very strong moral commitments; if they have moral convictions. However, if they’re kind of in the middle, if there’s a gray area, if it’s sort of a weak conviction, then arguments actually do have purchase, and they actually are able to persuade people. And so, I kind of was interested in that extreme end of people’s moral decision-making, those moral convictions, and there was a whole literature out there on a category of moral convictions called “sacred values.” And sacred values are like these ideas that we have that are considered protected, you know; people don’t really think about cost-benefit analysis with them.
Some people say they’re absolute; other people just say that you can’t mix them with profane or material values; you simply can’t get this person to think about them in a utilitarian fashion.
Must sacred values be religious?
Watkins: Since people tend to be unwilling to trade off on their sacred values no matter what the benefits of doing so may be, Doug and I were curious whether sacred values must also necessarily be religious in nature.
Hamid: We use the word “sacred value,” but I always try to remind people that even if you’re an atheist you probably have some sacred values as well, and people living in the West have sacred values. They don’t need to be religious. For some people freedom of speech can be a sacred value. I mean, take, for example, the Charlie Hebdo cartoonists, the guys who were killed by jihadists in January 2015 in Paris who were drawing cartoons of the Prophet Muhammad. Their offices had been bombed a few years earlier, but under the banner of freedom of speech they said “We will not stop doing this, even though we know that we’re getting death threats.” And, “We know that they actually bombed our offices.” And, you know, “Maybe our time on this planet may be limited, but we are not going to stop, because we just believe in freedom of speech so deeply.” And of course they were killed for it, eventually. So I always tell people: It’s important that we not just assume that we’re talking about religion here in particular. And when people feel like their sacred values are under threat, you know, they muster the will to fight for them, and in some cases are willing to give their lives – and even the lives of their loved ones – in defense of those sacred values. And not only may how many sacred values a person has, or whether they have any sacred values at all, vary amongst recruits into terrorist groups, but the thing that we do find is that amongst people who are part of even a similar cohort of radicalized people – like in the study that we ran – there was quite a bit of variability in terms of the actual content of sacred values. Sometimes there are one or two issues that are pretty consistently sacred: amongst our population, not doing cartoons of the Prophet Muhammad was a sacred value, I think, for every single person. Not allowing gay marriage was a sacred value for every single person.
And then there were some issues that were pretty commonly not sacred values, like Islamic teaching in schools, for example, in Spain, was not a particularly sacred value for most people. But there was quite a bit of variation from person to person.
How sacred values develop
Leigh: Sacred values are, by definition, deeply held convictions for those who hold them. Nevertheless, people aren’t born with them. Nafees explains how sacred values develop and become such deep-seated convictions.
Hamid: So we don’t really know, from, like, adolescence to adulthood and so forth, how much people change their sacred values in general. But it is true that sacred values are very difficult to change once you really have a real, proper sacred value. I mean, the only thing that we would guess from all of our other work that might lead to a change is if a person is put in a totally different social context, and has a totally different sort of reference group. Totally different friends who share completely different norms, different sacred values amongst them, and this person wants to gain access into that new reference group. Which is exactly how radicalization itself happens. Most people who join groups like ISIS or Al-Qaeda, or even extreme right-wing groups, weren’t raised with those values. Most of them were probably raised with the same, similar values you and I have, which is generally that they believe in – I mean, I’m talking about Western people here – they believe in liberal democracy. They probably believe in pluralism. A lot of them are raised in multicultural kinds of communities. They generally got along with everyone. If you asked them, probably at a young age, do they believe in freedom of speech, and, you know, equality before the law, and free and fair elections, and all of that, they would generally agree with all that. Now, whether those values were sacred for them or not, I don’t know, but that’s generally sort of where their political allegiances would lie. But then it’s through this process of usually one person seeding a totally new set of values into usually a tight-knit network of friends that they start changing their values. And initially, I mean, they don’t just go from probably non-sacred to sacred. It’s gonna be a slow process – an incubation process – of friends spending a lot of time with each other, watching videos, discussing, hanging out on soccer fields, hanging out in the gym, hanging out in parks.
Sometimes the radicalization process, from the period that they first hear about these radical ideas to the time they join … With ISIS it happened pretty quickly. Sometimes it would happen in just a month to a few months. With Al-Qaeda, back in the day, it would usually be a longer process. But again, it’s questionable how much of their values become sacred. I would say that a person who really, genuinely has sacred values is a highly radicalized individual, and it’s gonna be very difficult to deradicalize that person.
Why a control or comparison group wasn’t used
Watkins: In the design of their project, Nafees and his colleagues didn’t make use of a control or a comparison group. Doug and I were interested in hearing what led the team to make this decision.
Hamid: One reason is that we didn’t want to pathologize radicalization. A lot of people sort of think that radicals – people who join extremist groups, or who support them – are somehow just inherently different in terms of their psychology than normal people. But if that were true, we should expect, behaviorally, to see some evidence of that. And, well, if you look at a population level you might be able to say, “Okay, there are more people with psychopathic tendencies who may join terrorist organizations than the base rate of the population.” I think there were some studies that show that generally about 3% of the population are psychopaths, whereas in terrorist organizations it can be as high as 9% or something. Okay, that may be the case. It may be a disproportionate number of them, but it’s still 9%. And, you know, 91%, if that’s the case, are not psychopaths. So we sort of knew, from all the fieldwork, from all the case studies that have been done, that the general profile of people who are radicalized sort of spans the normal distribution of intelligence, of economic backgrounds, social backgrounds, personality types. Even in this study we also did a battery of neuropsychiatric disorder tests. We had a clinician come in and measure them on a variety of different mental disorders. We looked at their IQs; we gave them personality inventories. They were normal. They were normal across the board. So if you don’t really have any basis to say that these people – extremists or supporters of extremist groups – are pathological in some way, then you really shouldn’t be doing some sort of base rate neuroimaging study looking for differences in neural activation, you know, between supporters of extremist groups and the general population.
How supporters of a radical Islamist group were recruited
Leigh: Recruiting subjects for any research project can be one of the most challenging aspects of a study. But recruiting supporters of a radical Islamist group in Barcelona – one which the USA, the European Union, and Russia have all designated as a terrorist organization – would seem nearly impossible. Ryan and I asked Nafees just how he went about doing so.
Hamid: So it’s almost embarrassing now, when I look back on how I was actually recruiting some of these people. I mean, the first thing I did when I got there in Barcelona: I had, like, some – I think it was maybe an Al-Qaeda video or something – on my computer. And so I just open it up in a cafe filled with, like, other Pakistani and Moroccan people. I’m of Pakistani origin – Indian Pakistani origin – myself, so I kind of looked like them. So I just opened up my computer and I just start watching one of these videos thinking, well, maybe if there’s an extremist around me they’ll kind of come and sit next to me and be like, “Hey, what are you watching there, man?” you know, like, “Let’s talk about …” Like, I actually thought that might work. And of course it didn’t work. I’m sure they all thought I was a lunatic, and I’m surprised they didn’t call the police on me. They probably should have. And so that was sort of, you know, the first failed attempt at trying to reach out to people. And then I just started talking to, like, you know, taxicab drivers or people working in restaurants. Because they’d see me and they’d come up and they’d start – especially the Pakistanis – they’d start speaking to me in Urdu, although usually Punjabi. I don’t speak Punjabi, but they’d speak to me in Urdu and I can get by in Urdu, though it’s not great. But it’s enough that I can talk to people. And so I would kind of talk back with them, and then someone would introduce me to someone else. Or I would go to a mosque and talk to an imam. And there were a lot of undocumented workers selling beer, cigarettes, or even weed on the streets. So I would just go and say, “Hey, can I just talk to you for a little bit?” And, yeah, I mean, it’s kind of a mixture of Pakistani … kind of, I guess you could say, working-class culture … people generally like to hang out and talk; they’re not so work-oriented.
But that’s also added onto sort of a Spanish mentality, the “mañana, mañana” kind of attitude, where it’s pretty easy to get someone to stop working. Even to just sit down and talk to you for a couple hours, and then maybe just kind of get up once in a while and do the minimal amount of work they need to do. So it was a lot of that. And some people were very open and eager to talk to us. Other people were very suspicious of us. I mean, by the end I’m sure probably half the Pakistani population of Barcelona thinks I work for the CIA, or they think I work for ISA – the Pakistani intelligence organization; ISI, sorry. So, yeah, those are kind of the experiences. And, yeah, sometimes you’d meet people who, you know, found the questions offensive. And they just couldn’t believe that anybody would ever, you know, say “yes,” because the questions were quite hardcore. I mean, for some of the values, you know, we would be asking, like, “check as many of these as apply for each value.” So maybe the value could be “armed jihad” or “expansion of a caliphate” and you would say “I would be willing to do nothing at all.” Or “I’d be willing to talk to people one-on-one.” Or “I’d be willing to peacefully protest.” Or “I’d be willing to violently protest.” Or “I’d be willing to financially support a militant group.” Or “I would be willing to join a militant group.” Or “I would carry out an attack on my own, outside of a militant group, for this value.” Those are pretty, you know, intense questions, and some people would just look at us like we’re crazy. “Are you seriously asking me this question?” Like, “Who do you think I am?” And other people were like, “Oh, yeah. Totally. Absolutely I would join a militant group. Thanks for putting that option in there.” And so that was interesting. Yeah, I mean, to see sort of that variation.
To really be able to see how the same survey, for a kind of non-radicalized person, just seems like the craziest survey ever, but for a radicalized person it’s just completely tapping into all of their beliefs and values, and they really appreciate that you worded the questions the way you did.
Developing skills for naturalistic interviewing
Watkins: Nafees is also a research fellow with Artis International, a scientific research organization which focuses on the behavioral dynamics that affect conflict. Doug and I were curious to learn if it was there that he developed the skills necessary for engaging strangers in conversation about their radical beliefs, then potentially asking them to volunteer to go into an fMRI scanner while they answered questions about those values.
Hamid: Yeah, well, to be honest … So when we started doing this research, this is right when I first started working for Artis and started working on this research. So I hadn’t really developed my skills as an ethnographer yet. My problem was that I was too used to the laboratory. I was too used to very controlled conditions. And you’ve got to make sure that, you know, you ask the same question in the exact same way every time. However, in the field it’s messy. What you really need to do is to be able to understand the conceptual basis of what you’re doing. Understand what these questions are supposed to be getting at, and then throw the measures out the window if you need to, and just kind of have a very normal conversation with someone and see if you can kind of guide them – in a very normal conversational way – into kind of, you know, prodding and poking and figuring out sort of what their identity is. What their values are. What their threat perception is. What their grievances are. And that’s kind of an art form that’s difficult if you spend so much time studying sort of, you know, experimental psychology in the laboratory. You’re just afraid of being too messy. Whereas in the ethnographic phase of any study, I mean, yes: by the end you want to develop a survey in which every question will be asked in the exact same way. But the only way you can create that survey is that you’ve got to first just talk to people in a normal way, so that you can actually adjust all the questions in each of your inventories. So as soon as you ask it, the person gets what you’re saying without even thinking about it, and it makes immediate sense, and it’s intuitive. So the only way you can actually adapt a survey is to first go into this very long, messy process of just going out and talking to people in a way that they want to be talked to, in the way that they talk, and figuring out what words they use. What their vernacular is.
So that was one part that I was … I had to kind of learn to do.
Reasons subjects chose to participate
Leigh: While Spain has abolished the death penalty, if Nafees’ participants were to actually carry out the jihadist acts that they advocated, they almost certainly would be jailed for life. Or depending on where their acts of terrorism were committed, they could be extradited to a country that does practice capital punishment. So Ryan and I couldn’t help but wonder why they agreed to participate in the study in the first place.
Hamid: I think there’s a few different reasons for why they did it. I think first and foremost that people who do hold these kinds of more extremist beliefs, they want to be understood. I’ve talked to members of terrorist organizations as well, who were at the time members of ISIS and Al-Qaeda. And I went to, you know, another country and secretly met with them. And they had read some of my research, and that’s why they agreed to meet with me: Because they knew I wasn’t a journalist, and they felt like the work I was doing was fair. And so I think, first of all, when people talk to you, and they can see the questions on your survey, they see this is serious. “These people actually, genuinely want to understand who I am and what I believe. And they’re not trying to do some gerrymandered study that’s, you know, trying to make me look like a lunatic or something.” I think the thing is most extremists feel that they have not been given a fair depiction in the media, and so when they hear that someone’s really going to do, you know, ethnographic interviews, survey studies, a social neuroscience experiment, they’re like, “Holy crap! You really want to actually, objectively understand us.” And I think that’s particularly appealing. I think the neuroscience part was exciting and interesting for them too, because they would ask us questions like, “What, you want to scan our brains? You think there’s something wrong with our brains?” and we had to explain to them, “No, we don’t think it’s pathological. We’re gonna do, you know, brain scan studies with other groups as well.” We did another study with sort of younger, slightly less radicalized Moroccans. We were going to do one on Catalan independentists as well. So we would kind of tell them about the overall project and the different populations we were working with, and they kind of became curious. They kind of became a little nerdy, like scientists, you know?
They were like, “Wait, so what parts of the brain light up when this happens? When that happens?” And so I think there was sort of a seriousness to the work that they responded to. I think there was a curiosity of their own. I think it also somehow … it made them feel important, you know? Because, if you’re a member of one of these groups, you’re saying you’re willing to do all these things, and you have this propensity – but there’s no avenue for you to actually participate in any of these activities – well, this becomes one way, sort of, of, I guess, “helping the cause,” right? Because you’re contributing to a scientific and better understanding of who these people are, essentially. So I think it gave them some sense of significance as well. I mean, they were proud to participate in the study. “Yes! I will be a representative of Lashkar-e-Taiba!” … you know, an Al-Qaeda associate supporter. “You can scan my brain!”, you know.
Regions of brain activity in fMRI scans
Watkins: Nafees and his colleagues concentrated their analysis on activity in several regions of the brain. We asked him to describe where these regions are located, as well as what makes them relevant to learning why people may have the will to fight for their sacred values.
Hamid: So, the dorsolateral prefrontal cortex is … frontal cortex … it’s in front, but it’s on the side. And the ventromedial prefrontal cortex is basically kind of in the middle. You can think of it as sort of the area between your eyebrows and a little bit higher than that, basically; so it’s in the middle, and sort of back and to the sides. And so there’s a lot of research that has been done in moral psychology on the relationship between the ventromedial prefrontal cortex and the dorsolateral prefrontal cortex. There’s been work done on identities for the vmPFC, and work on sort of subjective value computation for the vmPFC. Whereas the DLPFC is sort of more executive decision-making. It’s been involved in things like impulse control, self-control, executive functions, deliberative reasoning. So there’s oftentimes when people look at sort of … they’re called “dual process models” of a variety of activities, from moral decision-making to general reasoning, to even very mundane economic decisions: “Do I want to eat that chocolate cake, or do I not want to eat that chocolate cake? On the one hand I want it,” which might activate something like the vmPFC, with its subjective value. “Ah, that looks good, but that’s gonna be a lot of calories and that’s not good for my diet. And I said that I wouldn’t do it.” So you might get more activation in the DLPFC there; more deliberative reasoning. And so oftentimes in most of our ordinary decisions you have both of these areas active; they kind of work in tandem together, the vmPFC and DLPFC. So you have your wants, and then, sort of, your wants are being regulated by these decision control mechanisms. And so sacred values are kind of interesting because they’re very emotionally laden values. And it seems to be very difficult to negotiate with or to reason with someone who holds a sacred value.
And so we were kind of interested in looking at what’s going on in this part of the brain when it comes to this particular category of values.
The study’s findings
Leigh: Of the 45 people that Nafees and his colleagues recruited into the ethnographic study, 30 agreed to participate in the fMRI study. In it, the team recorded the brain activity of participants as they indicated their willingness to fight and die for their values, and as they reacted to their peers’ ratings for the same values. Nafees shares what he and his team found, after this short break.
Leigh: Here again is Nafees Hamid.
Hamid: We generally found that when people had high willingness to fight and die for their values, regardless of whether the value was sacred or non-sacred, there was a disconnect. Suggesting that when people are really willing to fight and die for a value, that part of the brain – let’s say the part that’s associated with deliberative reasoning – is not regulating their decisions, essentially. However, when they had lower willingness to fight and die, then you saw the two areas of the brain kind of working in tandem again, like they do for most people for most decisions. So lower willingness to fight and die means that you kind of bring these two areas of the brain back into coordination together. Now parts of the brain that are associated with deliberative reasoning are working in tandem with parts of the brain that are associated with subjective value. And now you have a person that can be talked to. Now you have a person that can be negotiated with. Now you have a person that can be persuaded. Now you have a person that’s not just in a sort of “close my eyes and act” sort of mode. Which is important for disengagement, de-radicalization, negotiation, and so forth. So the question was: are these differences in activation that we’re seeing the result of sort of more of an automated response? Things like: is it just that they’re more familiar with it? Is it just that it’s a more emotionally laden response? Is it just that it’s more salient; they just sort of are aware of the issue more, and they know exactly how they want to respond to it? So how do we know that that’s not what’s explaining the differences in their neural activation? And that’s why we actually went back to the participants and we had them fill out – for each value – how familiar the issue was, how salient it was, and sort of how emotionally intense it was. It was an inventory; we added up emotions – I don’t remember exactly what it was – for each single issue.
And then we were able to regress out emotional intensity, familiarity, and saliency. And yet we still saw the same activation difference. That means it couldn’t be explained by any one of those other things: those are not potential confounds for the differences that we found there. It just sort of adds more evidence that the contrast that we found was the result of sacredness and not some other potentially confounding variable.
Conforming to community standards
Watkins: In addition to using neuroimaging to monitor the brain activity of participants as they argued for their sacred values, the team was also interested in knowing if they could change those beliefs. So we asked Nafees to tell us how they tested the plausibility of doing so.
Hamid: So we know that people didn’t really change their opinion of their sacred values; the content of their sacred values. But maybe when it comes to their actions – their willingness to fight and die – that might conform. That might get a little bit more malleable. And so they had this scale where they were saying, on a scale of one to seven, how much they would be willing to fight and die – you know, one being nothing, seven being, you know, martyrdom, basically: fight and die, kill yourself, kill other people. But because they were radicalized people, they generally had pretty high willingness to fight and die. And so what we wanted to do as an experimental manipulation was to give them false consensus feedback, where they see the value – they see the rating that they gave for that value – they press a button, and then they see the general Pakistani community in Barcelona’s response to that value – which we made up; we told them during the debrief that we made it up. And so what we wanted was, one third of the time, for it to be lower than what they said in terms of willingness to fight and die, one third of the time the same, and one third of the time higher. But the thing is, because they were so high in their willingness to fight and die to begin with, we got ceiling effects in people’s responses, such that we just couldn’t see whether they conformed – whether they were more willing to fight and die – if their peers were more willing to fight and die. So we really had to look at comparing when their peers said that they were as willing to fight and die versus when they had lower willingness to fight and die. And then they got out of the scanner, and then we asked them to reevaluate their willingness to fight and die on each issue. And we saw that people conformed. There was a decrease in willingness to fight and die when they received that false consensus feedback about their peers being less willing to fight and die than they were.
So they did conform. What was interesting was that conformity – that decrease in stated willingness to fight and die – was correlated with increased activity in the DLPFC, the part of the brain that’s generally decreased in activation during sacred value processing. So when they’re sitting in the scanner and they’re seeing that their peers are saying that they’re less willing to fight and die than they are, we’re seeing more DLPFC activation in that area.
Leigh: If you learned that members of your community didn’t endorse the same sacred values you did, you’d probably be pretty angry about it. You might even become more recalcitrant in your beliefs. And so, after completing the tasks in the scanner, Nafees and his team were interested in learning if this was so for the study’s participants as well.
Hamid: If your own community is saying that they’re not as willing to fight and die for their values as you are – especially for your sacred values – one would predict that people are not going to be so happy about this. So, when they got out of the scanner, we did have, like, a whole post-scan inventory where we were asking them – we had, like, a moral outrage scale that’s been used a lot; it’s a conglomerate of contempt, anger, and disgust. And so we would show them, “This is the rating you gave for this value in terms of your willingness to fight and die. And this is the rating the general population gave” – again, made up. And we would kind of ask them, “How does it make you feel?” for each one of these things. And so I knew that when their peers were less willing to fight and die, especially for the sacred values, they were going to be more morally outraged. And they probably wouldn’t be at all when people agreed with them. And so, of course, we did find that people were outraged that people disagreed with them, and yet they still conformed, nonetheless. So we did see this correlation, really, basically, when people had higher self-reported moral outrage. But the truth is we don’t really know exactly what that means. I think that’s just gonna be another question for research. Okay, so you have increased insula activation that’s correlated with high moral outrage, and yet despite the moral outrage, people still conform: They’re angry that other people disagree with them about what they’re saying, and yet they still lower their willingness to fight and die. Because – one can imagine, for example – if you’re thinking of this in practical terms, you realize, “Okay, so influencing people’s perception of what their peers think might actually lower their tendency towards violence.” But there’s probably so many different ways in which you can deliver that message: in person, online, through satire, through unconscious or conscious means.
The question is maybe some means might cause more moral outrage than other means. And what we're saying is that, "Well, however much moral outrage that may cause, people are still gonna lower their willingness to fight and die, so you should still try to do that." But the question that I have is if there is a way to avoid the moral outrage altogether, would that lead to an even stronger effect size? And I don't have the answer to that question yet. I think it's an area for future research.
Watkins: Doug and I found it striking that the people in this study were able to be outraged that their peers disagreed with them, while simultaneously decreasing their willingness to fight and die for their cause. We asked Nafees to discuss what he believes the implications of this finding are.
Hamid: I guess what also is so fascinating about that result is it shows you just how simple little minor changes in terms of the circuitry of the brain can lead to these huge behavioral differences. You just have this lowered activation in the dorsolateral prefrontal cortex, right? This sort of decorrelation that all of a sudden happens between these two areas of the brain that normally work in concert. And you have the difference between a non-sacred value – a value you're willing to think about in cost-benefit terms. A value you could negotiate over and persuade someone with. A value you could probably even buy off, you know? You can even actually get them to trade off this value for other values. So, a set of values that makes this person a totally rational, reasonable person you can have a civilized conversation with. You can mitigate conflict with. Versus this one part of the brain deactivating, and now all of a sudden you have sacred values. And now this person is just completely impenetrable in terms of being bought off: material incentives, cost-benefit analysis. This is now a devoted actor: a person who is just totally stubborn about these issues and is just not willing to budge on these issues at all. And now all of a sudden you have potentially intractable conflict. That's kind of fascinating to me.
Applications to policy and practice
Leigh: We wrapped up our conversation by asking Nafees what application the team’s findings may have on future policy or practice aimed at deradicalization.
Hamid: You know, this can actually be used in a variety of ways in terms of policy implications for how you might try to disengage people from violence. From how you might use social media or counter-messaging. So counter-messaging is this kind of tool that's used, which is like you put up videos on YouTube or Facebook or whatever, and you try to get people to essentially change their beliefs, or change their minds, about what they're doing, by trying to show them videos about how bad ISIS or Al-Qaeda is, or how bad these extreme right-wing groups are: "They say they're defending Muslims, but really they're killing Muslims." First of all, they're usually coming from totally non-credible sources, right? I mean, it's one thing to get a message from your friends or from a violent anti-establishment group that you're kind of flirting with, in terms of, maybe, your interest in them. It's a whole 'nother thing to get a message from, you know, the French government saying "These guys are bad." I mean, if you have anti-establishment tendencies, a message from the establishment is probably not going to have much sway on your thinking. But then on top of that, if you're trying to get people to deliberate – but when it comes to their sacred values, this part of the brain is deactivated – so, first of all, you probably shouldn't even be wasting your time trying to do that. But here's something you can do with messaging: change people's perception of what they think the social norms are. Make it very clear and targeted that in this local community your peers, your friends, your family members: they don't agree with you. So you probably can do that with some sort of civil society organizations, local groups, putting up messages that amplify the voices in that community showing that in fact people do not endorse these values. That could change someone's perception of social norms, which could lower their overall propensity to fight and die for these values.
But also if you can even go offline – which would be even better – and if you are trying to disengage someone who maybe is going down a dangerous pathway: most of these people have lots of friends and family members who are not themselves at all radicalized. And the last thing they want is for their loved one to go off and kill themselves in Syria or, even worse – kill themselves locally in a terrorist attack. You can recruit them. You know, it's the friends who radicalize people, but it's also the non-radical friends who can disengage them as well. So it points to just how important community and friends and loved ones are when it comes to both the radicalization process as well as the de-radicalization and disengagement process.
Links to article, bonus audio and other materials
Watkins: That was Nafees Hamid discussing his article “Neuroimaging ‘will to fight’ for sacred values: An empirical case study with supporters of an Al-Qaeda associate,” which he published, along with 10 other researchers, in the open-access journal Royal Society Open Science on June 12, 2019. You’ll find a link to their paper at parsingscience.org/e58, along with bonus audio and other materials we discussed during the episode.
Leigh: Reviewing Parsing Science on Apple Podcasts is a great way to help others discover our show. If you’re up for doing so, head over to parsingscience.org/review to learn how. Or if you have a comment or suggestion for future topics or guests, visit us at parsingscience.org/suggest. You can also leave us a voice message toll free at 1-866-XPLORIT. That’s 1-866-975-6748.
Preview of next episode
Watkins: Next time, in episode 59 of Parsing Science, we’ll be joined by Brooke Macnamara from Case Western Reserve University. She’ll discuss her attempt to replicate a seminal study from the 1990s which led to the mantra popularized by Malcolm Gladwell that 10,000 hours of deliberate practice is necessary to become an expert performer.
Macnamara: The violinists rated “practice alone” as more relevant to improvement on the violin than teacher-designed practice, which I thought was interesting. And there was a significant difference in that, but there was not a significant difference in the amount of variance explained by either practice alone or teacher-designed practice. In both cases it was roughly 25% of the variance.
Watkins: We hope that you will join us again.