Terrorism
Lack of evidence-based terrorism research hobbles counterterrorism strategies
The Global Terrorism Database at the University of Maryland estimates that groups connected with al-Qaeda and the Islamic State committed almost 200 attacks per year between 2007 and 2010. That number rose to about 600 attacks in 2013. As terrorism becomes more prevalent, the study of terrorism has also increased, which in theory should lead to more effective antiterrorism policies, and thus to less terrorism.
The opposite is happening, however, and this could be partly due to the sort of studies being conducted, according to Anthony Biglan of the Oregon Research Institute. Writing in the New York Times, he notes that a 2008 review of terrorism literature in the journal Psicothema found that only 3 percent of peer-reviewed articles appeared to be rooted in empirical analysis, and that in general there was an "almost complete absence of evaluation research" concerning antiterrorism strategies.
Using the techniques of prevention science, researchers studying terrorism could identify key risk factors, develop interventions to modify those risk factors, and test those interventions through randomized trials. Scientists have used this methodology to identify interventions that effectively prevent problems including antisocial behavior, depression, schizophrenia, cigarette smoking, alcohol and drug abuse, academic failure, teenage pregnancy, marital discord, and poverty.
Jon Baron, head of the Coalition for Evidence-Based Policy, which supports the use of randomized trials to evaluate government programs, says he has been able to identify only two experimental evaluations of antiterrorism strategies. The first, a field experiment reported in a 2012 World Bank paper, randomly assigned 500 Afghan villages to receive a development aid program either in 2007 or after 2011. The program had positive effects on economic outcomes, villagers' attitudes toward the government, and villagers' perceptions of security. Researchers also reported a reduction in the number of security incidents, though that effect was not sustained after the program ended and was evident only in villages that were relatively secure before the program began. Overall, the study found that the aid program had only a limited effect in reducing insurgent violence.
The second study, published in 2014 in the Economic Journal, details an experiment in which researchers randomly assigned neighborhoods in Nigeria to receive, or not receive, a campaign to reduce pre-election violence. The study found that the campaign, which made use of town meetings and print materials, increased voter turnout and residents' sense of empowerment to counteract violence, and reduced both perceptions of violence and its intensity.
Researchers should extend these same evaluation techniques to U.S. government programs meant to counter radicalization. The Justice Department's National Institute of Justice has spent over $9 million on a program, launched in 2012, to study domestic radicalization. Biglan notes in the Times piece, however, that none of the projects funded to date adequately evaluates a strategy to prevent radicalization.
One project, for example, is an effort to increase awareness of risk factors for radicalization, and of community-based responses to them, among members of a U.S. Muslim community. The project's impact will be assessed by comparing outcomes for those who never participate, those who participate once, and those who participate multiple times. If the project finds that those who participated multiple times were less radicalized than those who never participated, it would be tempting to conclude that the program is effective.
Biglan writes, however, that the lessons of evaluation research show that such a difference could just as easily arise if people who were already less prone to radicalization were more likely to participate in the program. Randomly assigning some people to participate in the program and others not to is a more rigorous way to test the program's impact: the results are then unlikely to be influenced by pre-existing differences between the two groups.
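The self-selection problem Biglan describes can be seen in a toy simulation (a hypothetical sketch with made-up numbers, not data from any study mentioned here): a program with zero real effect still looks strongly effective when the least-at-risk people are the ones who choose to participate, while random assignment recovers the true null result.

```python
import math
import random
import statistics

random.seed(42)

# Toy population (all numbers are illustrative). The "program" has NO true
# effect, so any apparent benefit must come from how participants were chosen.
N = 20_000
risk = [random.gauss(0.0, 1.0) for _ in range(N)]  # latent radicalization risk


def apparent_effect(participated):
    """Naive estimate: mean risk of non-participants minus participants."""
    treated = [r for r, p in zip(risk, participated) if p]
    control = [r for r, p in zip(risk, participated) if not p]
    return statistics.mean(control) - statistics.mean(treated)


# Self-selection: lower-risk people are more likely to opt in
# (participation probability is a logistic function of risk).
self_selected = [random.random() < 1 / (1 + math.exp(r)) for r in risk]

# Randomization: participation is a coin flip, independent of risk.
randomized = [random.random() < 0.5 for _ in risk]

biased_effect = apparent_effect(self_selected)    # large and positive
randomized_effect = apparent_effect(randomized)   # close to zero

print(f"apparent effect under self-selection: {biased_effect:+.2f}")
print(f"apparent effect under randomization:  {randomized_effect:+.2f}")
```

Because a coin flip is independent of each person's latent risk, randomization balances the two groups on every pre-existing characteristic on average, which is exactly why Baron's coalition presses for randomized trials.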
— Read more in Cynthia Lum et al., “Is Counter-terrorism Policy Evidence-based? What Works, What Harms, and What is Unknown,” Psicothema 20, no. 1 (2008): 35-42; Andrew Beath et al., Winning Hearts and Minds through Development? Evidence from a Field Experiment in Afghanistan (World Bank, July 2012); and Paul Collier and Pedro C. Vicente, “Votes and Violence: Evidence from a Field Experiment in Nigeria,” Economic Journal 124, no. 574 (February 2014): 327-55 (DOI: 10.1111/ecoj.12109)