Making Sense of the Increasingly Mixed Research on Body-Worn Cameras (BWCs)

Janne E. Gaub, PhD, BWC TTA Subject Matter Expert, Michael D. White, PhD, BWC TTA Co-Director

 

Over the past five years, the number of research studies on BWCs has exploded, from just five in 2014 to nearly 120 as of December 2019. The studies address numerous outcomes, including use of force and citizen complaints, officer and citizen perceptions, court outcomes, and officer activity measures (e.g., arrests and self-initiated calls). Some utilize “gold standard” randomized controlled trials (RCTs), whereas others use less rigorous methods. The early studies on BWCs were almost universally positive (e.g., studies of police departments in Rialto (CA), Mesa (AZ), Phoenix (AZ), and Orlando (FL)). Over time, however, the findings have become more mixed. Some studies examine the same outcomes but produce opposite findings: some show that BWCs reduce complaints and use of force, while others show no impact. How can we make sense of this growing—and sometimes conflicting—body of research? We recently published an article in the American Journal of Criminal Justice in which we delve into this question. Below we summarize the main points from our article.

Five summaries (published between 2016 and 2019) attempt to bring together this disparate research (Cubitt, Lesic, Myers, & Corry, 2016; Lum, Stoltz, Koper, & Scherer, 2019; Malm, 2019; Maskaly, Donner, Jennings, Ariel, & Sutherland, 2017; White, Gaub, & Padilla, 2019a, 2019b), but even these summaries draw different conclusions. For example, Maskaly et al. (2017) conclude: “The evidence from BWC evaluations suggests that the use of BWCs can have benefits across a number of citizen and officer outcomes” (p. 672). Lum et al. (2019) published the most comprehensive review of BWC research to date, and their ultimate assessment is rather negative: “Although officers and citizens are generally supportive of BWC use, BWCs have not had statistically significant or consistent effects on most measures of officer and citizen behavior or citizens’ views of police” (p. 93).

Why is consensus lacking, even among those tasked with summarizing or assessing the state of the literature? And perhaps more importantly, why are the findings so different from study to study? We believe the answers are tied to three important factors: the department and community’s starting point (i.e., local context), the manner in which the department implements its BWC program, and methodological variation across studies.

The State of the Department and Local Context

Agencies that deploy BWCs do not all start from the same place. Some begin in a state of upheaval or in response to a controversial incident, whereas others acquire BWCs as part of a general effort to remain professional. These differences in starting point significantly affect outcomes, as well as how those outcomes are interpreted. For example, the Rialto Police Department (RPD) experienced large declines in use of force and citizen complaints after deploying BWCs. But the local story matters. In the years preceding BWC deployment, RPD experienced a series of scandals, and the city council actually attempted to disband the department. RPD was at a low point when the city hired a new chief, who immediately set about reforming the troubled agency; this reform included the use of BWCs. The magnitude of the reductions in use of force and complaints after BWC deployment reflected, at least in part, the poor state of the agency prior to Chief Tony Farrar’s arrival.

On the other hand, departments such as the Washington, DC, Metropolitan Police Department and the Los Angeles Police Department deployed BWCs after successfully navigating a decade or more of federal oversight through a US Department of Justice consent decree. Those departments were at a high point when BWCs were rolled out because federal oversight likely helped address their organizational deficiencies—as evidenced by their successful completion of the consent decree process. Perhaps not surprisingly, both departments saw little change in use of force and complaints after BWC deployment. Bottom line: police departments have different starting points when BWCs are deployed, and those starting points must be considered when examining the impact of BWCs.

BWC Implementation

Similarly, the local story of how and why a department decided to implement BWCs can significantly affect outcomes. Some departments were forced to acquire BWCs (by community demands or local political bodies), while others obtained them as a response to a high-profile event or series of events. Still others made a deliberate choice to implement a BWC program—and did so methodically and thoroughly. How the department rolled out BWCs can also influence officers’ reaction to the cameras. Were they involved in the adoption process? Did the leadership hear their concerns? Was the union consulted on key policy issues? The answers to these questions can help explain problems with implementation, such as low activation rates and vast amounts of untagged videos. The extent to which a department follows best practices in program planning and implementation can affect such things as the degree of resistance from officers, activation and policy compliance rates, and use of footage by courts and other criminal justice actors. In plain terms, how can we possibly expect BWCs to have any impact if they are not used as intended?

Methodological Variation

A host of differences in research methodology can affect a study’s findings, as well as the degree of confidence we should place in those findings. Below we describe three of the most important.

Rigor

BWC studies have varied widely in terms of methodological rigor. Sherman et al. (1998) developed the Maryland Scale of Scientific Methods (MSSM) to assess rigor. The scale ranges from 1 to 5, with a Level 5 study (an RCT) being the strongest. Most BWC research rates a 3, 4, or 5 on the MSSM, which is encouraging. That said, results from Level 3 studies are not as strong as those from Level 5 studies. Police departments and researchers should seek to employ the most rigorous research designs possible.

Randomization and Contamination

Even the most rigorous studies can have methodological issues, especially regarding randomization and contamination. First, what gets randomized? Are officers randomly assigned to wear BWCs or not? If yes, some officers will have BWCs and others won’t. Or are work shifts randomized? If so, the same officer will sometimes have a BWC and sometimes won’t. Researchers disagree about which approach is better.

The randomization question is important because it affects the degree of contamination, which occurs when non-BWC officers are exposed to a BWC (most commonly when BWC and non-BWC officers respond to the same call). Contamination violates the foundational principle of an RCT. Think about a medical RCT testing a new drug. Patients are randomly assigned to receive the new drug or a placebo (control group). Every time a control-group patient gets the real drug instead of the placebo, the contamination limits the ability to isolate the effect of the new drug. This, of course, should not happen in a medical trial because the study takes place in a controlled setting. BWC studies are certainly not carried out in a controlled setting—quite the opposite, in fact! Still, the principles of randomization are the same. Contamination is often not measured in BWC studies, and when it is, the rates vary considerably (from 20 percent to 100 percent). The greater the contamination, the less confidence we should have in the study findings.
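To make this concrete, below is a minimal simulation sketch of an officer-randomized BWC trial. It is purely illustrative and not drawn from our article or any real study: the complaint rates, the size of the BWC effect, and the contamination levels are all hypothetical assumptions, chosen only to show how contamination shrinks the effect a researcher would observe.

```python
# Hypothetical simulation of an officer-randomized BWC trial.
# All parameters below are illustrative assumptions, not findings from any study.
import random

random.seed(42)

BASELINE_COMPLAINT_RATE = 0.10  # chance a call generates a complaint when no BWC is present
TRUE_BWC_EFFECT = 0.04          # assumed true reduction in that chance when a BWC is present
N_OFFICERS = 200                # half randomized to wear BWCs, half to the control group
CALLS_PER_OFFICER = 200


def observed_effect(contamination_rate: float) -> float:
    """Difference in complaint rates (control minus BWC) a researcher would measure."""
    complaints = {"bwc": 0, "control": 0}
    calls = {"bwc": 0, "control": 0}
    for i in range(N_OFFICERS):
        group = "bwc" if i < N_OFFICERS // 2 else "control"
        for _ in range(CALLS_PER_OFFICER):
            # Contamination: a camera ends up present on a control officer's call anyway,
            # e.g., because a BWC-wearing officer responds to the same call.
            camera_present = group == "bwc" or random.random() < contamination_rate
            complaint_prob = BASELINE_COMPLAINT_RATE - (TRUE_BWC_EFFECT if camera_present else 0.0)
            complaints[group] += random.random() < complaint_prob
            calls[group] += 1
    return complaints["control"] / calls["control"] - complaints["bwc"] / calls["bwc"]


for c in (0.0, 0.2, 0.5, 1.0):
    print(f"contamination {c:.0%}: observed BWC effect = {observed_effect(c):.3f}")
```

With no contamination, the observed gap hovers near the assumed true effect; as contamination approaches 100 percent, the gap shrinks toward zero even though the cameras are still “working.” That is why heavily contaminated studies can report no effect when one actually exists.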

Defining Outcomes

Finally, sometimes two studies examine the same outcome but measure it differently. Consider use of force, a common outcome in BWC studies. Pointing (but not discharging) a firearm or deploying a canine that does not result in a bite may be a reportable use of force in one jurisdiction but not in another. Similar problems arise with other common outcomes such as complaints and injuries. These differences make it difficult to compare BWC studies. The adoption of common metrics for outcomes in BWC studies is the ideal, but may not be realistic. After all, is it fair for a BWC researcher to ask a chief to change his or her use of force reporting to match processes used in other departments? The best a chief can do is follow best practices for training, policy, and practice, applied to the local needs of his or her specific agency.
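A small illustration of the measurement problem follows. The incident counts and the two agencies’ reporting rules are hypothetical assumptions, not data from any jurisdiction; the point is simply that identical underlying events can yield very different use of force totals depending on what a policy defines as reportable.

```python
# Hypothetical example: identical incidents, different reporting definitions.
# Each incident is tagged with the most serious action the officer took.
incidents = (
    ["firearm_pointed"] * 12
    + ["canine_deployed_no_bite"] * 5
    + ["canine_bite"] * 2
    + ["empty_hand_strike"] * 20
)

# Assumed reporting policies: which actions count as a reportable use of force.
AGENCY_A_REPORTABLE = {"firearm_pointed", "canine_deployed_no_bite",
                       "canine_bite", "empty_hand_strike"}
AGENCY_B_REPORTABLE = {"canine_bite", "empty_hand_strike"}  # excludes pointing and no-bite deployments

count_a = sum(action in AGENCY_A_REPORTABLE for action in incidents)
count_b = sum(action in AGENCY_B_REPORTABLE for action in incidents)

print(f"Agency A reportable uses of force: {count_a}")  # 39
print(f"Agency B reportable uses of force: {count_b}")  # 22
```

A BWC evaluation in Agency A and one in Agency B would start from very different baselines, even before any camera effect enters the picture.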

Final Thoughts

Academics, practitioners, and the public view BWC research through different lenses, but we all seek to answer two basic questions: Do BWCs “work”? And are they worth the cost? Those are simple questions with very complex answers. As the research on BWCs grows, we believe the findings will become increasingly mixed, and this is okay. As Malm (2019) notes:

 

There are no absolutes…. Medications approved by the US Federal Drug Administration (FDA) rarely cure everyone afflicted with a disease, and many medications produce a host of side effects. Programming in criminal justice is no different. Even programs that are considered to have a robust evidence base, such as hot-spots policing, are still vulnerable to implementation issues, dosage concerns, and vagaries of reporting mechanism. We should not expect any difference with police BWC studies.

 

The challenge for police practitioners, policy-makers, and researchers centers on interpreting those mixed findings and understanding which studies to trust (and which to be skeptical about). This overview provides three insights in that regard. First, local context matters. What was the state of the department pre-deployment? Did the department adopt BWCs in response to a scandal or controversial incident? Or did it adopt BWCs as part of a larger professionalization effort? Second, implementation matters. Did the department engage in a thoughtful, deliberate planning process? Did it engage with relevant stakeholders, both internal and external? Or were planning and implementation rushed? Researchers should focus on these critical issues because they provide the necessary lens for interpreting research findings. Last, the degree of confidence we place in a study’s results must be shaped by the rigor of the study. We should have little confidence in studies rated Level 1–2, healthy skepticism about Level 3 studies, and confidence in Level 4 and 5 studies.

 

For more detail, see our recently published article.

 

References

Cubitt, T. I. C., Lesic, R., Myers, G. L., & Corry, R. (2016). Body-worn video: A systematic review of literature. Australian and New Zealand Journal of Criminology, 50(3), 379–396. https://doi.org/10.1177/0004865816638909

Lum, C., Stoltz, M., Koper, C. S., & Scherer, J. A. (2019). Research on body-worn cameras: What we know, what we need to know. Criminology & Public Policy, 18(1), 93–118. https://doi.org/10.1111/1745-9133.12412

Malm, A. (2019). The promise of police body-worn cameras. Criminology & Public Policy, 18(1), 119–130. https://doi.org/10.1111/1745-9133.12420

Maskaly, J., Donner, C., Jennings, W. G., Ariel, B., & Sutherland, A. (2017). The effects of body-worn cameras (BWCs) on police and citizen outcomes: A state-of-the-art review. Policing: An International Journal of Police Strategies & Management, 40(4), 672–688.

Sherman, L. W., Gottfredson, D. C., MacKenzie, D. L., Eck, J., Reuter, P., & Bushway, S. D. (1998). Preventing crime: What works, what doesn’t, what’s promising. Washington, DC: US Department of Justice.

White, M. D., Gaub, J. E., & Padilla, K. E. (2019a). Impact of BWCs on citizen complaints: Directory of outcomes. Retrieved June 19, 2018, from Bureau of Justice Assistance Body-Worn Camera Training and Technical Assistance website: http://www.bwctta.com/resources/bwc-resources/impact-bwcs-citizen-complaints-directory-outcomes

White, M. D., Gaub, J. E., & Padilla, K. E. (2019b). Impacts of BWCs on use of force: Directory of outcomes. Retrieved June 18, 2018, from Bureau of Justice Assistance Body-Worn Camera Training and Technical Assistance website: http://www.bwctta.com/resources/bwc-resources/impacts-bwcs-use-force-directory-outcomes

 

Author Bios

Janne E. Gaub, PhD, is an assistant professor in the Department of Criminal Justice and Criminology at the University of North Carolina at Charlotte, and she is a BWC subject matter expert for Training and Technical Assistance for the US Department of Justice Body-Worn Camera Policy and Implementation Program. She received her PhD in criminology and criminal justice from Arizona State University in 2015.

Michael D. White, PhD, is a professor in the School of Criminology and Criminal Justice at Arizona State University, and he is the associate director of ASU’s Center for Violence Prevention and Community Safety. Dr. White is co-director of Training and Technical Assistance for the US Department of Justice Body-Worn Camera Policy and Implementation Program. He received his PhD in criminal justice from Temple University in 1999. Prior to entering academia, Dr. White worked as a deputy sheriff in Pennsylvania.


This project was supported by Grant No. 2015-DE-BX-K002 awarded by the Bureau of Justice Assistance. The Bureau of Justice Assistance is a component of the Department of Justice's Office of Justice Programs, which also includes the Bureau of Justice Statistics, the National Institute of Justice, the Office of Juvenile Justice and Delinquency Prevention, the Office for Victims of Crime, and the SMART Office. Points of view or opinions in this document are those of the authors and do not necessarily represent the official position or policies of the U.S. Department of Justice.