Massachusetts Chiefs of Police Association

PRESS RELEASE

FOR IMMEDIATE RELEASE
May 4, 2004

Contact: Chief Richard A. Marchese, Ret.
Executive Director
(508) 842-2935

RACIAL PROFILING DATA COLLECTION

Police officers in Massachusetts do a remarkable job at protecting our citizens, addressing crime, and enforcing the law in an even-handed manner. Putting their lives on the line daily, police officers deserve our praise and thanks. Anyone who thinks that the police in this state have a practice of stopping motorists on the highway based on their race or skin color is sadly misinformed.


The elimination of all forms of race-based enforcement is a commitment shared by the police chiefs and virtually all police officers across this state. Unfortunately, the single greatest threat to progress toward “color blind policing” is faulty data collection and analysis that erroneously seeks to label certain departments as likely to be engaging in racial profiling. The time, energy, resources and funds expended over the past three years on data collection and analysis have diverted attention from, and hindered, efforts to address the underlying causes of real or perceived biased policing.

The statewide data compiled by Northeastern University’s Institute on Race and Justice (IRJ) confirms that, overall, Massachusetts police departments did a remarkable job of enforcing the traffic laws in a fair and unbiased manner. The data reveals a statewide “racial disparity” of only 2.8%. This is truly a credit to the evenhanded enforcement practices of the overwhelming majority of our dedicated police officers.
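To make the arithmetic behind a figure like “2.8%” concrete, the sketch below computes one common disparity measure: the gap, in percentage points, between the share of citations issued to non-white drivers and their estimated share of the driving population. This is an illustrative assumption about the calculation, with made-up numbers; the IRJ report defines its own methodology.

```python
# Hypothetical sketch of a percentage-point disparity measure.
# The formula and all numbers are illustrative assumptions, not the
# actual Massachusetts calculation from the IRJ report.

def disparity_pct(nonwhite_citations: int, total_citations: int,
                  nonwhite_benchmark_pct: float) -> float:
    """Gap between the share of citations issued to non-white drivers
    and their estimated share of the driving population (benchmark)."""
    cited_share = 100.0 * nonwhite_citations / total_citations
    return cited_share - nonwhite_benchmark_pct

# Made-up example: 228 of 1,000 citations to non-white drivers against
# an estimated 22.8% non-white driving population yields zero disparity.
print(disparity_pct(228, 1000, 22.8))
```

The measure is only as good as the benchmark in the denominator, which is exactly the point the PERF excerpts below make at length.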

By way of comparison, the data collection projects in other states where racial profiling was “confirmed” had disparities of 40-80%! (In some states, e.g., Connecticut, a disparity level below 5% resulted in a statewide declaration of “no profiling.”)

This is not to say that any level of biased policing is acceptable – it is not.

Whenever a person believes they have been victimized on the basis of their race, skin color or nationality, we have a problem. Community policing depends on creating trust so that we can all address community concerns. However, all the data available tells us that the problem is not widespread. For example, the state installed a toll-free hot line three years ago and publicized it with radio ads, billboards and the like, encouraging citizens to call if they believed they had been the victim of racial profiling. As was the experience in other states, the phone hardly ever rang. This corresponds to the experience virtually all police chiefs have reported. Few, if any, complaints of profiling have been made by minority drivers to most municipal police departments in this state.

With this said, we have prepared a comprehensive “Action Plan” to further address real or perceived biased policing in Massachusetts. Since the overall disparity percentage is so low, it is our hope that the Secretary of Public Safety, the Attorney General and the legislature will agree that no departments should be compelled to engage in additional data collection. Rather, the present level should continue, with frequent (monthly?) user-friendly analysis being provided to Chiefs to help monitor progress. As more information is available about what type of data collection is beneficial, the usefulness of this tool could be reconsidered.

The Police Executive Research Forum (PERF) report issued in April 2004 entitled “By the Numbers: A Guide for Analyzing Race Data from Vehicle Stops,” funded by the US Department of Justice Office of Community Oriented Policing Services (COPS), concludes by stating:

“We strongly recommend, however, that agencies focus not merely on measuring racially biased policing but on responding to it.”

IS DATA COLLECTION WORTH IT?
After spending over three years and possibly upwards of a million dollars on data collection and analysis, what do we have to show for it? This Association issued an “Action Plan to Address Racial Profiling” in the year 2000. It recommended that rather than wasting years of time and scarce tax dollars, we acknowledge that there is a problem. The report detailed several ways to address the issue. Enhancement of training, supervision, recruitment and discipline were suggested. Changes in policies and procedures were also included. We predicted, correctly as it turns out, that focusing everyone’s attention on data collection would mean ignoring those steps which could actually produce the desired result.
If the state insists on more data collection, rather than the other enhancements, it will defer, and even set back, the effort to eliminate the perception and reality of racial profiling.

GOVERNMENT STUDY FAULTS ALL DATA COLLECTION


A report released in April 2004 by the Police Executive Research Forum (PERF) and funded by the U.S. Department of Justice Office of Community Oriented Policing Services (COPS) entitled “By the Numbers: A Guide for Analyzing Race Data from Vehicle Stops” helps document the weaknesses in the Massachusetts project as well as those across the country. While it might be possible to design a more accurate data collection and analysis effort in the future, the numerous errors built into the Massachusetts Racial Profiling Data Collection Project to date have doomed it to failure. Flawed data, erroneous assumptions, and untested speculation cannot produce reliable conclusions.
The Secretary of Public Safety has a unique opportunity to advance race relations and effective law enforcement, depending upon how he responds to the racial profiling data collection report. By bringing together a diverse group of police professionals, community groups and researchers over the past eight months, the Secretary has started a process that could help lead the way to real progress. It will take great courage on the Secretary’s part to stand up to those who insist that the police cannot be trusted and that data collection will confirm their accusations of biased policing.

It is unrealistic to expect police officers to welcome more data collection after their experience with the past three years’ problems.


EXAMPLES OF DATA COLLECTION ERRORS IN MASSACHUSETTS





  1. Most police officers received little guidance and no training from the state on how to complete the new citation forms. No test phase was attempted to work out the “bugs”. No effort was made to learn from mistakes.

  2. Despite a requirement that all police departments receive monthly reports on the data related to their department – presumably so they could make corrections and identify problems early - this never happened. Only once during the past three years was any data sent, and it was in a form that was unintelligible to most non-research professionals.

  3. Officers were instructed to “guess” at a driver’s race. Brazilians, for example, were classified as Hispanic by some officers (even though they do not speak Spanish) and as white by others. The US Census is no help here: it allows everyone to “self-declare,” and people often list themselves as a percentage of more than one race or nationality. The state refused to address this and would not put the information on drivers’ licenses, forcing an unreliable guess or an “unknown” reply by many officers.

  4. Despite massive confusion over when to check off the “search” box, no corrective training or instructions were issued by the state. It has been widely acknowledged by the researchers and state officials that with the low number of searches conducted, the errors have irrevocably skewed the results.

  5. The lack of any funding from the state meant that the money for data analysis had to be diverted from federal grants by the Executive Office of Public Safety. For the small amount involved (reportedly in the vicinity of $100,000), the IRJ has done a remarkable amount of work. Since funds were scarce, several “shortcuts” were attempted, often with disastrous results. For example, rather than conducting on-site traffic studies as the rest of the country has done, the IRJ team resorted to modifying the “Domino’s Pizza” delivery computer program to come up with an “estimated driving population.” They estimated wrong in every case that we tested. When we put observers on the roadways where most tickets were issued, the percentage of non-white drivers actually on the road was nearly identical to the percentage of non-white drivers cited. This varied markedly from what the researchers used as an estimate and upon which they based a major portion of their report. (To show the absurdity of the results, the map the researchers produced for drivers likely to frequent Cambridge includes those from Attleboro – near the Rhode Island border!)

  6. The only time the researchers made any on-road observations was in connection with trying to determine an estimated driving population for the State Police. They picked out a sample roadway in each barracks region and sent some college students in a car to make observations of the race or nationality of drivers. As an acknowledgement of the difficulty of identifying non-white drivers before they are pulled over, the researchers only made observations during the daytime (and with three people in each car!).


Getting Data Collection Right

The Police Executive Research Forum (PERF) issued the first ten chapters of a report in April 2004 entitled “By the Numbers: A Guide for Analyzing Race Data from Vehicle Stops.” Funded by the U.S. Department of Justice Office of Community Oriented Policing Services (COPS), the report is intended as a guide for law enforcement agencies, municipal officials, citizens and community groups on how to analyze, interpret and understand vehicle stop data being collected on drivers’ race. (The second volume, containing chapters 11-13, will be issued in August 2004.)


Unfortunately, the information contained in this report was not available in 2000 when Massachusetts enacted its Racial and Gender Profiling Data Collection law. In nearly every area, the Massachusetts effort falls short of the report’s recommendations on how to properly collect and analyze data. This is not from a lack of effort or professionalism on the part of the dedicated researchers at Northeastern University’s Institute on Race and Justice (IRJ). As the IRJ’s “Preliminary Tabulations” report issued in January 2004 stated:

From the outset it is important to note that aggregate data, such as the data presented in this preliminary report can indicate patterns of disparate traffic citation activity in a department but cannot identify the motives involved in individual traffic stop, citation or other enforcement decisions. Therefore, this preliminary report should not be read as an indication of racial profiling by any Massachusetts law enforcement agency. Social science cannot provide reliable explanations for what individual officers are thinking when they decided to stop or cite a particular motorist. Social science can, however, help to identify whether certain groups are disproportionately targeted for enforcement practices. (emphasis added)

Research on racial profiling in traffic stops is a relatively new area of inquiry. Although numerous studies have begun to address questions of differential treatment in traffic stops, no absolute consensus exists about the best way to determine disparities. Racial disparities in citations can result from a number of factors that social scientists are just beginning to understand. Bias on the part of an individual officer is one of several possible explanations for disparities in citations. For example, certain department enforcement strategies or allocation of patrol resources, while perhaps race neutral on their face, may result in the disparate treatment of particular racial groups. In some communities, police commanders may assign a larger number of officers to a particular neighborhood because that neighborhood has more crime and thus an increased need for police services. It may then be the case that police assigned to this high crime area engage in traffic enforcement as part of their normal patrol activities and since there are more police working in this neighborhood, individuals who live, work or drive through this neighborhood are more likely to be stopped and cited than individuals who live in other neighborhoods. If the neighborhoods where police assign additional patrols are neighborhoods where people of color are more likely to live, then the deployment decision may result in racial disparities in traffic citations. (emphasis added)

The following excerpts from the PERF report are helpful in understanding that the Massachusetts traffic stop data cannot identify instances of racial profiling:


It is not difficult to measure whether there is disparity between racial/ethnic groups in stops made by police; the difficulty comes in identifying the causes for any disparity. For instance, a jurisdiction might compare the demographic profile of people stopped by police to the demographic profile of residents as measured by the census. The results might show “disparity”; that is, the results might show that some groups are stopped disproportionate to their representation in the residential population. The jurisdiction cannot, however, identify the causes of that disparity using this measure. Only after controlling for driving quantity, driving quality, and driving location, can a researcher who finds that minorities are disproportionately represented among drivers stopped by police conclude with reasonable confidence that the disparity reflects police bias in decision making. (emphasis added)
Although jurisdictions nationwide have invested considerable resources to collect race data from vehicle stops, most jurisdictions do not know how to analyze the collected data properly. They are either ill-equipped to do the analysis, or they are misinformed about what should be done. An overwhelming majority of the data analyses reviewed by PERF staff for this project were based on substandard methods. Most agencies are using models for their analyses that fall far short of minimal social science standards. In jurisdictions across the country, reports prepared by agencies or external groups (for example, some civil rights groups) draw conclusions wholly unsupported by the data. Other reports indicate that despite all the efforts and resources that were dedicated to the data collection, no conclusions can be drawn. These failures can largely be explained by the complexity of the task of measuring whether policing in a jurisdiction is racially biased. A tremendous number of factors other than bias can legitimately influence police decisions to stop drivers, and these “alternative hypotheses” must be ruled out before the “bias hypothesis” can be tested. A lack of understanding about which benchmarking methods will yield the most valid interpretations of the data is hindering agencies’ efforts to reach valid, responsible conclusions.
A key aspect of analyzing vehicle stop data is to determine whether the driver’s race/ethnicity has an impact on police stopping decisions. In order to assess whether there is an impact, however, we must exclude or “control for” factors other than race/ethnicity that might legitimately explain police stopping decisions.
In developing “benchmarks,” the researcher is attempting to construct a comparison group that represents the drivers at risk of being stopped by police—absent bias. This group is compared to the group of drivers actually stopped to help determine whether racial bias may have been a factor in police officers’ decision-making process. The variation in quality across benchmarks is directly related to how closely each benchmark represents the group of people who should be at risk of being stopped by police if no bias exists. The strongest benchmarks take into consideration variations in driving quality, driving quantity, and driving location.
We emphasize that an agency should, if feasible, select a plan for analyzing the data at the same time that the decision makers decide what stops to target and what information to collect on stops.

We recommend that decision makers select all traffic stops or all vehicle stops, and not a subset of these categories as defined by their outcomes (for example, citations, arrests). Some jurisdictions (indeed, some entire states) are collecting data only on subsets of stops, such as traffic stops that result in a citation. In Chapter 3 we explain why this practice produces substandard data for analysis.

In Chapter 3 we also encourage agencies to involve residents and agency personnel from all levels in planning data collection and analysis.
We start by explaining how the data that have been collected from officers can be checked for quality, an important first step in any type of social science research and not unique to the analysis of police-citizen contact data. Although there is no cost-effective way to ensure that the data are 100 percent accurate, the methods described in the chapter can help the researcher check for and enhance the quality of their data. A range of methods can be used to ascertain whether officers are submitting forms to the agency for each and every stop targeted for data collection.
Additionally, there are methods for assessing the level and source of missing data, errors, and intentional misstatements of facts. When selecting reference periods we recommend that, if economically and politically feasible, agencies collect one year of data before analyzing it. Agencies are advised to delay the start of the reference period for several months after data collection begins. In the first few months, officers can become accustomed to the data collection process, and their data should be reviewed to identify particular problems (such as large amounts of missing data on certain variables or missing forms). Once the problems appear to be resolved, the reference period should begin.
For many reasons, it is appropriate for agencies to analyze subsets of their police-citizen contact data. In Chapter 4 we describe why a researcher might choose not to analyze all of the data submitted during the reference period but only a portion, and how and why a researcher might conduct separate, multiple analyses using subsets of the data. For example, the researcher might choose to analyze for his or her report only proactive stops (stops in which police have discretion regarding whom to stop); then the researcher might choose to conduct separate analyses of these data within geographic sub-areas of the jurisdiction. We discuss subsets based on (1) whether stops are proactive or reactive, (2) whether the officer could discern the driver’s race/ethnicity, (3) whether the driver appears in the database once or multiple times, (4) geographic locations of stops (to allow for analyses within sub-areas of the jurisdiction), and (5) whether the stops are for traffic violations or for the purpose of investigating crime.


The final section of Chapter 4 explains the need for comparability of the stop data and benchmarking data or what we call “matching the numerator and the denominator.” The “numerator” refers to the data collected on stops by the police, and the “denominator” refers to the data collected to produce the comparison group, or benchmark. To “match the numerator and the denominator” the researcher adjusts the stop data to correspond to any limiting parameters of the benchmark or vice versa. For instance, in the observation benchmarking method, researchers collect data from the field regarding the race/ethnicity of drivers. Placed at various locations, the observers count the drivers in different race/ethnicity categories. This process produces a racial/ethnic profile of drivers observed at these locations that can be compared to the people who are stopped by police. Since the “denominator” (observation data) pertains only to certain areas, the relevant analysis will only include in the “numerator” the police stops in that area. Using this method, the researcher will compare the demographics of the people who are observed driving through Intersection A, for example, to the demographics of the people stopped by police in and around Intersection A. (This type of analysis will be conducted separately for each intersection.)

The numerator and denominator must be matched with regard to other parameters as well. For example, if observation data were collected from January through May 2002, the analysis should involve only police stops that occurred during roughly that same time period. If the researchers collected observation data only during daylight hours because of visibility issues, then the analysis should include in the numerator only those stops that occurred during daylight hours.
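The matching rule the report describes can be sketched in a few lines: filter the stop records (the numerator) down to the same location and daylight window that constrained the observation benchmark (the denominator). The record layout, field names, and daylight hours below are illustrative assumptions, not anything specified by PERF or the IRJ.

```python
# A minimal sketch of "matching the numerator and the denominator":
# keep only stops made where and when observers actually counted
# drivers. The Stop layout and daylight window are assumptions.

from dataclasses import dataclass
from datetime import time

@dataclass
class Stop:
    location: str      # e.g., "Intersection A"
    hour: time         # local time of the stop
    driver_race: str   # as recorded by the officer

def match_numerator(stops, benchmark_location,
                    daylight=(time(7), time(17))):
    """Restrict stop records to the benchmarked location during
    daylight hours, mirroring the observers' counting conditions."""
    start, end = daylight
    return [s for s in stops
            if s.location == benchmark_location and start <= s.hour <= end]
```

A nighttime stop at the benchmarked intersection, or a daytime stop elsewhere, is excluded, because the observation data say nothing about the drivers at risk in those conditions.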


Chapters 5 through 10 target some of the mistakes often made when comparing stop data to commonly used benchmarks. For example, many law enforcement agencies and outside analysts will compare the percentage of stops that involve African Americans or other minorities to the racial make-up of the residents of a particular area as measured by census data. More often than not, the mass media, civic groups, and citizens draw conclusions from this comparison regarding the existence or lack of racially biased policing in the jurisdiction; these conclusions are wholly unsupportable using this method of analysis. Frequently, no mention is made of non-race-related explanations for the disparity between the census population and the population of stopped drivers, explanations that relate to driving quantity, driving quality, and driving location. These are all factors that legitimately affect stopping behavior by police.

Despite the weaknesses of using census data as a diagnostic tool, some jurisdictions (limited by resources or time) may have no option other than to use this method. This will be particularly true of researchers charged with analyzing data for an entire state. The obligation of the researcher in this position is to ensure that the results are conveyed in a responsible fashion. In fact, this obligation falls to all stakeholders, including concerned citizens, civil rights groups, and the media. No one interpreting results based on census benchmarking—even adjusted census benchmarking—can claim they have proved the existence or lack of racially biased policing.

This caveat is not unique to adjusted census benchmarking, and the inability to identify a causal connection between driver race/ethnicity and police decisions does not mean that data collection is without value. Even if the results from data collection do not provide definitive conclusions, they can serve as a basis for constructive discussions between police and citizens regarding ways to reduce racial bias and/or perceptions of racial bias.
The chapter also explains how social scientists have addressed these questions in the context of their research. A key point of controversy is whether to use as a benchmark all drivers at the selected site or only traffic law-violating drivers at the site. We recommend that the observation benchmark be based on law-violating drivers, not all drivers, because this model encompasses the fact that drivers who drive poorly are at greater risk of being stopped by police. (We present the alternative viewpoint in an appendix.)
Chapter 13, the final chapter in By the Numbers, will discuss how law enforcement agencies can use the results from data collection to achieve reform. Even results based on weak benchmarking methods can stimulate productive discussions between police and residents about the issues of racially biased policing and the perceptions of its practice. The chapter suggests how these discussions can be structured to produce action plans for reform. We strongly recommend, however, that agencies focus not merely on measuring racially biased policing but on responding to it. Varied responses to racially biased policing are set forth in PERF’s first DOJ COPS-funded report, Racially Biased Policing: A Principled Response, available on the PERF Web site. They can be grouped in the following areas: supervision/accountability, policy, recruitment/hiring, training/education, and outreach to diverse communities.


MCOPA’s Recommendations


The cost of data collection and analysis in Massachusetts has not proven to be worth the effort. At a time when police budgets are being cut, training is being reduced, and layoffs are being implemented, it is irresponsible to divert hundreds of thousands, if not another million or so dollars, to study a problem rather than working on the solution. If there is money left over after this state funds training, policy development, supervision enhancements, community policing models, and improved recruitment, hiring and discipline, data collection might be considered; but next time, try to do it right!
