4. IDENTIFY & PRIORITIZE PROBLEM
- Identify recurring problems of concern
- Identify consequences of the problem
- Prioritize identified problems
- Develop broad goals
- Confirm that the problem exists
- Determine how frequently the problem occurs and how long it has been taking place
- Select a problem for closer examination
5. The CHEERS Test
- Community – community members must experience the harmful events
- Harmful – property loss/damage, injury/death, mental anguish, undermining of police (illegality is not a defining characteristic of problems)
- Expectation – community members expect the police to act (not necessarily a majority)
- Events – problems are made up of discrete events
- Recurring – events may be acute or chronic
- Similarity – the recurring events must have something in common
http://www.popcenter.org/learning/60steps/index.cfm?stepNum=14
6. UNDERSTAND YOUR PROBLEM
5 W + 1 H = Hypothesis
- Who is involved?
- What exactly do they do?
- Why do they do this?
- Where do they do this?
- When do they do this?
- How do they carry out the crime?
Hypothesis – a statement that explains why the problem is occurring
8. RESEARCH THE PROBLEM
- Identify and understand the events and conditions that precede and accompany the problem
- Identify relevant data to be collected
- Research what is known about the problem type
- Take inventory of how the problem is currently addressed, and the strengths and limitations of the current response
- Narrow the scope of the problem as specifically as possible
- Identify a variety of resources that may assist in developing a deeper understanding of the problem
- Develop a working hypothesis about why the problem is occurring
9. The Five Most Useful Websites
- Center for Problem-Oriented Policing (www.popcenter.org)
- National Criminal Justice Reference Service (NCJRS) Abstracts Database (http://www.ncjrs.gov/abstractdb/search.asp)
- The Home Office | Crime, United Kingdom (http://www.homeoffice.gov.uk/crime/)
- Australian Institute of Criminology (www.aic.gov.au)
10. DATABASES at Hellman Library via Blackboard
- EBSCO Host (search single or multiple databases)
- Academic Search Premier
- Criminal Justice Abstracts
- ERIC
- SocINDEX
- Google Scholar
- JSTOR (historical)
- LexisNexis Academic
- ProQuest Academic
- SAGE journals online
- Interlibrary Loan
Log in and try it
12. INTERVENTION
- Brainstorm new interventions
- Search for what other communities with similar problems have done
- Choose among the alternative interventions
- Outline a response plan and identify responsible parties
- State specific objectives for the response plan
- Carry out the planned activities
13. POLICE-SPECIFIC PROJECTS
- Goldstein Awards (http://www.popcenter.org/goldstein/): recognize outstanding police officers and police agencies, both in the United States and around the world, that engage in innovative and effective problem-solving efforts and achieve measurable success in reducing specific crime, disorder, and public safety problems.
- Tilley Awards (http://www.homeoffice.gov.uk/crime/partnerships/tilley-awards/): set up by the U.K. Home Office Policing and Reducing Crime Unit (now the Crime and Policing Group) in 1999 to encourage and recognize good practice in implementing problem-oriented policing (POP).
14. IDENTIFY RESPONSES
Keep a summary record of responses:
- Note the primary source
- Explain how the response works
- Note the conditions under which it works best
- Note any special considerations (costs, legal requirements, etc.)
16. EVALUATE AND ASSESS
KEY QUESTION: DID THE PROBLEM DECLINE ENOUGH TO END THE EFFORT?
- Determine whether the plan was implemented (process evaluation)
- Collect pre- and post-response data (qualitative and quantitative)
- Determine whether broad goals and specific objectives were attained
- Identify any new strategies needed to augment the original plan
- Conduct ongoing assessment to ensure continued effectiveness
17. EVALUATION VS. ASSESSMENT
EVALUATION – a scientific process for determining whether a problem declined and whether the solution caused the decline. It begins the moment the problem-solving process begins and continues through the completion of the effort.
ASSESSMENT – the final stage of both evaluation and problem solving. It answers the following questions:
- Did the response occur as planned?
- Did the problem decline?
- If so, are there good reasons to believe the decline resulted from the response?
19. TYPES OF EVALUATIONS
Process Evaluation
- Did the response occur as planned? Did all response components work?
- Involves comparing the planned response with what actually occurred
Impact Evaluation
- Did the problem decline? If so, did the response cause the decline?
- To reliably reuse a response, it is important to determine whether the response caused the decline in the problem
21. CONDUCTING IMPACT EVALUATIONS
Part 1: Measure the problem
- Quantitative – counts and numerical estimates; adds comparability
- Qualitative – e.g., photos, maps, interviews; allows comparisons, but not precision; reinforces quantitative information
Part 2: Evaluation design
- Compare measures systematically
22. MEASURING THE PROBLEM
- Take the most direct measure of the problem; the more indirect the measure, the less valid it is
- Use multiple measures where possible (arrests, as a measure of impact, may be affected by citizen complaint activity and/or police practice)
- Whether a measure is direct or indirect depends on how the problem is defined: is the focus on "behavior" or on "perception of behavior"?
- Measure the problem systematically and use the same measures throughout
23. DID THE RESPONSE CAUSE THE CHANGE?
- Plausible Explanation – is there a plausible explanation that the response changed the level of the problem? (Based on detailed problem analysis, backed by research)
- Association – is there an association between the presence of the response and a change in the level of the problem?
- Temporal Order – did the response precede the change in the problem? (Have measures from before and after the response begins)
- No Plausible Alternative Explanations – could 'something else' have caused the results found?
24. EVALUATION DESIGNS
Pre-post designs: the simplest
- Can establish 'association' and 'temporal order'
- Weak at ruling out alternative explanations
- Cannot assess fluctuations between measurements
Tool Guide No. 1 (2002)
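The pre-post design above can be sketched in a few lines: one measure of the problem before the response and one after, compared as a percent change. The monthly burglary counts below are hypothetical, chosen only to illustrate the arithmetic; a real evaluation would use the measures defined during problem analysis.

```python
# Minimal pre-post comparison sketch (hypothetical counts, not from the source).
# One measurement before the response, one after, compared as percent change.

def percent_change(pre: float, post: float) -> float:
    """Percent change from the pre-response measure to the post-response measure."""
    return 100.0 * (post - pre) / pre

# Hypothetical monthly burglary counts for a target area.
pre_count = 48   # month before the response began
post_count = 36  # month after the response was in place

change = percent_change(pre_count, post_count)
print(f"Change: {change:.1f}%")  # a decline shows as a negative value
```

Note what the sketch cannot do, matching the slide's caveats: with only two data points it cannot show whether counts were already falling, or whether they fluctuate this much month to month anyway.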
25. EVALUATION DESIGNS
Interrupted time series designs: superior
- Repeated measures assess the problem's trajectory before and after the response
- Require time intervals of sufficient duration to derive meaningful conclusions
- Easy to use with routine data
- Stability of impact after the response controls for fluctuation
Tool Guide No. 1 (2002)
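The interrupted time series idea can be sketched by fitting a separate linear trend to the measures before and after the response, so the problem's trajectory, and not just its level, can be compared. The monthly counts below are hypothetical, and the closed-form least-squares fit is an illustration of the design, not a full segmented-regression analysis.

```python
# Interrupted time series sketch (hypothetical monthly counts, not from the source).
# Fits a separate linear trend to the pre- and post-response segments.

def ols_line(xs, ys):
    """Closed-form least-squares fit for y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Hypothetical counts: 8 months before and 8 months after the response.
pre = [52, 50, 53, 49, 51, 50, 52, 51]
post = [44, 42, 41, 40, 38, 39, 37, 36]

_, pre_slope = ols_line(list(range(len(pre))), pre)
_, post_slope = ols_line(list(range(len(post))), post)

print(f"pre-response trend:  {pre_slope:+.2f} per month")
print(f"post-response trend: {post_slope:+.2f} per month")
# A flat pre-response trend followed by a falling post-response trend supports
# (but does not prove) association and temporal order.
```

This also makes the slide's duration requirement concrete: the fit is only as meaningful as the number and length of the intervals in each segment.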
26. EVALUATION DESIGNS
Interrupted time series designs are not always practical:
- Measurement can be expensive or difficult (e.g., surveys)
- Data may be unavailable for many periods before the response
- Decision-makers may not want to wait the time required to establish the results of the response
- If data recording practices change, inter-period comparisons become invalid
- Hard to interpret when problem events are rare in a time period, forcing the use of fewer intervals of longer duration
- Cannot account for 'something else' that occurred which caused the level of the problem to change