Department of Justice Canada

DR PROGRAM EVALUATION

Slide 1: DR Program Evaluation

Presentation by Dispute Prevention and Resolution Services,
Department of Justice to the Philippine Delegation

September 22, 2003

Slide 2: Outline

  • Why Evaluate?
  • When to Evaluate?
  • What to Evaluate?
  • How to Evaluate?
  • Who Should Evaluate?
  • Evaluation Checklist
  • Useful Evaluation Documents and a Few Evaluation Examples

Slide 3: Why Evaluate?

  • Evaluation is “an internal effort to define and improve operations over time while providing descriptive information to the field” (Janis Roehl)
  • Way to determine whether a DR program is meeting its goals and objectives
  • Allows program administrators to establish what works, what does not work, and to discontinue, modify or expand a DR program
  • Reveals the strengths and weaknesses of the DR program
  • Promotes a consistent and proactive approach to continued DR program improvement
  • Identifies administrative needs (e.g. staffing requirements, bottlenecks in the process, etc.)

Slide 4: When to Evaluate?

  • Important that evaluation planning begin BEFORE the DR program is implemented (e.g. by developing an evaluation framework)
  • Evaluation can be undertaken at different times during the life of a DR program
  • Factors to consider include:
    • Whether the program has been in operation long enough to ensure that there are sufficient cases to examine
    • Whether the program has gotten the early bugs out
    • If a pilot, whether the evaluation will be completed early enough to be a factor in the decision to continue/expand the program
    • Whether there are other deadlines relating to future decision-making which will affect the usefulness of the evaluation results
  • Formative evaluations (at the pilot stage) and summative evaluations (when the DR program is at a mature point)

Slide 5: What to Evaluate?

Two main types of DR evaluations:

  1. Program Effectiveness Evaluations (impact/outcome/summative)
    • focus on whether the DR program is meeting its goals and/or having the desired impact
  2. Program Design and Administration Evaluations (process/formative)
    • focus on how a DR program can be improved

Comprehensive evaluations should measure tangible and intangible benefits using both quantitative and qualitative data

Slide 6: What to Evaluate?

DR Program Managers typically seek to evaluate some or all of the following measures of “success”* (a sketch of how several of these indicators might be computed appears after the list):

  • Cost savings to both the organization and the parties
  • Time savings to both the organization and the parties
  • Participation rates of the parties in the DR process
  • Participant satisfaction with the fairness of the DR process
  • Settlement rates
  • Quality of settlements (durability, creativity, improvements to ongoing relationships)

(*See: Performance Indicators for ADR Program Evaluation included in materials)
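For illustration, here is a minimal Python sketch of how a few of these indicators (settlement rate, average time and cost savings) might be computed from case records. The record fields and figures are hypothetical assumptions, not drawn from the presentation or its materials.

  # Hypothetical case records; field names and values are illustrative only.
  from statistics import mean

  dr_cases = [
      {"settled": True,  "days": 45,  "cost": 2000},
      {"settled": True,  "days": 60,  "cost": 3500},
      {"settled": False, "days": 90,  "cost": 5000},
  ]
  traditional_cases = [
      {"settled": True,  "days": 180, "cost": 12000},
      {"settled": False, "days": 240, "cost": 15000},
  ]

  def settlement_rate(cases):
      # Share of cases that reached settlement.
      return sum(c["settled"] for c in cases) / len(cases)

  def average(cases, field):
      # Mean of a numeric field (elapsed days, costs, etc.) across cases.
      return mean(c[field] for c in cases)

  print(f"Settlement rate (DR stream): {settlement_rate(dr_cases):.0%}")
  print(f"Average time saving per case: "
        f"{average(traditional_cases, 'days') - average(dr_cases, 'days'):.0f} days")
  print(f"Average cost saving per case: "
        f"${average(traditional_cases, 'cost') - average(dr_cases, 'cost'):,.0f}")

The same pattern extends to participation rates or any other numeric indicator, once the underlying case data are captured consistently.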

Slide 7: What to Evaluate?

  • To evaluate the success of a given program, evaluators must structure their observations, measurements and reports to highlight a few characteristics/variables that are particularly significant
  • E.g. the DR Fund sought to evaluate four results:
    1. Reduction in costs and time spent in managing disputes
    2. Increased party satisfaction with resolution outcomes
    3. Funded organizations would foster further internal DR developments
    4. Funded projects would serve as catalysts and/or models for other organizations

Slide 8: How to Evaluate?

Basic Steps in the Evaluation Process:

  • 1) Identify Participants (i.e. who uses the system – e.g. clients, lawyers, neutral service providers, internal staff, etc.)
  • 2) Identify Program Goals (e.g. reduce costs, reduce delay, maintain/improve disputant satisfaction, preserve the equity of outcomes, promote a less contentious environment, etc.) – must be absolutely clear about what the program is trying to accomplish in implementing a particular DR process

Slide 9: How to Evaluate? (cont'd)

  • 3) Identify Performance Measures/Indicators Appropriate to Measure Desired Outcomes:
    • e.g. if the desired outcome is cost reduction -> whose costs (the agency’s, the parties’, or both)? what costs (legal fees, administrative costs, etc.)?
  • 4) Collect the Right Type of Data For the Measures Identified:
    • quantitative (file records, surveys, etc.) and/or qualitative data collection methods (interviews, focus groups, observations, participant surveys, etc.) – a small sketch of summarizing survey data follows
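As one small illustration of working with quantitative survey data, the following Python sketch tallies participant satisfaction on a hypothetical five-point scale (1 = very dissatisfied, 5 = very satisfied); the scale and the responses are assumptions for illustration, not taken from any actual DR survey.

  # Hypothetical satisfaction ratings, one per survey respondent.
  from collections import Counter

  responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

  distribution = Counter(responses)
  satisfied_share = sum(1 for r in responses if r >= 4) / len(responses)

  for rating in sorted(distribution):
      print(f"Rating {rating}: {distribution[rating]} respondent(s)")
  print(f"Satisfied or very satisfied: {satisfied_share:.0%}")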

Slide 10: How to Evaluate? (cont'd)

  • 5) Choose an Appropriate Study Design:
    the most effective is a true control group study (a random-assignment sketch follows the steps below):
    • i) ensure that the population in the DR program is like that in the control group (or status quo)
    • ii) hold all else constant
    • iii) random assignment of cases to the DR stream/traditional process
    • iv) both processes operating contemporaneously
  • 6) Collect and Analyze the Data
  • 7) Discuss Findings (oral/written/both)
  • 8) Make Necessary Changes to the DR Program
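To make the random assignment in step 5(iii) concrete, here is a minimal Python sketch that routes incoming cases to the DR stream or the traditional (control) stream at random. The case identifiers are hypothetical; a real program would also apply eligibility screening and document its assignment rules.

  # Randomly assign hypothetical incoming cases to two contemporaneous streams.
  import random

  incoming_cases = [f"case-{n:03d}" for n in range(1, 21)]

  random.seed(42)  # fixed seed so the assignment is reproducible for audit
  random.shuffle(incoming_cases)

  midpoint = len(incoming_cases) // 2
  dr_stream = incoming_cases[:midpoint]           # cases routed to the DR process
  traditional_stream = incoming_cases[midpoint:]  # status quo control group

  print("DR stream:", dr_stream)
  print("Control group:", traditional_stream)

Random assignment plus contemporaneous operation is what lets later outcome comparisons (time, cost, satisfaction) be attributed to the DR process rather than to differences in the cases themselves.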

Slide 11: Who Should Evaluate?

  • Key factors for effective evaluation:
    • Objectivity (i.e. no stake in the outcome), experience in conducting program evaluations, and sufficient technical expertise in data collection and analysis
  • Internal v. external evaluators:
    • External:
      • Pros – credibility, objectivity, impartiality, specialized evaluation skills
      • Con – expensive
    • Internal:
      • Pros – specialized knowledge of organization, more cost effective
      • Con – potential perceptions of lack of impartiality

May wish to have an “advisory committee” of key stakeholders to assist in evaluation design, implementation and reporting

Slide 12: Evaluation Checklist

  • Is your DR program ongoing or in the formative stage?
  • What are your goals and objectives for your DR program evaluation?
  • How will you pay for your DR program evaluation?
  • Who will do the evaluation?
  • Who is your audience?
  • What is your evaluation design strategy?
  • What are your measures of success?
  • What do you need to know about your program effectiveness (impact)?
  • What do you need to know about your program structure and administration?
  • How and when will you disseminate your evaluation results?

(Source: Federal ADR Program Manager’s Resource Manual)

Slide 13: Useful Evaluation Documents and a Few Evaluation Examples

  • A Checklist for Evaluating Federal ADR Programs: Long Form
  • Performance Indicators for ADR Program Evaluation
  • Evaluating Agency Alternative Dispute Resolution Programs: A User’s Guide to Data Collection and Use
  • Federal ADR Program Manager’s Resource Manual, Chapter 8, Evaluating ADR Programs
  • Assessing Efficiency, Effectiveness and Quality: An Evaluation of the ADR Program of the Immigration Appeal Division of the Immigration and Refugee Board
  • An Evaluation of the Notice to Mediate Regulation under the Insurance (Motor Vehicle) Act
  • Evaluation of the Family Justice Registry (Rule 5) Pilot Project: Final Report
  • Evaluation of the Ontario Mandatory Mediation Program (Rule 24.1): Executive Summary and Recommendations
  • Mandatory Parenting after Separation Pilot: Final Evaluation Report