
Cast your vote: Which evaluation standard is most essential?

For more information, check out "The Program Evaluation Standards"


Are you familiar with the Program Evaluation Standards by the Joint Committee on Standards for Educational Evaluation? The five standards are utility, feasibility, propriety, accuracy, and accountability.

  • Utility: The utility standards “are intended to increase the extent to which program stakeholders find evaluation processes and products valuable in meeting their needs.” For example, these standards remind us that evaluations should involve Timely and Appropriate Communicating and Reporting and Meaningful Processes and Products.
  • Feasibility: The feasibility standards “are intended to increase evaluation effectiveness and efficiency.” For example, “evaluation procedures should be practical and responsive to the way the program operates” (Practical Procedures) and “evaluations should use resources effectively and efficiently” (Resource Use).
  • Propriety: The propriety standards “support what is proper, fair, legal, right and just in evaluations.” For example, “Evaluations should be designed and conducted to protect human and legal rights and maintain the dignity of participants and other stakeholders” (Human Rights and Respect) and “Evaluations should be responsive to stakeholders and their communities” (Responsive and Inclusive Orientation).
  • Accuracy: The accuracy standards “are intended to increase the dependability and truthfulness of evaluation representations, propositions, and findings, especially those that support interpretations and judgments about quality.” For example, evaluations should yield Valid and Reliable Information and should employ Sound Designs and Analyses.
  • Accountability: The accountability standards “encourage adequate documentation of evaluations and a metaevaluative perspective focused on improvement and accountability for evaluation processes and products.” For example, “Evaluations should fully document their negotiated purposes and implemented designs, procedures, data, and outcomes” (Evaluation Documentation) and “Evaluators should use these and other applicable standards to examine the accountability of the evaluation design, procedures employed, information collected, and outcomes” (Internal Metaevaluation).

Which standard is most essential, relevant, and central to your everyday work as an evaluator? Is there a standard that guides your decision making during evaluation projects more than others? Please share your votes and comments below.


P.S. One of the most exciting things about the internet is that our various communication channels (Twitter, email, LinkedIn, etc.) can often blend together seamlessly. Thanks to everyone who’s sharing their reactions! Here are a few of the responses:

  • Brian Hoessler’s reaction to this question in his blog post, Value, via Strong Roots Consulting.
  • The LinkedIn discussion in the American Evaluation Association group.
  • Votes coming in through Twitter.

How can findings influence decision-making? [Guest post by Jonathan O’Reilly]

The dusty shelf report: The best way to keep evaluation results far, far away from decision-making

After spending the past couple of years as an internal evaluator, I decided to start addressing other internal evaluators' questions, comments, and concerns. I'll be sharing their questions on my blog, connecting them with other evaluators, and offering advice from my own experiences with internal evaluation.

Here’s a question from Jonathan O’Reilly, my friend from the Washington Evaluators. Jonathan recently accepted an internal evaluation position with the Circuit Court for Prince George’s County in Maryland. He writes:

“I’d like to know more about internal evaluators’ experience with translating research to practice. My experience as an external evaluator witnessed the final report being the absolute end product – whether or not the client used the recommendations or had working groups around the evaluation report were beyond our involvement. In my new position, I am more optimistic about my evaluation findings being used to effect change. As an internal evaluator my vision is to call together a working group to present evaluation findings and start the conversation about modifying procedures where necessary. What has your experience as an internal evaluator been with getting research findings to be a key part of administrative decision-making?”

Do you have any advice for this internal evaluator? Please share the good karma below.

Thanks, Ann Emery