Category Archives: What is Evaluation?

How do you visualize evaluation?

Lately, I’ve been busy designing graphics and diagrams to use in next week’s eStudy with Agata Jose-Ivanina. Sometimes the best way to explain a complicated topic is to break it down and display the little pieces and how they relate to each other in a simple graphic. Our dashboard automation process is one example where explaining the process through diagrams will (hopefully!) make a big difference for our students.

Evaluation is another example in which graphics can help evaluators explain the process to stakeholders.

So how do evaluators visualize evaluation? Is evaluation an ongoing cycle or a series of linear steps? How do you communicate this evaluation process to the program staff? Is the layout of your graphic connected to the evaluation’s purpose and goals?

Let’s look at a few examples.

Evaluation Cycles:

  • Innovation Network’s Ongoing Learning Cycle
  • Social Research Methods’ Evaluation and Planning Phases
  • Centers for Disease Control’s 6 Steps of Evaluation (Although these are called “steps,” they are displayed as a cycle, so I chose to include them here.)

Evaluation Steps:

  • Safe Routes to Schools’ Six Steps for Program Evaluation
  • The Adams 14 Colorado School District’s 6 Steps of Program Evaluation
  • The University of Wisconsin’s Cooperative Extension’s Evaluation Steps, shown as more of a timeline in which stakeholders are engaged throughout the entire process

An evaluation cycle implies that evaluation is an ongoing process where data are continually used for learning and decision making. Perhaps displaying evaluation cycles is most appropriate when conducting formative evaluations or evaluations where organizational learning is a high priority.

On the other hand, evaluation steps imply that you’re building towards something. Perhaps there’s an end goal or final step. In the examples shown above, the final step is to “use results” or “publish.” Evaluation steps are probably most appropriate when conducting summative evaluations where there’s a one-time final report.

In both cases, pictures matter.

How do you visualize the evaluation process? Do you have additional examples to share?

Enjoy some haikus. / Isn’t evaluation / Serious enough?

Got some ideas? / Please share your eval haikus / In “comments” below.

Formative and summative evaluations [Guest post by David Henderson]

Last month I described some of the differences between researchers and evaluators (you can read the post here). David Henderson noticed that my ideas could also explain the differences between formative and summative evaluators. I was intrigued by his comment, and I asked David to guest-post on this topic.

I’m pleased to share this week’s post by David Henderson. David’s the founder of Idealistics and tweets about evaluation here. I hope you enjoy learning more about formative and summative evaluation from David.

— Ann Emery


David Henderson

“Much has been written about the social sector’s love-hate relationship with evaluation. On the one hand, there are those who presume that evaluation will lead the social sector into a data-driven age of enlightenment where only proven interventions are funded. On the other hand, there are those who fear the decisive, and potentially incorrect, conclusions of evaluators who some argue are given too much power to determine which organizations thrive and which ones die.

The reality of evaluators’ roles in the social sector is far less extreme, and the general sectoral confusion over what evaluation is and is not is partly the result of our inability to effectively articulate the difference between formative evaluation and summative evaluation.

Summative evaluation is where an evaluator (or team of evaluators) seeks to conclusively test the hypothesis that a given intervention has an expected impact on a target population. This type of evaluation has been popularized recently by the work of Esther Duflo and Innovations for Poverty Action through their use of randomized controlled trials in evaluating international aid efforts.

Formative evaluation is where an evaluator works collaboratively with an organization to evaluate outcomes and try to use program data to improve the effectiveness of an organization’s interventions. This is the kind of evaluation performed by internal evaluators and by most evaluation consultants, including myself.

The standard of proof used in formative evaluation is significantly lower than in summative evaluation. Summative evaluation is concerned with isolating causal effects, usually through an experimental design where a treatment group is compared to a control group to identify the average treatment effect on the treated (ATT).
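As a back-of-the-envelope illustration (mine, not David’s), under a randomized design the ATT is often estimated as a simple difference in group means. The function name and the outcome scores below are hypothetical:

```python
def estimate_att(treated_outcomes, control_outcomes):
    """Difference-in-means estimate of the average treatment
    effect on the treated (ATT), assuming random assignment."""
    treated_mean = sum(treated_outcomes) / len(treated_outcomes)
    control_mean = sum(control_outcomes) / len(control_outcomes)
    return treated_mean - control_mean

# Hypothetical outcome scores for a treatment group and a control group
treated = [72, 68, 75, 80, 71]
control = [65, 70, 62, 66, 67]
print(estimate_att(treated, control))  # 7.2
```

Real evaluations would pair this point estimate with a standard error or confidence interval; the difference in means alone says nothing about statistical significance.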

Some organizations’ evaluation anxiety stems from the inaccurate assumption that all evaluation is summative, and therefore potentially punitive. However, as Isaac Castillo, Senior Research Scientist at Child Trends, recently said at a conference, evaluation “is an activity that produces useful content for everyone, and it should be undertaken for the purpose of program/service improvement.”

Isaac is right that the real promise of evaluation is to help organizations improve program outcomes. While this does require a certain level of statistical sophistication, formative evaluation does not have the same confirmatory burdens of summative evaluation, nor does it incur the considerable costs that come with true experimental design.

As evaluators, we would do well to educate those we work with about the difference between evaluative approaches. Doing so might help to mitigate the wrong-headed assumption that an evaluator’s role is to assign a letter grade to social interventions.”

— David Henderson

My inner evaluation demons [Guest cartoon from Chris Lysy]

We all have our inner evaluation demons. In my quest towards evaluation use, I encounter this little angel and demon on a near-daily basis:

What are your inner evaluation demons, and how do you battle them? Is it possible to strike a balance between your devil and angel, or have you been forced to choose one or the other at specific career crossroads?

— Ann Emery


Drawing courtesy of Chris Lysy, evaluation blogger and evaluation cartoonist. Chris also runs Eval Central, an online evaluation platform that brings together blog feeds from 27+ different evaluation bloggers. To read more about Chris, check out his LinkedIn profile here.

Researchers vs. evaluators: How much do we have in common?

I started working as a research assistant with various psychology, education, and public policy projects during college. While friends spent their summers waitressing or babysitting, I was entering data, cleaning data, and transcribing interviews. Yay. Thankfully those days are mostly behind me…

A few years ago, I (unintentionally) accepted an evaluation position, and the contrast between research and evaluation hit me like a brick. Now, I’m fully adapted to the evaluation field, but a few of my researcher friends have asked me to blog about the similarities and differences between researchers and evaluators.

Researchers and evaluators often look similar on the outside. We might use the same statistical formulas and methods, and we often write reports at the end of projects. But our approaches, motivations, priorities, and questions are a little different.

The researcher asks:

  • What’s most relevant to my field? How can I contribute new knowledge? What hasn’t been studied before, or hasn’t been studied in my unique environment? What’s most interesting to study?
  • What’s the most rigorous method available?
  • How can I follow APA guidelines when writing my reports and graphing my data?
  • What type of theory or model would describe my results?
  • What are the hypothesized outcomes of the study?
  • What type of situation or context will affect the stimulus?
  • Is there a causal relationship between my independent and dependent variables?
  • How can I get my research plan approved by the Institutional Review Board as fast as possible?

The evaluator asks:

  • What’s most relevant to the client? How can I make sure that the evaluation serves the information needs of the intended users?
  • What’s the best method available, given my limited budget, limited time, and limited staff capacity? How can I adapt rigorous methods to fit my clients and my program participants?
  • When is the information needed? When’s the meeting in which the decision-makers will be discussing the evaluation results?
  • How can I create a culture of learning within the program, school, or organization that I’m working with?
  • How can I design a realistic, prudent, diplomatic, and frugal evaluation?
  • How can I use graphic design and data visualization techniques to share my results?
  • How can program staff use the results of the evaluation and benefit from the process of participating in an evaluation cycle?
  • What type of report (or handout, dashboards, presentation, etc.) will be the best communication tool for my specific program staff?
  • What type of capacity-building and technical assistance support can I provide throughout the evaluation? What can I teach non-evaluators about evaluation?
  • How can we turn results into action by improving programs, policies, and procedures?
  • How can we use logic models and other graphic organizers to describe the program’s theory of change?
  • What are the intended outcomes of the program, and is there a clear link between the activities and outcomes?
  • How can I keep working in the evaluation field for as long as possible so I can (usually) avoid the Institutional Review Board altogether?

Researchers and evaluators are both concerned with:

  • Conducting legal and ethical studies
  • Protecting privacy and confidentiality
  • Conveying accurate information
  • Reminding the general public that correlation does not equal causation

What else would you add to these lists? I’ve been out of the research mindset for a few years, so I’d appreciate feedback on these ideas. Thank you!

— Ann Emery