Is the evaluation hurting the program?

We were extremely fortunate to have past and present Presidents of the American Evaluation Association as our guest speakers at the 2012 Eastern Evaluation Research Society’s conference – Eleanor Chelimsky, Jennifer Greene, and Rodney Hopson.

Even though the conference was a couple of weeks ago, I’m still thinking about one of Rodney Hopson’s comments. He mentioned that he sometimes wonders whether evaluators, and evaluations themselves, are actually hurting the program rather than helping it.

I’ve certainly had similar experiences. Mostly, I’ve seen program staff get so excited about data that they want to collect more, and more, and then even more data. You can read about one of my experiences here.

It seems like a great idea at first. What’s the harm? More data is better, right?

But… a few months down the road, the program staff and I are swimming in more data than we can handle. And we often have more data than we really need. After all, my goal as a utilization-focused evaluator is to collect information that will directly influence decisions about the program or its participants. Simple, quick, streamlined data can be more useful than complex, time-consuming data.

Have other evaluators felt like this? Have you ever questioned whether your involvement is hurting rather than helping?

Assessments of learning vs. assessments for learning

Schools are goldmines of “lessons learned” for evaluators: they’ve been immersed in (drowning in?) data ever since 2001’s No Child Left Behind. One of the most valuable lessons I’ve learned from school systems is that there’s a big difference between assessments of learning and assessments for learning. Here’s how I explain the two.

Assessments of learning can verify learning.

  • Examples: Year-end classroom tests for report card grades, or standardized tests to verify that students learned the material.
  • Purpose: To inform others about students’ achievement.
  • Focus: Standards – have students met the standards or not?

Assessments for learning can support learning.

  • Examples: Informal classroom-based quizzes, rubrics, portfolios, and other assignments. Most teachers collect this type of data every single day as a regular part of their teaching.
  • Purpose: To inform students themselves about their progress and to help teachers in their planning.
  • Focus: Progress towards standards. Where are students on the “scaffold” towards standards? Scores aren’t going to be high at first.

Both types of assessments are valuable, but they serve different purposes.

– Ann Emery

Evaluators interviewing evaluators

When my New Directions for Evaluation journal from the American Evaluation Association arrives in the mail every few months, I like to skim through the articles during my Metro ride to work.

I recently read about an evaluator’s experiences at the U.S. Government Accountability Office (GAO) and I was intrigued. How is it possible that evaluators all seem to face the same challenges, no matter what the setting? My own internal evaluation work at a non-profit youth center is truly a microcosm of the larger evaluation world.

I’m a firm believer that the best way to get better at program evaluation is to talk to other evaluators. So I thought: hey, I wonder whether I could have coffee with a GAO evaluator sometime and learn more?

Well, I emailed someone I’d met through the Washington Evaluators, and one thing led to another… and tomorrow I’ll have the lovely opportunity to interview not one but two GAO evaluators over lunch: my Washington Evaluators friend and the article’s author! One of my favorite things about being an evaluator in DC is being close to so many skilled evaluators.

Stay tuned to hear some highlights from tomorrow’s conversation!

– Ann Emery

What does capacity building look like on a day-to-day basis? … Part 2

Capacity building is a huge part of my job as an internal evaluator. Evaluators talk about capacity building all the time, but what does it really look like? On the front lines? In a community-based youth center?

I recently shared some notes from my algebra lesson with a youth worker.

Here’s another great example of capacity building: A program manager has volunteered to lead a Program Design 101 training for our other staff members!

She’ll share examples of how she’s incorporated formative data collection and analysis into her everyday programmatic activities to turn evaluation into a valuable learning experience. Stay tuned to see her resources from this training.

Program Design 101: What are SMART goals?

Evaluation is all about defining program outcomes and then measuring progress towards those outcomes.

“Outcomes” is a strange word. Think of outcomes as goals. In particular, programs need SMART goals. Programs with SMART goals are more likely to be successful and efficient than programs with vague, cloudy goals.

SMART goals are:

  • S = Specific. Goals should be as specific as possible. This helps the direct service staff who are doing the day-to-day work stay focused and keep their eye on the prize.
  • M = Measurable. You’re more likely to stay on track when you measure your progress towards your goals.
  • A = Attainable. Dream big and be optimistic, but remember that good intentions aren’t enough. It’s hard for staff to commit to goals that feel out of reach.
  • R = Realistic. All programs hit roadblocks from time to time, like staff turnover or funding challenges. Set goals that are do-able given any constraints you might be facing.
  • T = Timely. You might have several short-term goals and several long-term goals for the program. Set timeframes to keep everyone on the same page and focused on the same goals.

SMART goals are, well, smart! They keep everyone focused on the same priorities, from direct service staff to senior management.
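
If it helps to see the checklist in a more concrete form, here’s a minimal, hypothetical Python sketch that treats a program goal as a data structure, with one plain-language warning per missing SMART criterion. The field names, warning messages, and example attendance goal are my own illustration, not anything from a real program.

    from dataclasses import dataclass
    from datetime import date, timedelta

    # Hypothetical sketch: record a program goal so that each SMART
    # criterion is an explicit field instead of an afterthought.
    @dataclass
    class ProgramGoal:
        description: str   # S: specific statement of what will change
        indicator: str     # M: what you will actually measure
        baseline: float    # A/R: where the program is starting from
        target: float      # M/A: the level that counts as success
        deadline: date     # T: when progress will be reviewed

        def smart_gaps(self) -> list[str]:
            """Return plain-language warnings about missing SMART pieces."""
            gaps = []
            if not self.description.strip():
                gaps.append("No specific description (S).")
            if not self.indicator.strip():
                gaps.append("No measurable indicator (M).")
            if self.target <= self.baseline:
                gaps.append("Target doesn't move beyond the baseline (A/R).")
            if self.deadline <= date.today():
                gaps.append("Deadline isn't in the future (T).")
            return gaps

    # Example: an attendance goal for a made-up after-school program.
    goal = ProgramGoal(
        description="Increase average daily attendance in the homework club",
        indicator="average students per day, from sign-in sheets",
        baseline=18.0,
        target=25.0,
        deadline=date.today() + timedelta(days=180),
    )
    print(goal.smart_gaps() or "Every SMART criterion is filled in.")

Nothing about this requires code, of course; the same checklist works just as well on a whiteboard. The point is that each letter maps to a concrete question you can answer about the goal.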

– Ann Emery