Category Archives: Guest Posts

Guest posts by evaluators and non-evaluators.

Making Evaluation Happen When You Are Not in the Room: Part 1, Starting From Scratch [Guest post by Patrick Germain]

This is the first of a three-part series on how internal evaluators can think about building their organization’s evaluation capacity and sustainability. It is based on a talk of the same name at Eval13.


Any evaluator, internal or external, working to incorporate evaluative practices into nonprofit organizations must engage a wide variety of staff and systems in the design, implementation, and management of those practices.  The success of those efforts will be decided to a large extent by how non-evaluators are brought into the evaluation tent, and how evaluation is integrated into administrative and service delivery systems.  But how do we even begin?

Starting from Scratch

There are three main steps to coming up with any kind of strategy, including a strategy to build evaluation capacity.

1)   Understand the context

Without knowing where you are starting, it is very hard to set realistic goals.  So before you even start on your journey to build evaluation capacity, you have to know what you are working with.  Get to know the people you will be working with, the constraints and requirements, and the values and priorities of the organization.  Conduct a SWOT analysis.  Determine who your allies will be and where your largest barriers will arise.  What will the culture of the organization support, and what is anathema to it?  Much like a body will reject a transplant that is incompatible with it, an organization will respond poorly to an intervention that doesn’t resonate with its culture.

2)   Define your destination and your path

Saying you want to ‘build evaluation capacity’ is not a good enough goal.  What does that mean?  What does that even look like? And how are you going to get there? What are interim benchmarks you can use to determine progress?

I have found three general strategies that have worked well for me: (1) make sure leadership is setting clear expectations for staff participation in evaluation activities, and holding them accountable for it, (2) start working with the high performers and people who already ‘get’ evaluation to create easy wins and visible successes, and (3) focus on the priorities of the people with influence – by convincing them of the value of evaluation, they will begin to shift the center of gravity in the organization closer to evaluation values.

3)   Prepare the foundation

What is the bare minimum in resource needs for you to accomplish your goal?  (Hopefully you were clear about resource needs before you even took the job.)  This is going to be different for every situation, but we probably all know the feeling of not having enough resources to accomplish our goals.

For me, these things recently included: technology, training for evaluation staff, time commitment from people throughout the organization, and coworkers who would support me if I got backed into a corner.  Some of these things I had to get budgetary approval for, but most of them were more about building strong and trusting relationships.  I had to be transparent about my intentions and manage everyone’s expectations about what they were expected to give, and what they could expect to get from working with me.  The first couple of months were more about creating strong relationships than about doing any ‘real’ evaluation work.

Patrick Germain

What strategies have worked for you?  What have your pitfalls been when starting a new capacity building effort?

Next post, I’ll discuss how to create momentum around evaluation capacity building efforts.

Patrick Germain is the Director of Strategy and Evaluation at Project Renewal, a large homeless services organization in New York City, and is the President of the New York Consortium of Evaluators, the local AEA affiliate.

Four Steps: Social Network Analysis by Twitter Hashtag with NodeXL [Guest post by Johanna Morariu]

Note from Ann: Today’s guest post is from Johanna Morariu, Director of Innovation Network, AEA DVRTIG Chair, and dataviz aficionado.

Basic social network analysis is something EVERYONE can do. So let’s try out one social network analysis tool, NodeXL, and take a peek at the Twitter hashtag #eval13.

Using NodeXL (a free Excel plug-in) I will demonstrate step-by-step how to do a basic social network analysis (SNA). SNA is a dataviz approach for data collection, analysis, and reporting. Networks are made up of nodes (often people or organizations) and edges (the relationships or exchanges between nodes). The set of nodes and edges that make up a network form the dataset for SNA. Like other types of data, there are quantitative metrics about networks, for example, the overall size and density of the network.
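To make the node-and-edge model concrete, here is a minimal sketch in plain Python of the kind of network-level metrics described above (size and density). NodeXL computes these inside Excel; this standalone function and its names are illustrative assumptions, not NodeXL’s actual interface.

```python
def network_metrics(edges):
    """Given an undirected network as a list of (node_a, node_b) pairs,
    return basic network-level stats: size (node count), edge count,
    and density (actual edges / maximum possible edges)."""
    # Collect the unique nodes that appear in any edge
    nodes = {n for edge in edges for n in edge}
    n = len(nodes)
    # In an undirected network of n nodes, at most n*(n-1)/2 edges can exist
    possible = n * (n - 1) / 2
    density = len(edges) / possible if possible else 0.0
    return {"size": n, "edges": len(edges), "density": density}

# Hypothetical example: a tiny Twitter reply network among four users
edges = [("ann", "johanna"), ("ann", "kevin"),
         ("johanna", "kevin"), ("ann", "corey")]
print(network_metrics(edges))
# 4 nodes, 4 of 6 possible edges, so density is about 0.67
```

A sparse hashtag network pulled from Twitter would typically show a far lower density than this toy example, which is part of what makes the visual analysis step below so informative.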

There are four basic steps to creating a social network map in NodeXL: get NodeXL, open NodeXL, import data, and visualize.

Do you want to explore the #eval13 social network data? Download it here.

Here’s where SNA gets fun—there is a lot of value in visually analyzing the network. Yes, your brain can provide incredible insight to the analysis process. In my evaluation consulting experience, the partners I have worked with have consistently benefited more from the exploratory, visual analysis than from reviewing the quantitative metrics. Sure, it is important to know things like how many people are in the network, how dense the relationships are, and other key stats. But for real-world applications, it is often more important to examine how pivotal players relate to each other relative to the overall goals they are trying to achieve.

So here’s your challenge—what do you learn from analyzing the #eval13 social network data? Share your visualizations and your findings!

Building an Evaluation Culture with Top Ten Lists [Guest post by Corey Newhouse]

Hi, I’m Corey Newhouse, the Founder and Principal of Public Profit. We help public service organizations measure and manage what matters. As the leader of an 8-person evaluation firm, I think often about staff training and common organizational practices.

Anyone who has been in the field knows that there are hundreds of tips and tricks that we pick up along the way, ranging from the global (“Don’t falsify your data”) to the very local (“Meg the attendance clerk always has the file you want”).

And, to complex-ify things, one person’s “must-do-every-time-without-fail” tip is another person’s “what-the-heck-are-you-talking-about?” non-tip. So what’s an evaluation team to do?

Our team recently developed a “Top Ten Tips for Evaluation at Public Profit” in order to codify the most important of these practices for our work. To develop the list, each member of the team drafted as many tips as they wanted, and we discussed the tips as a group. We were able to whittle our list down to a set of tips that we agreed were essential to our work.


The exercise was hugely helpful for three reasons:

  • We were relieved to find that many of our tips were similar, suggesting that our team was already pretty good at sharing good ideas with one another.
  • The tip nominations process stimulated important conversations about the ways in which we work together, such as whether it was OK to ask for uninterrupted time to complete a task. (And leading to the tip, “Ask for what you need, even if it is time to focus.”)
  • We used our tips list to create a professional development calendar, in which some of the more complex tips were covered in a 30-60 minute training.

Our tips are now part of our data operations manual, and a key part of our staff on-boarding process. We’ll update the list every year or so to make sure that our best thinking is reflected.

The best Ignite presentations are when the presenter stumbles over words [Guest post by Kevin Flora]

Kevin Flora is an evaluator and blogger who watched, recorded, edited, and uploaded all 56 Ignite sessions from the 2012 AEA conference.

Ignite sessions… a presentation format started by O’Reilly that I thought would never stick around. The more I conduct my own presentations, the more I feel that audience engagement and personal enthusiasm are a direct reflection of my content. If this theory holds true, then the Ignite format should be more prevalent, given how easily it produces laughs and forces the presenter to stay on top of their information. Essentially, there are 20 slides. Each slide advances automatically every 15 seconds whether the presenter is ready or not. Are you doing the math? Yes, it’s a 5-minute presentation.

The one thing I love about this format is the first-time presenter! When initially signing up, the thought is, “How hard could this be? Five minutes? That’s easy enough.” Well… I have found that it takes just as long to prepare for these 5 minutes as it does for a 45-minute presentation, if not longer. Everyone wants to look at notes or the presenter view on the PowerPoint screen, but the best presentations are when the presenter stumbles over words, gets behind (or ahead) on their timing, and forgets why they used a particular image in their slide deck. I have heard that the more Ignite presentations an individual does, the less interesting they are to the audience, because the structure becomes almost perfected. These are meant to be fun, with pictures, stories, and yes… screw-ups once in a while.

Ignite presentations can be used for multiple purposes. My favorite Ignite session was that of Michael Szanyi at the 2012 American Evaluation Association (AEA) conference. His use of interpretive dance brought the data visualization and reporting topical interest group (DVR TIG) to their feet and moved some to tears. Szanyi produced an unrivaled passion for how his form of art and expression should be used in visualizing data. Szanyi not only memorized his slides, but the timing, movement, and slide descriptions. After seeing his 5-minute production, I saw where the future of Ignite presentations and evaluation was headed. This glimpse of our future was a sight to see. Here is Szanyi’s presentation:

After watching 56 presentations, it is difficult to find a comparison to Szanyi’s performance, so I will mention a couple of other neat ideas. The DVR TIG conducted their entire business meeting with the 5-minute presentation format (watch them all here), milking cows was related to strategic learning (here), and an improv Ignite was attempted (which made for a hilarious 5 minutes; thank you, Chris Lysy). Check out Chris Lysy’s improv Ignite here:

The Ignite format is both interesting and fun… informative, but short… and effectively cultivates an atmosphere conducive to questions, collaboration, and further discussion on certain topics. All 2012 AEA Conference Ignite videos can be found on the AEA YouTube channel.

Connect with Kevin Flora through his blog or @edmatics on Twitter!

Exploring the Non-Profit Paradox – Evaluation and Non-Profits [Guest post by Jamie Clearfield]


Jamie Clearfield

Hello, I’m Jamie Clearfield from the Collaborative for Evaluation and Assessment Capacity (CEAC) at the University of Pittsburgh.

Having had the privilege to work with a range of community-based organizations (CBOs) and non-profit organizations in the United States and internationally, I have been continually struck both by the ingenuity of organizations in working towards program goals and by a lack of understanding of the role of evaluation in that work.

I get it – small community organizations and non-profits are often (if not nearly always) in a fight for their survival – another crisis is always at the end of another email, phone call, or text message. How many times, while working in another country, did I get a text message from the director of a school I worked with saying that the feeding program’s funding had been cut, that we were about to be evicted, or that a program partner had decided to “go in a different direction”? Incorporating evaluation into the existing chaos – particularly when much of the program staff is unsure of what evaluation is, how it is conducted, or what it can mean for the organization – can be a difficult sell, even to the most innovative of organizations. Yet herein lies the paradox for small CBOs and organizations – evaluation can help avoid the ongoing crisis cycle and can help organizations plan and prepare for the days ahead, while also identifying where programs are succeeding or need to be rethought.

For those working in evaluation, this paradox offers many challenges and opportunities to work with CBOs and non-profits, helping them strengthen and expand their programming to a wider audience. The challenges, however, are many – among them: how to reach small organizations that may not understand the need for your services, that may not be able to pay market rates for evaluation, and that may have limited human and social capital to make evaluation a worthwhile endeavor. There is also the challenge of dependency – small organizations may come to rely too heavily on the evaluator to conduct the evaluation year after year, regardless of whether it is the best use of the organization’s or evaluator’s time or resources. Dependency results from a gross misunderstanding of the role of evaluation and whom evaluation really should benefit.

So what are some solutions? There are no cut-and-dried answers to these questions – it is an ongoing process of trial and error. At the heart of many of them is the need to develop strong working relationships between evaluators and CBOs/non-profits. Doing so will not only allow for more candid, honest working relationships in the long run, it can also help CBOs/non-profits understand that they are not alone, and that working with an outside entity (an evaluator) can help develop their community. Evaluation firms and evaluators can open the dialogue with CBOs/non-profits by meeting organizations on their own terms – explaining the role of evaluation in terminology that works for the organization and mapping out, with concrete examples, how evaluation can benefit it. It is not merely a question of marketing your services; it is developing a lasting relationship. This may involve attending events hosted by the CBO, organizing meetings with staff, and so on. It can be a slow process, but a worthwhile one.

Thoughts – how can evaluators form better relationships with CBOs/non-profits that result in stronger, more useful evaluations?