New-ish Charts for Evaluation

You know the drill: Better charts = better communication = better understanding = better decision making. Whether you’re trying to highlight the most important findings, simplify that lengthy report, or just get someone to open your report in the first place, charts can be one of your strongest communication tools.

Ready to move beyond the typical pie chart or line chart? Today I’m covering 4 awesome charts that are under-used (but extremely useful!) in evaluation.

Social Network Maps

Johanna Morariu’s map of the #eval13 hashtag

Although social network maps aren’t brand new to evaluation (read about them on aea365 here), I had to mention them because I’m still surprised how many evaluators aren’t using social network maps.

Social network maps help you understand relationships between organizations, people, or even conference attendees. But beware – social network maps aren’t for everyone: they only earn their keep when the relationships themselves are what you need to understand.

Want to create your own? Check out Johanna Morariu’s tutorial on using NodeXL, a free Excel plug-in.

Tree Maps

Tree maps are for hierarchical or nested data, and they’re great for showing part-to-whole patterns. 

Here’s an example from Innovation Network’s State of Evaluation research, in which Johanna Morariu, Kat Athanasiades, and I examined the proportion of nonprofits demonstrating promising evaluation capacities and behaviors:

Sample treemap from State of Evaluation 2012

(Can you imagine that same data in a bar chart? It just wouldn’t work; all the relationships between nested variables would be lost.)
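If you build charts in R, here’s a minimal sketch of a treemap using the treemap package (the data below are made up for illustration, not the State of Evaluation figures):

```r
# Minimal treemap sketch in R using the treemap package
# (hypothetical data -- not the State of Evaluation figures)
library(treemap)  # install.packages("treemap") if needed

df <- data.frame(
  capacity = c("Evaluation staff", "Evaluation staff",
               "Logic models", "Logic models"),
  status   = c("Promising", "Not yet", "Promising", "Not yet"),
  pct      = c(30, 70, 45, 55)
)

treemap(df,
        index = c("capacity", "status"),  # nesting order: capacity -> status
        vSize = "pct",                    # rectangle area = percentage
        title = "Nonprofit evaluation capacities (hypothetical data)")
```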

Want to learn more? Check out Johanna Morariu’s example that breaks down participants’ gender, age, and whether or not they completed a program. 

Dot Plots

Dot plots are similar to bar charts and clustered bar charts (but in many cases, they’re easier to read and a lot less cluttered).
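For R users, here’s a minimal sketch of a basic dot plot in ggplot2, with made-up data:

```r
# Minimal dot plot sketch in R with ggplot2 (hypothetical data)
library(ggplot2)

df <- data.frame(
  program = c("Tutoring", "Mentoring", "Job training"),
  pct     = c(62, 48, 75)
)

# One dot per category, sorted by value -- the same data a bar
# chart would show, but with far less ink
ggplot(df, aes(x = pct, y = reorder(program, pct))) +
  geom_point(size = 3) +
  labs(x = "Percent of participants", y = NULL) +
  theme_minimal()
```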

Here’s a 5-minute overview about what dot plots can be used for:

(Better) Bar Charts

And don’t forget about the good ol’ bar chart, your go-to chart for most of your datasets.

But not all bar charts are created equal. It’s no longer acceptable to paste that default draft chart straight into your report; you should expect to spend a few minutes cleaning up every single chart to improve its labeling and overall readability.

Once you’ve mastered the basic bar chart, try your hand at one of these newer variations, like a diverging stacked bar chart, floating bar chart, or small multiples bar chart. The bar chart’s versatility makes it the most essential chart for evaluation.
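As one example, here’s a minimal sketch of a small multiples bar chart in ggplot2 (hypothetical data):

```r
# Minimal small-multiples bar chart sketch in R with ggplot2
# (hypothetical data)
library(ggplot2)

df <- data.frame(
  site    = rep(c("Site A", "Site B", "Site C"), each = 2),
  outcome = rep(c("Enrolled", "Completed"), times = 3),
  pct     = c(80, 55, 70, 60, 90, 75)
)

# facet_wrap() draws one small panel per site, so readers can
# compare patterns across sites at a glance
ggplot(df, aes(x = outcome, y = pct)) +
  geom_col() +
  facet_wrap(~ site) +
  labs(x = NULL, y = "Percent") +
  theme_minimal()
```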

Have you used any of these charts for evaluation purposes? Are there other new-ish charts you think the evaluation world should be aware of? Please share your ideas with the community!

Making Evaluation Happen When you are not in the Room: Part 1, Starting From Scratch [Guest post by Patrick Germain]

This is the first of a three-part series on how internal evaluators can think about building their organization’s evaluation capacity and sustainability, based on a talk of the same name at Eval13.


Any evaluator, internal or external, working to incorporate evaluative practices into nonprofit organizations must engage a wide variety of staff and systems in the design, implementation, and management of those practices.  The success of those efforts will be decided to a large extent by how non-evaluators are brought into the evaluation tent, and how evaluation is integrated into administrative and service delivery systems.  But how do we even begin?

Starting from Scratch

There are three main steps to coming up with any kind of strategy, including a strategy to build evaluation capacity.

1) Understand the context

Without knowing where you are starting, it is very hard to set realistic goals. So before you even start on your journey to build evaluation capacity, you have to know what you are working with. Get to know the people you will be working with, the constraints and requirements, and the values and priorities of the organization. Conduct a SWOT analysis. Determine who your allies will be and where your largest barriers will arise. What will the culture of the organization support, and what is anathema to it? Much like a body will reject any transplant that is incompatible with it, an organization will respond poorly to an intervention that doesn’t resonate with its culture.

2) Define your destination and your path

Saying you want to ‘build evaluation capacity’ is not a good enough goal.  What does that mean?  What does that even look like? And how are you going to get there? What are interim benchmarks you can use to determine progress?

I have found three general strategies that have worked well for me: (1) make sure leadership is setting clear expectations for staff participation in evaluation activities, and holding them accountable for it, (2) start working with the high performers and people who already ‘get’ evaluation to create easy wins and visible successes, and (3) focus on the priorities of the people with influence – by convincing them of the value of evaluation, they will begin to shift the center of gravity in the organization closer to evaluation values.

3) Prepare the foundation

What is the bare minimum in resource needs for you to accomplish your goal?  (Hopefully you were clear about resource needs before you even took the job.)  This is going to be different for every situation, but we probably all know the feeling of not having enough resources to accomplish our goals.

For me, these things recently included: technology, training for evaluation staff, time commitment from people throughout the organization, and coworkers who would support me if I got backed into a corner.  Some of these things I had to get budgetary approval for, but most of them were more about building strong and trusting relationships.  I had to be transparent about my intentions and manage everyone’s expectations about what they were expected to give, and what they could expect to get from working with me.  The first couple of months were more about creating strong relationships than about doing any ‘real’ evaluation work.

Patrick Germain

What strategies have worked for you?  What have your pitfalls been when starting a new capacity building effort?

Next post, I’ll discuss how to create momentum around evaluation capacity building efforts.


Patrick Germain is the Director of Strategy and Evaluation at Project Renewal, a large homeless services organization in New York City, and the President of the New York Consortium of Evaluators, the local AEA affiliate.

The Conference is Over, Now What? Professional Development for Novice Evaluators

At the American Evaluation Association’s annual conference in October 2013, I led a roundtable titled “The Conference is Over, Now What? Professional Development for Novice Evaluators.” We discussed ways that novices can deepen their knowledge, build their skills, socialize with other evaluators, and get involved in leadership positions. I compiled the notes here so more people can benefit from these resources.

Here are the best resources for novice evaluators:

aea365

This is the American Evaluation Association’s daily blog, located at aea365.org. You can read about everything from item response theory to slide design. Confession: I rarely read an entire post. Instead, I skim the posts just to see the title, author, author’s organization, and the main gist of the content. This is a great way to stay up-to-date on the biggest trends in the field.

You should seriously write for aea365, probably 2-3 times a year, even if you’re new to the field. Just make sure you follow the contribution guidelines.

Affiliates and other organizations

AEA is the national-level mothership and there are more than 20 local and regional affiliates. You can find a full listing of affiliates here: http://www.eval.org/p/cm/ld/fid=12. Every affiliate is different. For example, the Washington Evaluators hold monthly brown bags, quarterly happy hours, and an annual holiday party. The Eastern Evaluation Research Society holds an annual 3-day conference. Other affiliates hold virtual book clubs, maintain blogs, or simply hold member meetings via teleconference.

You should join your affiliate. Seriously. The mailing lists are little nuggets of gold and worth every penny of that $25/year membership. The Washington Evaluators, for example, send job announcements almost every day, so you’ll always know which organizations are hiring and expanding. Don’t forget to attend the affiliate events too. (Sometimes people just pay dues but skip all the events, and then they wonder why they’re not meeting anyone. This confuses me.) After a year, start planning small events yourself, like a brown bag. Then, join the Board.

Here are some additional reasons to join affiliates and ideas for getting involved:

There are tons of additional evaluation groups. For example, the Environmental Evaluators Network, led by Matt Keene, holds forums for evaluators interested in environmental issues. If you’re in Washington, DC, the Aspen Institute holds quarterly breakfast panels focused on advocacy evaluation. At Innovation Network, we hold Ask an Evaluator sessions for nonprofit leaders. Tony Fujs and I also attend Data Science DC, Data Visualization DC, and Data Community DC monthly meetups. No matter your city, there are probably lots of events that fit your interests.

Blogs

First, check out evalcentral.com, run by Chris Lysy. Chris pulls in feeds from 60+ evaluation blogs so you’ll get exposed to a diverse set of perspectives. Chris even developed a daily email digest, so you can subscribe once to all 60+ blogs rather than monitoring your subscriptions to all the individual blogs. I suggest setting EvalCentral as one of your homepage tabs (along with your other must-haves like Gmail and Pandora) so it’s there every time you log into your computer. And again, I rarely read an entire blog post but I skim everything for the title, author, and main gist of what they’re talking about.

Second, check out AEA’s listing of evaluators and evaluation organizations who blog: http://www.eval.org/p/cm/ld/fid=71

I started blogging after watching Chris Lysy’s Ignite presentation at the 2011 AEA conference. Here’s Chris’ Ignite, which outlines just a few of the infinite reasons why evaluators should blog:

Coffee Break webinars

Coffee Break webinars are just 20 minutes long, so they’re a perfect way to squeeze in some quick professional development in the middle of a busy work day. The best part? They’re free for AEA members. I like to sign up for topics that I know nothing about. After 20 minutes, I’m not an expert, but at least I’ve got a basic understanding of that flavor of evaluation.

Conferences

Evaluation conferences include:

Do you know of additional evaluation conferences? Please link to them in the comments section below.

I also like to attend non-evaluation conferences to hear how non-evaluators are describing our work (they have completely different lingo and tend to value qualitative data way more than evaluators do).

eStudies

An eStudy is a 3- to 6-hour webinar run by AEA. eStudies are like mini grad school courses because they go in-depth on a particular topic (as opposed to 20-minute Coffee Break webinars, which just provide an overview of a topic). eStudies are broken into 90-minute chunks and there’s typically a homework assignment between each segment to help you practice your new skills.

For example, I participated in an eStudy about nonparametric statistics in which the instructor covered about 20 different nonparametric statistics, when to use each one, and how to perform the calculations in SPSS. We even got to keep her slides, which were full of step-by-step SPSS screenshots. Almost two years later, I still pull out my eStudy notes whenever I need to use some nonparametric statistics.
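That eStudy used SPSS, but if you want a quick taste in R, here’s a minimal sketch of one workhorse nonparametric test, the Wilcoxon rank-sum (also known as the Mann-Whitney U), with made-up scores:

```r
# Wilcoxon rank-sum (Mann-Whitney U) test in base R
# (hypothetical scores for two independent groups)
group_a <- c(12, 15, 9, 20, 14)
group_b <- c(18, 22, 17, 25, 19)

# Compares the two groups without assuming normally distributed data
wilcox.test(group_a, group_b)
```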

Journals

AEA offers two journals, the American Journal of Evaluation and New Directions for Evaluation. Both of these journals are included with your AEA membership. What a steal!

LinkedIn

These days, I can’t imagine an employer not doing a full internet search on new applicants. Make sure your LinkedIn profile has, at the bare minimum, a professional photo, your full work history (including dates), and your education history. You can also use LinkedIn to build your online portfolio (e.g., embedded slideshows from recent conference presentations, links to publications and projects, and your list of certifications).

Want to connect with other evaluators? Some awesome evaluation groups on LinkedIn include:

Do you know of additional evaluation groups on LinkedIn? Share your suggestion in the comments below. Thanks!

Listservs, mailing lists, and newsletters

First, check out EvalTalk: https://listserv.ua.edu/archives/evaltalk.html. This is a traditional listserv that goes directly to your email inbox. Subscribing to EvalTalk is a must (if only to watch the bloodbath as evaluators battle each other online). Make sure you adjust your settings so that you get a daily or weekly digest – otherwise you’ll drown in the sheer volume of messages.

Second, subscribe to mailing lists and newsletters specific to your client projects. Whenever I begin a new project, I search the client’s website and subscribe to everything I can (like their Twitter feed, email newsletter, and blog). As a consultant, I only see one slice of their work. Subscribing to all of their updates helps me get a fuller picture of their work, so I can make sure the evaluation fits their organization’s culture and needs.

Thought Leaders Discussion Series

AEA’s Thought Leaders Discussion Series is like a big message board to debate bigger-picture, theoretical issues in the field. Each series is led by a different person and has a different flavor.

Topical Interest Groups (TIGs)

Topical Interest Groups (TIGs) are known as affinity groups in other professional associations. You get to select five TIGs when you join AEA, and you can change your selection at any time. Each TIG is different: different sizes, different leadership and committee structures, and different business meetings. I suggest attending business meetings for multiple TIGs at each conference. See which culture fits you best. After a few years, get more involved by running for a leadership position.

Twitter

Just getting started on Twitter? Here’s my list of 275+ evaluators and 80+ evaluation organizations who are using Twitter. Use #eval13 to tweet about that year’s AEA conference (not #AEA13 – the poor folks at the American Equine Association will get confused). Use #eval for all your regular evaluation-related content.

Here’s Johanna Morariu’s social network map of the #eval13 hashtag. There’s a huge online evaluation community. What are you waiting for?!

White papers and other gray literature

There are approximately 8000 evaluators in the American Evaluation Association. I estimate that maybe… 5%?… aim to publish articles in academic journals. Most of us are practitioners and consultants (not academics, theorists, or professors). White papers and other gray literature are a great way to learn about our work, our insights, and our tips. For examples, check out innonet.org/research and evaluationinnovation.org/publications.

Additional resources

What are your favorite resources? Which resources were most valuable during your first few years in the field? And, most importantly–do you have different viewpoints on any of the resources I described? Share your perspectives! I’ve presented one opinion and there are many more to add to the mix.

Four Steps: Social Network Analysis by Twitter Hashtag with NodeXL [Guest post by Johanna Morariu]

Note from Ann: Today’s guest post is from Johanna Morariu, Director of Innovation Network, AEA DVRTIG Chair, and dataviz aficionado.

Basic social network analysis is something EVERYONE can do. So let’s try out one social network analysis tool, NodeXL, and take a peek at the Twitter hashtag #eval13.

Using NodeXL (a free Excel plug-in) I will demonstrate step-by-step how to do a basic social network analysis (SNA). SNA is a dataviz approach for data collection, analysis, and reporting. Networks are made up of nodes (often people or organizations) and edges (the relationships or exchanges between nodes). The set of nodes and edges that make up a network form the dataset for SNA. Like other types of data, there are quantitative metrics about networks, for example, the overall size and density of the network.

There are four basic steps to creating a social network map in NodeXL: get NodeXL, open NodeXL, import data, and visualize.

Do you want to explore the #eval13 social network data? Download it here.
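If you’d rather script your analysis than work in Excel, here’s a minimal sketch of the same basic ideas in R with the igraph package (the handles and mentions below are made up):

```r
# Minimal social network analysis sketch in R using igraph
# (hypothetical #eval13 mention data -- NodeXL itself is Excel-based)
library(igraph)

# Edges: each row is one Twitter mention (node -> node)
edges <- data.frame(
  from = c("@evaluator_a", "@evaluator_b", "@evaluator_c", "@evaluator_b"),
  to   = c("@evaluator_b", "@evaluator_c", "@evaluator_a", "@evaluator_a")
)

g <- graph_from_data_frame(edges, directed = TRUE)

vcount(g)        # network size: number of nodes
ecount(g)        # number of edges (mentions)
edge_density(g)  # density: share of possible ties that exist

plot(g)          # quick exploratory visualization
```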

Here’s where SNA gets fun—there is a lot of value in visually analyzing the network. Yes, your brain can provide incredible insight into the analysis process. In my evaluation consulting experience, the partners I have worked with have consistently benefited more from the exploratory, visual analysis than from reviewing the quantitative metrics. Sure, it is important to know things like how many people are in the network, how dense the relationships are, and other key stats. But for real-world applications, it is often more important to examine how pivotal players relate to each other relative to the overall goals they are trying to achieve.

So here’s your challenge—what do you learn from analyzing the #eval13 social network data? Share your visualizations and your findings!

Eval13: 3000+ evaluators, 800+ sessions, and way too little time to soak it all in

Last week, more than 3000 evaluators descended on my hometown of Washington, DC for the American Evaluation Association’s annual conference. I learned this much + slept this much = rockstar conference.

#omgMQP

I had the pleasure of spending Monday and Tuesday in Michael Quinn Patton’s Developmental Evaluation workshop. Due 10% to my bad vision and 90% to being starstruck, I sought out front-row seats:

Along with many other nuggets of gold, MQP shared the Mountain of Accountability, a simple visualization of a Maslow-style hierarchy for organizations. (Start with the basics like auditing, personnel review, and outputs; then progress to typical program evaluation; then progress to developmental evaluation and strategic learning.) This visual was a fan favorite; the iPads and iPhones were flying around as everyone tried to snap a picture. Anyone else think that MQP would be a great addition to the dataviz TIG?

My biggest takeaway? Developmental evaluation is probably the future of evaluation, or at least the future of my evaluation career. Also, many evaluators wouldn’t call this approach “evaluation,” which means I’m not an evaluator, but an “evaluation facilitator.” Time to order new business cards!

#thumbsupviz

On Tuesday night I had Dataviz Drinks with Stephanie Evergreen, Tania Jarosewich, Andy Kirk, Johanna Morariu, Jon Schwabish, and Robert Simmon, along with a few more poor souls who had to listen to our endless enthusiasm about charts, fonts, and other things “worth staying up late for.” We’ve each been trying to reshape the dataviz community from one of frequent criticism to one of encouragement and peer learning (e.g., the Dataviz Hall of Fame). A few beers later, the #thumbsupviz hashtag was born.

Stay tuned for our growing gallery of superb visualizations at thumbsupviz.com.

omg Factor Analysis…

On Wednesday I attended a pre-conference workshop about factor analysis. I learned the approach in grad school a few years ago, have only used it twice, and wanted to brush up my skills. The instructor provided a wealth of resources:

My biggest takeaway? Ouch. My brain was hurting. Leave the factor analysis to the experts because 99% of us are doing it wrong anyway. You don’t have to tell me twice!

Performance Management & Evaluation: Two Sides of the Same Coin

On Wednesday afternoon, I gave an Ignite presentation with my former supervisor and performance management expert, Isaac Castillo. Paired Ignites are rarely attempted, and I’m glad we took a risk. I had a lot of fun giving this talk. Stay tuned for future collaborations from Isaac and me!

Check out our slides and the recording of our presentation:

Excel Elbow Grease: How to Fool Excel into Making (Pretty Much) Any Chart You Want

On Thursday morning, I shared four strategies for making better evaluation charts: 1) adjusting default settings until your chart passes the Squint Test; 2) building two charts in one; 3) creating invisible bars; and 4) really really exploiting the default chart types, like using stacked bars to create a timeline or using a scatter plot to create a dot plot.

Here are the slides:

The section about dot plots was pretty popular, so I recorded it later:

I thought the presentation went okay, but afterwards, an audience member came up to me and asked, “So if I wanted to make a different type of chart in Excel, like anything besides a typical bar chart, how would I do it? What could I make?” “That’s what I just spent the last 45 minutes showing you.” “No I mean, if I wanted to make one of these in Excel, could I do it?” “Weren’t you in the audience for the presentation I just did?” “Yes, that would be a cool presentation, you should show us how to make those charts in Excel.” Thanks for the great idea, buddy. I’ll submit that idea to next year’s conference. :)

East-coast happy hour

For the second year in a row, the east-coast AEA affiliates got together for a joint happy hour on Thursday night. Good vibes and familiar faces.


The Washington Evaluators, Baltimore Area Evaluators, New York City Consortium of Evaluators, and the Eastern Evaluation Research Society


The Conference is Over, Now What? Professional Development for Novice Evaluators

On Friday afternoon I led a roundtable with tips for novice evaluators. The discussion was awesome, especially the great chats I had with people afterwards. I’m going to write a full post recapping that session. Stay tuned!

How to Climb the R Learning Curve Without Falling Off the Cliff: Advice from Novice, Intermediate, and Advanced R Users

On Saturday morning I had the pleasure of presenting with a former teammate, Tony Fujs, and my new teammate, Will Fenn. Tony dazzled the audience with strategies for automating reports and charts with just a few lines of R code, and Will shared tips to help novices avoid falling off the learning curve cliff. Check out their resources and tips in this handout.
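I won’t put words in Tony’s mouth, but to give a flavor of that kind of automation, here’s a minimal sketch that loops over sites and saves one chart per site (the data, sites, and file names are hypothetical):

```r
# Minimal chart-automation sketch in R with ggplot2
# (hypothetical data; saves one PNG per site)
library(ggplot2)

df <- data.frame(
  site    = rep(c("Site A", "Site B"), each = 3),
  outcome = rep(c("Enrolled", "Completed", "Employed"), times = 2),
  pct     = c(80, 55, 40, 70, 60, 45)
)

# One loop produces a cleaned-up chart for every site -- no
# copying and pasting chart by chart
for (s in unique(df$site)) {
  p <- ggplot(subset(df, site == s), aes(x = outcome, y = pct)) +
    geom_col() +
    labs(title = s, x = NULL, y = "Percent") +
    theme_minimal()
  ggsave(paste0(gsub(" ", "_", s), ".png"), p, width = 5, height = 4)
}
```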


Tony Fujs (left) and Will Fenn (right)

I thought the presentation went okay, but afterwards, an audience member commented, “It would be really cool if you got some evaluators together to show us what kinds of things are possible in R.” “Umm yep, that’s what we just did, Will and Tony showed how to automate reports and create data visualizations in R.” “Yep exactly, that would be a great panel, you could get several evaluators together and show how to automate reports and make data visualizations in R.” “Did you see the panel we just did?” “Yeah you should put a panel together like that.” Okay thanks, I’ll consider it. :)

Evaluation Blogging: Improve Your Practice, Share Your Expertise, and Strengthen Your Network

Dozens of evaluators have influenced and guided my blogging journey, and I was fortunate to co-present with three of them on Saturday: Susan Kistler, Chris Lysy, and Sheila Robinson. I first started blogging after watching Chris’ Ignite presentation at Eval11, Susan’s initial encouragement kept me going, and Sheila provides a sounding board for my new ideas.


Left to right: Susan Kistler, Chris Lysy, and Sheila B. Robinson

Can you tell we presented on Saturday morning?! Chris and I arrived early to an empty ballroom. I almost panicked, but instead we started laughing hysterically, and then a second person arrived. Close call!


By the time we started, we drew a good crowd of 30-40 bloggers and soon-to-be bloggers. Same time next year??

Evaluation Practice in the Early 21st Century

Where have we come from, and where are we headed? Evaluators have accomplished some amazing things, and the future is bright. Patrick Germain and Michelle Portlock, evaluation directors at nonprofit organizations, shared strategies for making evaluation happen when you are not in the room:


For me, the mark of a good presentation is when the evaluator shows vs. tells us something new. Kim Sabo Flores, Chad Green, Robert Shumer, David White, Javier Valdes, and Manolya Tanyu talked about incorporating youth voices into policymaking decisions. The best part: the panelists invited a youth participant to speak alongside them on the panel so that she could share her experiences firsthand.


They taught us about youth presence vs. participation, and then they showed us about youth presence vs. participation. Well done!

A dataviz panel shared a brief history of dataviz; strategies for displaying qualitative data; and ideas for using graphic recording:

One of many, many graphic recording examples shared by Jara Dean-Coffey

The Innovation Network team is pretty fond of graphic recording too, and Kat Athanasiades even recorded an entire advocacy evaluation panel. Thanks to Cindy Banyai for capturing this awesome video!

And just in case you’re not familiar with my plans for our field…

Wave goodbye to the Dusty Shelf Report!

Lookin’ good, Eval! See you next year in Denver!