Author Archives: Ann K. Emery

About Ann K. Emery

I design information and share my strategies through keynotes and custom workshops.

New-ish Charts for Evaluation

You know the drill: Better charts = better communication = better understanding = better decision making. Whether you’re trying to highlight the most important findings, simplify that lengthy report, or just get someone to open it in the first place, charts can be one of your strongest communication tools.

Ready to move beyond the typical pie chart or line chart? Today I’m covering 4 awesome charts that are under-used (but extremely useful!) in evaluation.

Social Network Maps

Johanna Morariu’s map of the #eval13 hashtag

Although social network maps aren’t brand new to evaluation (read about them on aea365 here), I had to mention them because I’m still surprised how many evaluators aren’t using social network maps.

Social network maps help you understand relationships between organizations, people, or even conference attendees. But beware – social network maps aren’t for everyone.

Want to create your own? Check out Johanna Morariu’s tutorial on using NodeXL, a free Excel plug-in.
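If you’re curious what’s under the hood of these maps, here’s a tiny Python sketch using made-up retweet pairs (not real #eval13 data) that counts each node’s connections — the “degree” that mapping tools can use to size the dots:

```python
from collections import Counter

# Hypothetical who-retweeted-whom pairs, like a tool might pull from a hashtag
edges = [
    ("ann", "johanna"), ("ann", "chris"), ("susan", "johanna"),
    ("chris", "johanna"), ("tony", "ann"),
]

# Degree = number of connections; the busiest nodes end up biggest on the map
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

for name, d in degree.most_common():
    print(name, d)
```

The names and edges above are invented for illustration; the point is just that a social network map is an edge list plus a handful of per-node statistics like this one.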

Tree Maps

Tree maps are for hierarchical or nested data, and they’re great for showing part-to-whole patterns. 

Here’s an example from Innovation Network’s State of Evaluation research in which Johanna Morariu, Kat Athanasiades and I examined the proportion of nonprofits demonstrating promising evaluation capacities and behaviors:

Sample treemap from State of Evaluation 2012

(Can you imagine that same data in a bar chart? It just wouldn’t work; all the relationships between nested variables would be lost.)

Want to learn more? Check out Johanna Morariu’s example that breaks down participants’ gender, age, and whether or not they completed a program. 
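To make the nesting concrete, here’s a rough Python sketch of the simplest treemap layout, sometimes called “slice-and-dice”: split the width across the top-level groups, then split each group’s height across its children. The categories and numbers below are invented, not the State of Evaluation data:

```python
# Hypothetical nested data: survey responses grouped by capacity area
data = {"Culture": {"Leadership buy-in": 30, "Staff time": 20},
        "Capacity": {"Dedicated budget": 35, "Eval staff": 15}}

def slice_layout(groups, width=100.0, height=100.0):
    """Slice-and-dice treemap: each rectangle's area stays
    proportional to its value, and children stay inside parents."""
    total = sum(sum(kids.values()) for kids in groups.values())
    rects, x = {}, 0.0
    for group, kids in groups.items():
        gw = width * sum(kids.values()) / total       # group's share of the width
        y = 0.0
        for name, value in kids.items():
            gh = height * value / sum(kids.values())  # child's share of the height
            rects[name] = (round(x, 1), round(y, 1), round(gw, 1), round(gh, 1))
            y += gh
        x += gw
    return rects

print(slice_layout(data))
```

Real treemap tools use smarter “squarified” layouts so rectangles don’t get too skinny, but the part-to-whole arithmetic is the same.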

Dot Plots

Dot plots are similar to bar charts and clustered bar charts (but in many cases, they’re easier to read and a lot less cluttered).

Here’s a 5-minute overview about what dot plots can be used for:
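As a toy illustration of the idea — made-up pre/post percentages, rendered as plain text rather than a real chart — each category gets a single row with two dots on a shared scale, which is exactly why dot plots feel less cluttered than clustered bars:

```python
# Hypothetical pre/post scores (percent) for three survey items
scores = {"Item A": (40, 75), "Item B": (55, 60), "Item C": (30, 85)}

def dot_plot_row(label, pre, post, scale=2):
    """One text row per category: 'o' marks pre, '*' marks post.
    scale compresses the 0-100 axis so the row fits on screen."""
    row = [" "] * (100 // scale + 1)
    row[pre // scale] = "o"
    row[post // scale] = "*"
    return f"{label:8s}|{''.join(row)}|"

for label, (pre, post) in scores.items():
    print(dot_plot_row(label, pre, post))
```

Two dots per row instead of two bars per cluster: same data, far less ink.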

(Better) Bar Charts

And don’t forget about the good ol’ bar chart, your go-to chart for most of your datasets.

But not all bar charts are created equal. It’s no longer acceptable to paste that default draft chart straight into your report; you should expect to spend a few minutes cleaning up every single chart to improve its labeling and overall readability.

Once you’ve mastered the basic bar chart, try your hand at one of these newer variations, like a diverging stacked bar chart, floating bar chart, or small multiples bar chart. The bar chart’s versatility makes it the most essential chart for evaluation.

Have you used any of these charts for evaluation purposes? Are there other new-ish charts you think the evaluation world should be aware of? Please share your ideas with the community!

The Conference is Over, Now What? Professional Development for Novice Evaluators

At the American Evaluation Association’s annual conference in October 2013, I led a roundtable titled “The Conference is Over, Now What? Professional Development for Novice Evaluators.” We discussed ways that novices can deepen their knowledge, build their skills, socialize with other evaluators, and get involved in leadership positions. I compiled the notes here so more people can benefit from these resources.

Here are the best resources for novice evaluators:

aea365

This is the American Evaluation Association’s daily blog, located at aea365.org. You can read about everything from item response theory to slide design. Confession: I rarely read an entire post. Instead, I skim each post for the title, author, author’s organization, and the main gist of the content. This is a great way to stay up-to-date on the biggest trends in the field.

You should seriously write for aea365, probably 2-3 times a year, even if you’re new to the field. Just make sure you follow the contribution guidelines.

Affiliates and other organizations

AEA is the national-level mothership and there are more than 20 local and regional affiliates. You can find a full listing of affiliates here: http://www.eval.org/p/cm/ld/fid=12. Every affiliate is different. For example, the Washington Evaluators hold monthly brown bags, quarterly happy hours, and an annual holiday party. The Eastern Evaluation Research Society holds an annual 3-day conference. Other affiliates hold virtual book clubs, maintain blogs, or simply hold member meetings via teleconference.

You should join your affiliate. Seriously. The mailing lists are little nuggets of gold and worth every penny of that $25/year membership. The Washington Evaluators, for example, send job announcements almost every day, so you’ll always know which organizations are hiring and expanding. Don’t forget to attend the affiliate events too. (Sometimes people pay dues but skip all the events, and then wonder why they’re not meeting anyone. This confuses me.) After a year, start planning small events yourself, like a brown bag. Then, join the Board.

Here are some additional reasons to join affiliates and ideas for getting involved:

There are tons of additional evaluation groups. For example, the Environmental Evaluators Network, led by Matt Keene, holds forums for evaluators interested in environmental issues. If you’re in Washington, DC, the Aspen Institute holds quarterly breakfast panels focused on advocacy evaluation. At Innovation Network, we hold Ask an Evaluator sessions for nonprofit leaders. Tony Fujs and I also attend Data Science DC, Data Visualization DC, and Data Community DC monthly meetups. No matter your city, there are probably lots of events that fit your interests.

Blogs

First, check out evalcentral.com, run by Chris Lysy. Chris pulls in feeds from 60+ evaluation blogs so you’ll get exposed to a diverse set of perspectives. Chris even developed a daily email digest, so you can subscribe once to all 60+ blogs rather than monitoring your subscriptions to all the individual blogs. I suggest setting EvalCentral as one of your homepage tabs (along with your other must-haves like Gmail and Pandora) so it’s there every time you log into your computer. And again, I rarely read an entire blog post but I skim everything for the title, author, and main gist of what they’re talking about.

Second, check out AEA’s listing of evaluators and evaluation organizations who blog: http://www.eval.org/p/cm/ld/fid=71

I started blogging after watching Chris Lysy’s Ignite presentation at the 2011 AEA conference. Here’s Chris’ Ignite, which outlines just a few of the infinite reasons why evaluators should blog:

Coffee Break webinars

Coffee Break webinars are just 20 minutes long, so they’re a perfect way to squeeze in some quick professional development in the middle of a busy work day. The best part? They’re free for AEA members. I like to sign up for topics that I know nothing about. After 20 minutes, I’m not an expert, but at least I’ve got a basic understanding of that flavor of evaluation.

Conferences

Evaluation conferences include:

Do you know of additional evaluation conferences? Please link to them in the comments section below.

I also like to attend non-evaluation conferences to hear how non-evaluators are describing our work (they have completely different lingo and tend to value qualitative data way more than evaluators do).

eStudies

An eStudy is a 3- to 6-hour webinar run by AEA. eStudies are like mini grad school courses because they go in-depth on a particular topic (as opposed to 20-minute Coffee Break webinars, which just provide an overview of a topic). eStudies are broken into 90-minute chunks and there’s typically a homework assignment between each segment to help you practice your new skills.

For example, I participated in an eStudy about nonparametric statistics in which the instructor covered about 20 different nonparametric statistics, when to use each one, and how to perform the calculations in SPSS. We even got to keep her slides, which were full of step-by-step SPSS screenshots. Almost two years later, I still pull out my eStudy notes whenever I need to use some nonparametric statistics.

Journals

AEA offers two journals, the American Journal of Evaluation and New Directions for Evaluation. Both of these journals are included with your AEA membership. What a steal!

LinkedIn

These days, I can’t imagine an employer not doing a full internet search on new applicants. Make sure your LinkedIn profile has, at the bare minimum, a professional photo, your full work history (including dates), and your education history. You can also use LinkedIn to build your online portfolio (e.g., embedded slideshows from recent conference presentations, links to publications and projects, and your list of certifications).

Want to connect with other evaluators? Some awesome evaluation groups on LinkedIn include:

Do you know of additional evaluation groups on LinkedIn? Share your suggestion in the comments below. Thanks!

Listservs, mailing lists, and newsletters

First, check out EvalTalk: https://listserv.ua.edu/archives/evaltalk.html. This is a traditional listserv that goes directly to your email inbox. Subscribing to EvalTalk is a must (if only to watch the bloodbath as evaluators battle each other online). Make sure you adjust your settings so that you get a daily or weekly digest – otherwise you’ll drown in the sheer volume of messages.

Second, subscribe to mailing lists and newsletters specific to your client projects. Whenever I begin a new project, I search the client’s website and subscribe to everything I can (like their Twitter feed, email newsletter, and blog). As a consultant, I only see one slice of their work. Subscribing to all of their updates helps me get a fuller picture of their work, so I can make sure the evaluation fits their organization’s culture and needs.

Thought Leaders Discussion Series

AEA’s Thought Leaders Discussion Series is like a big message board to debate bigger-picture, theoretical issues in the field. Each series is led by a different person and has a different flavor.

Topical Interest Groups (TIGs)

Topical Interest Groups (TIGs) are known as affinity groups in other professional associations. You get to select five TIGs when you join AEA, and you can change your selection at any time. Each TIG is different: different sizes, different leadership and committee structures, and different business meetings. I suggest attending business meetings for multiple TIGs at each conference. See which culture fits you best. After a few years, get more involved by running for a leadership position.

Twitter

Just getting started on Twitter? Here’s my list of 275+ evaluators and 80+ evaluation organizations who are using Twitter. Use #eval13 to tweet about the 2013 AEA conference (not #AEA13 – the poor folks at the American Equine Association will get confused). Use #eval for all your regular evaluation-related content.

Here’s Johanna Morariu’s social network map of the #eval13 hashtag. There’s a huge online evaluation community. What are you waiting for?!

White papers and other gray literature

There are approximately 8000 evaluators in the American Evaluation Association. I estimate that maybe… 5%?… aim to publish articles in academic journals. Most of us are practitioners and consultants (not academics, theorists, or professors). White papers and other gray literature are a great way to learn about our work, our insights, and our tips. For examples, check out innonet.org/research and evaluationinnovation.org/publications.

Additional resources

What are your favorite resources? Which resources were most valuable during your first few years in the field? And, most importantly–do you have different viewpoints on any of the resources I described? Share your perspectives! I’ve presented one opinion and there are many more to add to the mix.

Eval13: 3000+ evaluators, 800+ sessions, and way too little time to soak it all in

Last week, more than 3000 evaluators descended on my hometown of Washington, DC for the American Evaluation Association’s annual conference. I learned this much + slept this much = rockstar conference.

#omgMQP

I had the pleasure of spending Monday and Tuesday in Michael Quinn Patton’s Developmental Evaluation workshop. Due 10% to my bad vision and 90% to being starstruck, I sought out front-row seats:

Along with many other nuggets of gold, MQP shared the Mountain of Accountability, a simple visualization demonstrating a Maslow’s hierarchy for organizations. (Start with the basics like auditing, personnel review, and outputs; then progress to typical program evaluation; then progress to developmental evaluation and strategic learning.) This visual was a fan favorite; the iPads and iPhones were flying around as everyone tried to snap a picture. Anyone else think that MQP would be a great addition to the dataviz TIG?

My biggest takeaway? Developmental evaluation is probably the future of evaluation, or at least the future of my evaluation career. Also, many evaluators wouldn’t call this approach “evaluation,” which means I’m not an evaluator, but an “evaluation facilitator.” Time to order new business cards!

#thumbsupviz

On Tuesday night I had Dataviz Drinks with Stephanie Evergreen, Tania Jarosewich, Andy Kirk, Johanna Morariu, Jon Schwabish, and Robert Simmon, along with a few more poor souls who had to listen to our endless enthusiasm about charts, fonts, and other things “worth staying up late for.” We’ve each been trying to reshape the dataviz community from one of frequent criticism to one of encouragement and peer learning (e.g., the Dataviz Hall of Fame). A few beers later, the #thumbsupviz hashtag was born.

Stay tuned for our growing gallery of superb visualizations at thumbsupviz.com.

omg Factor Analysis…

On Wednesday I attended a pre-conference workshop about factor analysis. I learned the approach in grad school a few years ago, have only used it twice, and wanted to brush up my skills. The instructor provided a wealth of resources:

My biggest takeaway? Ouch. My brain was hurting. Leave the factor analysis to the experts because 99% of us are doing it wrong anyway. You don’t have to tell me twice!

Performance Management & Evaluation: Two Sides of the Same Coin

On Wednesday afternoon, I gave an Ignite presentation with my former supervisor and performance management expert, Isaac Castillo. Paired Ignites are rarely attempted, and I’m glad we took a risk. I had a lot of fun giving this talk. Stay tuned for future collaborations from Isaac and me!

Check out our slides and the recording of our presentation:

Excel Elbow Grease: How to Fool Excel into Making (Pretty Much) Any Chart You Want

On Thursday morning, I shared four strategies for making better evaluation charts: 1) adjusting default settings until your chart passes the Squint Test; 2) building two charts in one; 3) creating invisible bars; and 4) really really exploiting the default chart types, like using stacked bars to create a timeline or using a scatter plot to create a dot plot.

Here are the slides:

The section about dot plots was pretty popular, so I recorded it later:

I thought the presentation went okay, but afterwards, an audience member came up to me and asked, “So if I wanted to make a different type of chart in Excel, like anything besides a typical bar chart, how would I do it? What could I make?” “That’s what I just spent the last 45 minutes showing you.” “No I mean, if I wanted to make one of these in Excel, could I do it?” “Weren’t you in the audience for the presentation I just did?” “Yes, that would be a cool presentation, you should show us how to make those charts in Excel.” Thanks for the great idea buddy, I’ll submit that idea to next year’s conference. 🙂

East-coast happy hour

For the second year in a row, the east-coast AEA affiliates got together for a joint happy hour on Thursday night. Good vibes and familiar faces.

eval13_happy_hour

The Washington Evaluators, Baltimore Area Evaluators, New York City Consortium of Evaluators, and the Eastern Evaluation Research Society

The Conference is Over, Now What? Professional Development for Novice Evaluators

On Friday afternoon I led a roundtable with tips for novice evaluators. The discussion was awesome, especially the great chats I had with people afterwards. I’m going to write a full post recapping that session. Stay tuned!

How to Climb the R Learning Curve Without Falling Off the Cliff: Advice from Novice, Intermediate, and Advanced R Users

On Saturday morning I had the pleasure of presenting with a former teammate, Tony Fujs, and my new teammate, Will Fenn. Tony dazzled the audience with strategies for automating reports and charts with just a few lines of R code, and Will shared tips to help novices avoid falling off the learning curve cliff. Check out their resources and tips in this handout.

tony_will

Tony Fujs (left) and Will Fenn (right)

I thought the presentation went okay, but afterwards, an audience member commented, “It would be really cool if you got some evaluators together to show us what kinds of things are possible in R.” “Umm yep, that’s what we just did, Will and Tony showed how to automate reports and create data visualizations in R.” “Yep exactly, that would be a great panel, you could get several evaluators together and show how to automate reports and make data visualizations in R.” “Did you see the panel we just did?” “Yeah you should put a panel together like that.” Okay thanks, I’ll consider it. 🙂

Evaluation Blogging: Improve Your Practice, Share Your Expertise, and Strengthen Your Network

Dozens of evaluators have influenced and guided my blogging journey, and I was fortunate to co-present with three of them on Saturday: Susan Kistler, Chris Lysy, and Sheila Robinson. I first started blogging after watching Chris’ Ignite presentation at Eval11, Susan’s initial encouragement kept me going, and Sheila provides a sounding board for my new ideas.

awesome_panelists

Left to right: Susan Kistler, Chris Lysy, and Sheila B. Robinson

Can you tell we presented on Saturday morning?! Chris and I arrived early. I almost panicked, but instead Chris and I started laughing hysterically, and then a second person arrived. Close call!

empty_ballroom

By the time we started, we drew a good crowd of 30-40 bloggers and soon-to-be bloggers. Same time next year??

Evaluation Practice in the Early 21st Century

Where have we come from, and where are we headed? Evaluators have accomplished some amazing things, and the future is bright. Patrick Germain and Michelle Portlock, evaluation directors at nonprofit organizations, shared strategies for making evaluation happen when you are not in the room:

eval13_nonprofiteval

For me, the mark of a good presentation is when the evaluator shows vs. tells us something new. Kim Sabo Flores, Chad Green, Robert Shumer, David White, Javier Valdes, and Manolya Tanyu talked about incorporating youth voices into policymaking decisions. The best part: the panelists invited a youth participant to speak alongside them on the panel so that she could share her experiences firsthand.

eval13_youth_voices

They taught us about youth presence vs. participation, and then they showed us about youth presence vs. participation. Well done!

A dataviz panel shared a brief history of dataviz; strategies for displaying qualitative data; and ideas for using graphic recording:

One of many, many graphic recording examples shared by Jara Dean-Coffey

The Innovation Network team is pretty fond of graphic recording too, and Kat Athanasiades even recorded an entire advocacy evaluation panel. Thanks to Cindy Banyai for capturing this awesome video!

And just in case you’re not familiar with my plans for our field…

Wave goodbye to the Dusty Shelf Report!

Lookin’ good, Eval! See you next year in Denver!

Dataviz Challenge #6: Unit Charts

Lately I’ve been feeling let down by summary statistics: the min and max, mean and median, quartiles and standard deviation… They do their job well enough. Summary statistics tell a summary. An aggregate story, bringing all the messy scores together into some sort of cohesion. We grab the averages and stick them in bar charts.

But sometimes we don’t want to summarize, we want to highlight the variety in scores and remind readers that the chart is actually made up of individual people, not just the mean or median. Long live the messy data, the dispersion, the distribution, the spread!

unit_chart_1

I could tell you a few descriptive statistics: min = 26%, max = 100%, Q1 = 64%, Q3 = 83%, median = 74%, mean = 73%, standard deviation = 15%. Or, I could show you the spread in this unit-chart-turned-histogram.

Unit charts are not your new go-to chart. They do not replace bar charts. They are not appropriate for all datasets. They’re best for those few moments when you choose to emphasize individual units of data. A unit could be 1 person, or 10 people, or 1 school, and so on. Units can be represented in circles or squares or triangles. Units can be stacked on top of each other to form a histogram, or they can be plotted along a line.
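Here’s a little Python sketch of the unit-chart-turned-histogram idea, with invented attendance rates standing in for real data — one square per person, grouped by decade, instead of collapsing everyone into a single mean bar:

```python
# Hypothetical attendance rates (%) for 20 participants -- each one a "unit"
rates = [26, 45, 52, 58, 61, 64, 66, 68, 70, 72,
         74, 75, 78, 80, 83, 85, 88, 92, 97, 100]

def unit_histogram(values, bin_width=10):
    """One square per person, stacked into decade bins, so the
    spread stays visible instead of hiding behind an average."""
    bins = {}
    for v in values:
        low = min(v // bin_width * bin_width, 100 - bin_width)  # fold 100 into the 90s
        bins[low] = bins.get(low, 0) + 1
    return {f"{low}-{low + bin_width}%": "■" * n
            for low, n in sorted(bins.items())}

for rng, squares in unit_histogram(rates).items():
    print(f"{rng:>8s} {squares}")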

The dataviz challenge: Re-create the chart in Excel, R, or some other free software program. Then, tweet a screenshot to @annkemery. Bonus: Make a unit chart for your own data. Or, do you emphasize individual differences with other chart types? Share your ideas with the community!

The prize for playing: A professional development opportunity and bragging rights. I’ll post the how-to guide in a couple weeks.

Want to learn more? I’m presenting about charting techniques at the American Evaluation Association’s annual conference on Thursday, October 17, 2013 at 11am in Washington, DC. Hope to see you there!

Dataviz Challenge #5: The Answers!

I’ve been in love with diverging stacked bar charts since I saw Joe Mako’s submission to Cole Nussbaumer’s dataviz challenge last December. Joe made this contest-winning chart. But in Tableau! The amazing but expensive software!

Could I ever create one in Excel?!

Yes! Luckily I’d learned about the Values in Reverse Order feature from Stephanie Evergreen. With Joe’s inspiration and Stephanie’s strategy, I started making these beauties for myself in Excel.

I wanted to share the chart secrets with all of you, so last month, I challenged readers to re-create a diverging stacked bar chart like this one:

diverging_before-after

It looks like I’m not the only one who loves diverging stacked bar charts. Congratulations to the 12 contestants! In order of submission, they are:

Most contestants seized the opportunity to use their own datasets and made adjustments as needed. For example, Sheila’s dataset fit a traditional stacked bar chart better than a diverging stacked bar chart, and Anjie needed to display cut-off scores.

So how do you make these diverging stacked bar charts, anyways?! There are at least two strategies: Either a) create two separate charts, a strategy demonstrated in previous posts like this one, or b) use floating bars, a strategy demonstrated in previous posts like this one. Stephanie Evergreen blogged about strategy B a few weeks ago and her explanation is pretty awesome, so I’m going to focus on strategy A today.
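For the curious, the arithmetic behind strategy B boils down to one invisible spacer segment per row. Here’s a hedged Python sketch with made-up agreement percentages (the question labels and numbers are hypothetical):

```python
# Hypothetical agreement data: % strongly disagree, disagree, agree, strongly agree
items = {
    "Q1": (10, 25, 40, 25),
    "Q2": (5, 15, 50, 30),
    "Q3": (20, 30, 35, 15),
}

def floating_offsets(rows):
    """Strategy B in spreadsheet terms: an invisible 'spacer' segment pads
    each row on the left so every bar's disagree/agree boundary lines up.
    Spacer = (widest negative side) - (this row's negative side)."""
    widest_neg = max(sd + d for sd, d, _, _ in rows.values())
    return {q: widest_neg - (sd + d) for q, (sd, d, _, _) in rows.items()}

print(floating_offsets(items))
```

Once the spacers are computed, the chart itself is just a stacked bar chart whose first segment has no fill — that’s the whole trick.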

Here’s a slideshow about the two-charts-in-one strategy. Enjoy!

Bonus! Download my Excel file.

Want to learn more? I’ll be sharing my top 5 must-have chart strategies at the American Evaluation Association’s annual conference on Thursday, October 17.

For discussion: Nearly all of the contestants requested friendly feedback on their graphs. In most cases, contestants were trying these charts for the first time and thinking about whether or not these charts could be adapted for their datasets. What do you think?