Tag Archives: evaluation

Four Steps: Social Network Analysis by Twitter Hashtag with NodeXL [Guest post by Johanna Morariu]

Note from Ann: Today’s guest post is from Johanna Morariu, Director of Innovation Network, AEA DVRTIG Chair, and dataviz aficionado.

Basic social network analysis is something EVERYONE can do. So let’s try out one social network analysis tool, NodeXL, and take a peek at the Twitter hashtag #eval13.

Using NodeXL (a free Excel plug-in), I will demonstrate step by step how to do a basic social network analysis (SNA). SNA is a dataviz approach that spans data collection, analysis, and reporting. Networks are made up of nodes (often people or organizations) and edges (the relationships or exchanges between nodes). The set of nodes and edges that make up a network form the dataset for SNA. Like other types of data, network data come with quantitative metrics, for example, the overall size and density of the network.

There are four basic steps to creating a social network map in NodeXL: get NodeXL, open NodeXL, import data, and visualize.
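NodeXL keeps that whole workflow inside Excel, so no coding is required. But if you ever want to script the same import-and-measure steps, here is a minimal sketch in Python using the networkx library (a separate tool, not part of NodeXL). The CSV filename and column names are hypothetical, assuming you have exported the edge list from NodeXL:

```python
# A minimal, hypothetical sketch: load an exported edge list and
# compute the basic network metrics mentioned above.
import pandas as pd
import networkx as nx

# Assumed CSV export with "source" and "target" columns
# (e.g., the tweeting account and the account it mentions).
edges = pd.read_csv("eval13_edges.csv")
G = nx.from_pandas_edgelist(edges, source="source", target="target")

# Quantitative metrics: the overall size and density of the network.
print("Nodes:", G.number_of_nodes())
print("Edges:", G.number_of_edges())
print("Density:", nx.density(G))
```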

Do you want to explore the #eval13 social network data? Download it here.

Here’s where SNA gets fun—there is a lot of value in visually analyzing the network. Yes, your brain can provide incredible insight into the analysis process. In my evaluation consulting experience, the partners I have worked with have consistently benefited more from the exploratory, visual analysis than they have from reviewing the quantitative metrics. Sure, it is important to know things like how many people are in the network, how dense the relationships are, and other key stats. But for real-world applications, it is often more important to examine how pivotal players relate to each other relative to the overall goals they are trying to achieve.
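If you want a quick, scripted starting point for spotting those pivotal players before you dive into the visuals, here is one way to do it—again a hypothetical networkx sketch rather than NodeXL itself, using betweenness centrality as one common (though not the only) proxy for bridging roles:

```python
import pandas as pd
import networkx as nx
import matplotlib.pyplot as plt

# Rebuild the graph from the hypothetical CSV export used above.
edges = pd.read_csv("eval13_edges.csv")
G = nx.from_pandas_edgelist(edges, source="source", target="target")

# Rank accounts by betweenness centrality: a rough, quantitative
# proxy for the "pivotal players" who bridge parts of the network.
centrality = nx.betweenness_centrality(G)
top_accounts = sorted(centrality, key=centrality.get, reverse=True)[:10]
print("Most pivotal accounts:", top_accounts)

# A quick force-directed drawing to begin the exploratory, visual analysis.
pos = nx.spring_layout(G, seed=42)
nx.draw_networkx(G, pos, node_size=30, with_labels=False)
plt.show()
```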

So here’s your challenge—what do you learn from analyzing the #eval13 social network data? Share your visualizations and your findings!

And then he flashed a slide that made the whole room gasp… [Guest post by Jen Hamilton]

Today’s guest post is from Jen Hamilton, a.k.a. superwoman. Jen is an experienced evaluator, the communications committee co-chair for the Eastern Evaluation Research Society, and, perhaps officially now, an evaluation blogger. Check out Jen’s previous posts about the magical ingredient in potent presentations and evaluation theory. Enjoy! — Ann Emery

————————————————————————–

And you were worried about your socks…

After writing my previous guest post on conference presentations, I’ve been thinking about them a little more. Specifically, what to do when something goes wrong. I don’t mean a glitch; I mean horribly, terribly WRONG.

There are the usual suspects when you think about what might go wrong with your presentation. You forgot your flash drive on the plane, your socks don’t match, you forgot to wear socks, the projector doesn’t work, nobody except your mother is in the audience, EVERYONE is in the audience—the list goes on and on. The overriding worry is that you are going to come off looking like an unprofessional, inarticulate doofus in an ill-fitting suit.

I’m here to tell you that these worries pale in comparison to the worst presentation I’ve ever seen, and how the presenter turned it around.

This was in 2009 at a professional conference, and the room was full. Not just with regular geeks like me, but also with big famous geeks who have stuff named after them. The presentation started well enough: the equipment worked, the presenter was wearing socks, and he was reasonably articulate. And then he flashed on the screen a slide that made the whole room gasp. From this slide, even I could tell that the study he had designed, and worked so hard on, had a giant, fatal flaw. He had made a whopper of a mistake in the design of the study. I saw it. I looked around, and could tell that EVERYBODY had seen it. I considered sneaking out of the room so I wouldn’t have to watch the inevitable bloodbath.

A big famous geek raised his hand, and the surprised presenter was soon sporting a horrified expression as the magnitude of his mistake sunk in. Everything he had done was tainted. And here is what he did. Instead of explaining and getting reflexively defensive, he said, “Oh, boy. Look at that.” Pause. “That’s a problem, isn’t it? I can’t believe I missed that.” Instead of smelling blood, the audience rallied to his defense, pointing out (kindly) how it was easy to miss, and then they started brainstorming ways to fix it. Basically, a room full of smart people was working together to salvage his study. It went from a presentation to a brainstorming session. The presenter was furiously taking notes. It was uplifting, and not only that, I learned more from the brainstorming than I ever would have from the original presentation.

So. The lesson is: don’t ever get defensive. And don’t worry about the socks.

— Jen Hamilton, @limeygrl

On Methodological Demons, Farmers, and Bridgebuilders [Guest post by Andrew Blum]

Last week I wrote about my inner evaluation demon (you can read the post here). Today’s post is a response by Andrew Blum. Andrew is the Director of Learning and Evaluation at the US Institute of Peace. He also tweets about peacebuilding and internal evaluation here.

When we met at the Eastern Evaluation Research Conference in April, we realized that we work close enough to each other in DC that we can meet for lunch and talk about… yes!… evaluation. I’m very fortunate to have met someone like Andrew because he’s such a great resource for strategies about internal evaluation and building a learning culture.

— Ann Emery

——————————————————-

In a recent blog post, Ann asked what our evaluation demons are. The one that torments me the most, one that will be familiar to almost anyone who has conducted research, is the one that continues to ask, “Yes, but how do you really know that?”

You patiently explain your methodology to the demon, your verification procedures, your triangulation strategies, but the demon always has a doubt to express, a potential flaw, something else that could be done. When you finally say, “Yes, but the cost!” the demon will chuckle.

The reason this demon is so powerful is the difficulty we have in identifying a “good enough” methodology. Not only is this a big, hard question, but I’ve noticed recently that it is a hard question to even have a productive debate about. The question seems to split people into two groups that I have begun to think of as farmers and bridgebuilders. To understand how these two groups think, imagine an evaluation with a methodology that is exactly 50% as rigorous as you would like. In this situation, the farmer sees half-a-crop, still something to eat. The bridgebuilder sees half-a-bridge, useless and potentially dangerous. It’s not hard to see how these two groups might talk past each other when discussing methodology and methodological rigor.

Perhaps this is because I work in the field of peacebuilding, where quality data are hard to come by, but I am a proud farmer. I am constantly telling my colleagues, get me something, gather me some information, let’s do a bit better. Or to perhaps abuse the metaphor, let’s eat what we have, and work to plant a bit more of the field. But frankly, if I am talking to a committed bridgebuilder, this kind of activity is hard to explain. So I’m interested in your thoughts on navigating this divide and ways to create more productive conversations on “good enough” methodologies.

— Andrew Blum (@alb202)

Newbie Evaluator Essentials [Guest post by Karen Anderson]

Blog swap! Karen Anderson and I are mixing things up today by guest-posting on each other’s blogs.

Karen Anderson is my favorite “newbie” evaluator. Karen completed a master’s degree in social work a couple years ago. She’s currently an evaluator at a nonprofit in Atlanta and she’s the Diversity Programs Intern for the American Evaluation Association. In all her “spare” time, she’s doing pro-bono evaluation for the State of Black Gay America Summit organizers. And she’s a blogger!

You can read Karen’s LinkedIn profile here, and you can read her blog, On Top of the Box Evaluation, by clicking here.

I hope you enjoy Karen’s guest post.

— Ann Emery

———-

When I think about the knowledge base and skills a “newbie” evaluator (or a not-so-new professional in the evaluation field) needs not only to survive but to thrive, I reflect upon Jean King’s Essential Competencies for Program Evaluators.

King’s Essential Competencies for Program Evaluators include:

  1. Professional Practice: The fundamental norms and values of evaluation practice, including working ethically, applying evaluation standards, and considering the public welfare (explained further in the AEA Guiding Principles for Evaluators under Responsibilities for General and Public Welfare).
  2. Situational Analysis: The unique interests, issues, and contextual circumstances of evaluation.
  3. Reflective Practice: One’s own evaluation expertise and need for growth, which includes knowing self, reflecting on your practice, pursuing professional development, and building professional relationships.
  4. Interpersonal Competence: Do you have the people skills for evaluation practice? This includes negotiation skills, conflict resolution, cross-cultural competence, and facilitating constructive interpersonal interactions.
  5. Project Management: King describes this as the “nuts and bolts” of evaluation work in her presentation. This includes presenting work in a timely manner, budgeting, responding to RFPs, use of technology, and supervising and training others.
  6. Systematic Inquiry: The technical aspects of evaluation. What’s your knowledge base? Do you know qualitative, quantitative, and/or mixed methods? Developing program theory, evaluation design, and evaluation questions are also major components of this competency area.

So in terms of the Essential Competencies for Program Evaluators, how are you doing? I wish I had some type of rating scale to help me see how far I’ve come. I’d have to say that on-the-job training, webinars, and seeking out evaluation trainings, no matter how brief, like the American Evaluation Association’s online Coffee Break Demonstration series, have helped me come a long way since my grad school days (2010).

Newbies: What are some “essentials” that you think are missing from above that relate to your evaluation practice growth and development?

Not so newbies: What steps do you take to sharpen your evaluation skills and to increase your knowledge base?

— Karen Anderson

My evaluation “pre-test” for non-evaluators

I love helping non-evaluators use results to improve programs. Click here to read what I’m aiming for in this quest for evaluation use.

Sometimes program staff already know a lot about evaluation. Often, I’m teaching them about evaluation for the first time. So where do I begin? How do I know “where” they’re at in their readiness to dive into evaluation?

I hope you enjoy reading about my evaluation “pre-test” for non-evaluators.

—————————————

When I meet with program staff for the first time, I simply say, “So, tell me about your program.” I get a range of responses. Here’s what the responses tell me:

Level 1: “Well, I work with students, like I was helping Lorena with her college essays this afternoon. Sometimes they’re going through a lot, like Jose, his brother just got shot, and he saw it happen, so he’s having a hard time dealing with it, and I’m trying to get him to talk with Maria [a mental health counselor]. Sometimes the students need extra help too, so I send them to Sara [a tutor].”

Translation: References to specific program participants (Lorena, Jose). Focused exclusively on program activities, not outcomes. Focused exclusively on their personal role in the program.

Level 2: “I’m an academic advisor. I help high school students get into college. Sometimes they’ve got non-academic things going on, so I refer them to mental health counselors, or do some ‘light’ case management myself. I also refer them to our program’s tutor, Sara, when they need extra academic help to get their grades up.”

Translation: Less personal. Includes basic demographics about program participants (they’re high school students). Uses terminology correctly (academic advisor, case management, referrals). Although it’s still focused on program activities, there are small references to program outcomes (getting into college, purpose of tutoring is to increase their grades).

Level 3: “I’m an academic advisor, so I help with college essays, coordinating with tutors, etc., to help high school students get into college. I also make referrals to mental health counselors and tutors. I work on a team with 3 other academic advisors. I work with 9th grade students, someone else works with 10th grade students, etc. because students need different things in different grades. The program’s director has a different role; he does paperwork for the funders, among other things. You might want to talk to him, too.”

Translation: Understands their role in relation to other teammates, and that there are different roles for a reason.

Level 4: “I’m an academic advisor, so I do x, y, and z while the other academic advisors, tutors, and mental health counselors do a, b, and c. The whole point of our program is to help first-generation, low-income students get a little extra help so they can go to college… and not just get into college, but graduate within 6 years. We’re aiming for other shorter-term goals too, like we’ve got goals for SAT scores, GPAs, etc. to make sure students are on the right track to graduate high school and be successful in college.”

Translation: The focus is on program outcomes rather than program activities. Hooray! This is pretty rare among non-evaluators.

Level 5: “Why don’t I just show you our logic model?”

Translation: YES!

Level 6: “Why don’t I just show you our logic model? [Finds logic model in her well-organized computer files, or better yet, has it pinned to the board next to her desk.] There are a couple issues here, maybe you can help us with this… We’re supposed to do all this stuff [points to activities], but these goals are pretty unrealistic [points to outcomes]. I mean, how are we supposed to tutor 200 students a semester? The funder only gave us $20,000. That’s barely enough for a part-time staff person’s salary. How is a single part-time tutor supposed to tutor 200 students every single semester and make any impact?”

Translation: I’m pretty excited at this point. They understand the logic model and the disconnect between the activities and outcomes.

Level 7: “Why don’t I just show you our logic model?… It’s pretty obvious that our activities don’t link to our outcomes… There’s a big disconnect, and I’ve already talked to the funders about this. In fact, we just chatted on the phone again last week. While we can’t change the logic model this year, since we’re already half-way into the grant, they said we could alter the logic model for next year. We’re pretty excited that our program officer understands evaluation so well.”

Translation: The Evaluation Zen Master among non-evaluators! Here’s a leader who’s already poised to make differences for the participants, for the program, and in their relationship with the funders. This leader can be a champion for evaluation within the organization.

Sometimes the direct service staff are at lower levels while managers are at higher levels – but not always. I’ve met plenty of tutors, academic advisors, and mental health counselors within a single organization who are at Levels 5-7 and plenty of managers who are at Levels 1-4. That’s fine. Different people are at different levels. I’ll help them advance to the next level regardless of their starting point. The problems arise when that mismatch runs the other way within a single program; that is, when a tutor is at Level 5 or 6 while their direct supervisor is only at Level 2 or 3.

Have you used similar tests when beginning an evaluation? Have you seen similar results? If not, what’s your strategy? How do you figure out where to begin?

— Ann Emery