Research on Evaluation: It Takes a Village (The Solution)

Our first post lamented the poor response rates in research on evaluation. There are many reasons for these poor rates, but there are also many things we can do to improve response rates and, in turn, the state of research on evaluation.

How can evaluators improve response rates?

Coryn et al. (2016) suggest that evaluators find research on evaluation important. However, the response rates to these projects suggest otherwise. As with any area of opportunity, there are often several components that influence success. Yes, evaluators should naturally care more about propelling our field forward, but changing that without amending our practices as researchers seems unlikely. Therefore, we believe that the importance of participation must be built, and to do so we need to focus on what evaluators see as valuable research. Researchers must also take care to carry out research with sound methodologies. Some recommendations for improving response rates as evaluators include:

  1. Conduct research that is relevant to the field of evaluation while maintaining a high standard of rigor. You can increase the likelihood of this by…
    1. Piloting your study (grad students and colleagues are great for this!)
    2. Asking for feedback from a critical friend
    3. Having evaluation practice guide or inform the research questions
  2. Reduce the cognitive load on participants by making our surveys shorter and easier to complete. You can do this by tying your survey items to your research questions. It’s fun to have lots of data, but it is even better to have meaningful data (i.e., stop asking unnecessary questions).
  3. Apply Dillman’s Tailored Design Method. This includes things like:
    1. Increasing the benefits of participation, such as by asking for help from participants or providing incentives for participation
    2. Decreasing the costs of participation, such as by ensuring no requests are personal or sensitive in nature and that it is convenient for participants to respond

What can the AEA Research Request Task Force do?

The AEA Research Request Task Force is also a crucial component of this process, acting not only as a gatekeeper to the listserv but also as quality and relevance control. Currently, each research request is typically sent to a random sample of 1,000-2,000 evaluators. If we could increase the response rate, we could draw smaller random samples and decrease the load on the AEA membership. Some recommendations for new policies for the task force include:

  1. Policies that would satisfy Dillman’s Tailored Design Method, including allowing:
    1. Personalized contact (e.g., providing names to researchers)
    2. Repeated contact to participants
    3. Contact via postal or telephone
  2. Sending out survey requests themselves to improve the legitimacy of the requests and reduce confidentiality concerns
  3. Adopting more stringent rigor and relevance standards to decrease the likelihood that participating evaluators become frustrated with the surveys they receive and subsequently opt out of future research
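To put rough numbers on the sampling point: the invitations needed to reach a target number of completed surveys shrink in direct proportion to the response rate. This is a back-of-the-envelope sketch; the target of 300 completes and the response rates are hypothetical, not AEA figures.

```python
def invitations_needed(target_completes: int, response_rate_pct: int) -> int:
    """Invitations required to expect `target_completes` completed surveys,
    given a response rate in whole percent. Uses exact integer ceiling
    division to avoid floating-point surprises."""
    return -(-(100 * target_completes) // response_rate_pct)

# Hypothetical target of 300 completed surveys:
print(invitations_needed(300, 15))  # 15% response rate -> 2000 invitations
print(invitations_needed(300, 30))  # 30% response rate -> 1000 invitations
```

Doubling the response rate halves the sample the task force has to draw, which is exactly the trade-off described in the paragraph introducing this list.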

Conclusions

We believe that evaluators should care more about research on evaluation and that it should be more visible in the field, so that practitioners know about it and how it can improve their practices. It is also our responsibility to improve our field by being good research participants. So if you ever receive a request to participate in a research on evaluation study, please do so. You are helping our field of evaluation.

Collaboration is Awesome


This post was written in collaboration with Dana Linnell Wanzer. Dana is an evaluation consultant specializing in programs serving children and youth. She loves Twitter, research on evaluation, and youth program evaluation. If you haven’t already, check out her blog — you’ll be glad ya did!

Research on Evaluation: It Takes a Village (The Problem)

 

Response rates from evaluators are poor. Despite research suggesting that AEA members consider research on evaluation important, response rates for research on evaluation studies are often only 10-30%.1

As evaluators ourselves, we understand how busy we can be. However, we believe that evaluators should spend more time contributing to these studies. These studies can be thought of as evaluations of our field: What are our current practices? How should we train evaluators? What can we improve? How do our evaluations lead to social betterment? These are just some of the broad questions these studies aim to answer. They can also help inform AEA efforts on the evaluation guiding principles and evaluator competencies.

Why are we seeing poor response rates?

  1. Response rates in general are poor. Across the world, response rates are declining. We are not unique in this regard. This phenomenon is happening in telephone, mail, and internet surveys alike.
  2. Poorly constructed surveys. Unfortunately, some of this problem probably lies with researchers themselves. They develop surveys that are too long or too confusing, so evaluators drop out of the study early. For instance, Dana’s thesis had a 27% response rate, but only 59% of participating evaluators finished the entire survey, which took a median of 27 minutes to complete. A more succinct survey would likely have improved both response and completion rates.
  3. Evaluation anxiety. We often think about evaluation anxiety in our clients, but these research on evaluation studies flip the focus to ourselves. It may be anxiety-provoking for evaluators to introspect—or let other evaluators inspect—their own practices. As an example, participants in Deven’s research on UFE were asked to describe their approach to evaluation after selecting which “known” approaches they apply. Some participants explained that they did not know the formal name for their approach, or they just chose the one that sounded right. This could have been anxiety-provoking for participants and reduced their likelihood of participating in or completing the study.
  4. Apathy. Perhaps evaluators just do not care about research on evaluation. Many evaluators “fall into” evaluation rather than joining the field intentionally. They may not have the research background to care enough about “research karma.”
  5. Inability to fully apply Dillman’s principles. If you know anything about survey design, you know about the survey guru Don Dillman and his Tailored Design Method for survey development. Some of the methods he recommends for increasing response rates are personalizing surveys (e.g., using first and last names), using multiple forms of communication (e.g., sending a postcard as well as an email with the survey), and repeated contact (e.g., an introductory email, the main survey email, and multiple follow-ups). However, these methods cannot be used with AEA members. The research request task force does not provide names or mailing addresses to those who request a sample of evaluators, and it limits contact to no more than 3 notifications over no more than a 30-day period. This makes the Tailored Design Method difficult to implement.

Our next post will discuss what can be done by evaluators and the AEA research task force to improve response rates.


Footnotes

  1. Notably, the study on research on evaluation had a response rate of 44% (Coryn et al., 2016). While this is much higher than most research on evaluation studies—and it is unclear how they achieved this, since all they mention is that they used Dillman’s principles—it is still low enough to call into question the generalizability of the findings. For instance, it may be more accurate to say only 44% of evaluators care about research on evaluation, since the remaining 56% didn’t even bother to participate!

It’s time to get serious about Twitter!

Today I am going to discuss getting started with Twitter. Why Twitter? Well, it is most applicable to the arenas I’m in (e.g., I-O psychology, evaluation, data visualization), but that doesn’t mean it is right for you. Depending on what sandboxes you’re playing in, you might need to consider multiple platforms (almost a guarantee) and Twitter might not be one of them. But, for now, let’s get Twitter savvy!

  1. Create a Twitter account. This seems simple enough, right? Yes, but I realize that this might still be on your to-do list. Make it happen! You will improve as you go along – not by planning forever and never actually doing it.
  2. Add a picture – seriously. Avoid using a logo or the default silhouette (it’s lame and doesn’t allow people to get to know YOU).

Deven Wisner Twitter Bio
  3. Bio! I used to have what I would consider a lame bio, and I’m so glad I changed it. The day after I updated mine to something more unique, I was mentioned in an interview done by Dr. Stephanie Evergreen. They included a screenshot of my Twitter photo and bio. You better believe I was rejoicing that I changed my drab bio into something a little more hip. So, what do you put there?! This is a spot to market yourself in a few words, hashtags, and emojis. I’ve included mine as an example.
  4. Follow some awesome people. This is industry specific, of course, but my suggestion is checking out a few prominent players in your field. Check out who they’re following for other big names, and look at who is following them for up-and-coming connections. Click here for my profile.
  5. Set a schedule…I can’t stress this enough. I check Twitter at least three times a day…once with my coffee, during lunch, and again after my evening workout. My posts are also on a schedule. Of course, I post sporadically as something interests me, but I always have a couple of posts set to go out – no matter what! Note: People get discouraged because they don’t have many followers. Stop that. This takes time and effort. The people you’re trying to connect with as a professional aren’t your friends (at least not yet), so don’t expect an obligatory follow like you’d get from family on Facebook (kidding, of course). You’ll get there…it just takes time and effort!
  6. A schedule is great, but you also need to set goals. For example, you might make a goal of Tweeting three times and following two new people per day. After a couple of weeks, you can reassess whether you can do more (MORE is better with Twitter, but consistency is MOST important).

    Deven Wisner Twitter Goals
    This is an example from when I first started on Twitter.

If you want a personalized plan, or to discuss a different social media platform, contact me. I would be happy to develop a social media strategy based on your goals.

P.S. If you’re into I-O, Evaluation, and/or data visualization, you will find some awesome people under Nifty Resources.

New to Evaluation? Here are tips for plugging in!

As a new professional (or one who has recently pivoted) in evaluation, you might be wondering how to leverage yourself or “plug in” to the community. The beauty of evaluation is its interdisciplinarity, but that can make plugging in a little daunting (though not impossible!). Below are some tips on how to immerse yourself in the field!

Become an American Evaluation Association (AEA) member.

Not only will you be able to attend the yearly conference, but you will also have more opportunities to become involved than you can possibly sign up for. From professional development to peer-reviewed articles, AEA really does have a great compilation of resources for academics and practitioners.

Attend an AEA conference!

Deven Wisner AEA 2017 Evergreen
Me nerding out with Dr. Stephanie Evergreen at Eval16

It is one thing to become a member and never go to a conference, but this is one conference I am willing to pay out of my own pocket to attend. If you’re looking to share and learn from others, find a job opportunity, or just network, this week-long event is a great investment. I can promise you one thing: the AEA conference is like no other (in a good way).

Deven Wisner AEA 2016 Nametag

Find your Local Area Affiliate on AEA.

Again, AEA is a great resource, and not just at an international level. It also supports local affiliates, which means you can be involved throughout the year. This is a great way to meet evaluators near you, find out about independent work (if you’re into that), and further develop yourself as a professional. If possible, I suggest joining a committee or the board; you will be stretched more than you would be as a member alone. We have all joined organizations and never actually attended an event (c’mon, I know I’m not the only one).

Deven Wisner AZENet
Some of the great Arizona Evaluation Network board members I get to work with!

Join an AEA Topical Interest Group (TIG).

If you have a certain area (or maybe more than one) within the field of evaluation that strikes your fancy, get more involved through a TIG! You might have the opportunity to write a blog post, rate conference proposals, and/or be part of the yearly meetings (held at the AEA conference). Again, you will meet people with similar interests but with different levels of experience. I’m part of the Data Visualization and Reporting TIG, along with Research on Evaluation.

Refine your elevator speech.

Who are you? What’s evaluation? What do other people call what you do? All of these things are important. Be ready to explain what you do to others. Dividing my time between industrial-organizational psychology and evaluation means I’ve had to refine this for all areas of my professional life. My best advice is to think back to those family dinners…how did you explain it? Okay, take that and make it relatable each time you talk about it. UC-Davis has a good resource on this here.

Get on Twitter…oh yeah, I said it.

Evaluators are taking on Twitter and it is AWESOME! This is a quick way to see what the trends are and learn from others. Plus, you get to share your own thoughts and work. As someone who was anti-Twitter for a long time, I get it…you might be apprehensive. But Twitter is the way to find little nuggets of information that often lead to great finds. So, if you haven’t already, create an account and start following other evaluators (pro tip: find one person you like and check out who they’re following)!

 

Deven Wisner Twitter

…and there you have it! Did I miss something? Feel free to share what has worked for you.

NEW – Additional tips from Ann K. Emery’s blog…

  1. Conference tips for new evaluators
  2. Newbie essentials
  3. Job hunting

P.S. Click here to read a blog I wrote for AEA365 as a Data Visualization and Reporting TIG member.

Is your qualitative dataviz taking a backseat? A few extra minutes = rich data noticed!

[Cartoon created by Chris Lysy]
STOP devaluing your qualitative data by putting it into an appendix or burying it in six pages’ worth of themes, definitions, and examples. That’s rich information that you need to bring to your stakeholders’ attention! Like any data visualization, you want to draw readers in and make a pile of data more digestible. Qualitative data might be dense, but it’s no different.

So what is something easy I’ve started doing? Adding icons. Icons are a super easy way to tell your readers that the qualitative data confirmed something…or it didn’t. Or maybe it did — but only a little bit! Either in Excel (depending on how you build your qualitative tables) or Word, start inserting icons/images/GIFs (okay, maybe that’s a stretch) to indicate if a program outcome was achieved according to qualitative feedback. See my loaded and very fake example below.

First, I choose some icons (Excel or Word: Insert > Symbol or Image). Just like the charts you use to visualize quant, the icons should make sense. A giraffe or poo emoji might not be what you’re looking for (or, if it is, what an awesome evaluation).

After you’ve chosen icons, create a legend…because assumptions are dangerous.
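If you assemble your qualitative tables programmatically rather than by hand in Excel or Word, the legend can live in code too. Here is a minimal sketch; the outcomes, statuses, and icons are all made up for illustration:

```python
# Map each status to a legend icon; prefix every theme with its icon so the
# left-most column of the table reads at a glance.
LEGEND = {
    "supported": "✔",      # qualitative data confirmed the outcome
    "partial": "~",        # confirmed, but only in part
    "not supported": "✘",  # qualitative data did not confirm the outcome
}

# Hypothetical findings: (theme, status) pairs from a qualitative analysis.
findings = [
    ("Participants report increased confidence", "supported"),
    ("Families use program resources at home", "partial"),
    ("Staff turnover decreased", "not supported"),
]

rows = [f"{LEGEND[status]}  {theme}" for theme, status in findings]
print("\n".join(rows))
```

The same lookup-table idea works in a spreadsheet formula or a Word table; the point is that the icon is derived from the status, so the legend and the table can never drift apart.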

[Screenshot: example icon legend]

Now, incorporate the icons into your qualitative table. In ones I’ve done, I add it on the left most side — the FIRST place my stakeholders are looking. They can quickly see that the hypothesis was accepted…or not. This makes it easy for them to dive into what they need to read first. For example, your stakeholder might be most concerned that their program did not achieve the desired outcome (and if your survey questions answer your evaluation questions, this will be no problem to connect, right?!).

Here’s a super simple example…that took me all of a few seconds. Something sensible that complements the dense text will help get qualitative data noticed.

[Screenshot: example qualitative table with icons]
This doesn’t replace all the other important stuff (e.g., definition, frequency, etc.), but your stakeholders can get a snapshot of the results! 
This is one very simple idea, and I bet you’ve seen some of the awesome resources put forth by Ann K. Emery and Stephanie Evergreen on visualizing qualitative data. They are great ideas! But even with these awesome ideas, most of the reports I’ve seen in the past few months are still full of indigestible qual…NOT a great complement to the awesome charts and graphs you’re probably making, right? So, my challenge to you is to start using the great resources available to you — and come up with your own!

Got an applied project? You can build the capacity for data-driven decision making.

During graduate school, students are usually offered applied opportunities. What I love about applied psychology (e.g., evaluation, I-O psychology…) is that graduate students have the chance to bring their knowledge to a variety of industries — and build the value of data-driven decision making. To me, that is priceless. Promoting the field of applied psych is great…and so is making others aware of all the great things that can be done when something other than anecdotes is the decision-making tool of choice.

So what’s my experience with this? I also had these great applied opportunities, and I started to realize that I was an advocate for my field. I had a new perspective about the entire experience — if my client walked away feeling like they wasted their time, I didn’t do a very good job.

My second-to-last semester, I completed a needs assessment and process evaluation for a company in another state. This company is phenomenal — a great idea that’s meeting a need, a lean bottom line, and an office full of great people. Where’s the but? Well, it’s that data wasn’t driving their strategic planning. Needs assessment? What’s that? There I had it — an opportunity to BUILD the capacity for evaluation in this organization.

In short, the project went well — everyone learned a lot, services were revised, and future planning was focused on data. But that’s all while you’re still there, right? In the back of your obsessive applied-psych mind…you know this was one project, and the long-lived method of luck and “educated” guessing (oxymorons make for good blog topics) could be revived and become the preferred decision-making tool — again.

Data-driven decision making. What does the data tell us? How do the statistics relate to what we are seeing financially? Your clients said they wanted this service…but was it a representative sample? I felt like a broken record…because I hoped that demonstration, dialogue, and bringing my client along for the experience would lead to an appreciation and PREFERENCE for data to inform their decisions.

[Cartoon created by Chris Lysy]

Well, two months later I was in a follow up meeting…sitting in on an unrelated project…and I heard it:

“We need to make decisions based on what the data tells us. It can’t be what I like, or what makes sense to me. Let’s use the tools we have to make changes using data.”

…you know that moment when someone says their idea of an awesome day is binge watching Frasier and eating pizza rolls, and you’re like “…that’s hot.” Bam. There it was. The sexy side of being an advocate for our field.

You see, change is hard. Pushing for a better method (that isn’t always easier) can be a challenge. And hey, being a grad student is a special level of hell at times. Sometimes you want to drop the results and peace out. You don’t always want to screw with Excel for hours to get something other than a canned report (but seriously, talk to me if it’s taking you hours to craft good viz). But at the end of it, you have an opportunity to see the results put into action. Your very presence is a disruption — a potential catalyst for change. The credibility of our field? It’s on ALL of our shoulders. So, the next time you’re burnt out, remember your potential to impact decision makers and your responsibility to your colleagues (oh, and call on them when you need help!).

[Cartoon created by Chris Lysy]

Can we talk about logic models?

Over a year ago, I had the pleasure of co-presenting on logic models to a group of individuals from non-profit organizations. The presentation included time to discuss the pros and cons of the logic model, which really gave my team and me the ability to work through the problems with the attendees. The most voiced problem?

Continue reading “Can we talk about logic models?”