#AZENET20: Let’s Talk About Soft Skills

I’m very excited to announce that I was recently elected as the 2020-2021 President of the Arizona Evaluation Network. As someone who has been involved in the American Evaluation Association since the beginning of graduate school, I found that joining a local affiliate was one way to stay connected to our field throughout the year. The opportunity to propel the Arizona Evaluation Network forward with a fantastic board and membership, striving for an engaging, equitable, and relevant community, is definitely an honor.

In future blogs, I will focus more on my adoration of the field of evaluation, but here I want to share the theme for the Arizona Evaluation Network’s next conference (whoo, 2020!): Soft Skills for Evaluators (and institutional planners, analysts, project managers, and the like). At our most recent Arizona Evaluation Network conference, we took a moment to reflect and reengage with the fundamentals of evaluation. This prompted me to think about what else is relevant to practitioners and academics alike, regardless of the context in which we find ourselves in the field of evaluation (or the approach we take, for that matter). Combine that with the critical role that interpersonal skills, reflective practice, and self-awareness play in our success (or so I think), and soft skills was an almost obvious choice. So, let’s talk more about that…

As a starting place, I’ll share a realization I’ve had: by bringing more of me to my practice, reducing the disparity between who I am personally and the work I do, everyone benefits (probably one of those things someone much wiser told me 1,000 times, but I had to learn the hard way). So, you could say the development of my own self-awareness (a really important soft skill, if you ask me!) has really shaped how I engage in projects and collaborate with clients.

The next reason is that when I take a moment to consider the soft skills of evaluators, I see a multi-faceted concept that has the potential to enrich our work and, even more importantly, play a role in the impact of the organizations we partner with. Soft skills include the ability to navigate difficult conversations…the ones where humility and vulnerability are at the forefront. It’s the point at which we remove ourselves from the pedestal, the expert role, and come alongside stakeholders to achieve a common, people-centered goal. Soft skills enable us to get the right voices involved, which often means more active listening and less prescriptive consulting. It’s when we are focused on who we are as people, knowing where our value-add is to an organization or community, and capitalizing on the skills of others (stakeholders and other evaluators) that we facilitate the greatest impact: we are stronger together! Soft skills also prompt us to consider our approach, recognize that sometimes we need to pivot, and, almost more importantly, acknowledge that it’s okay to do so as we try to tear down the wall between consultants/experts and stakeholders.

As I consider our efforts to establish the technical skills needed to be effective practitioners, I propose that we should be simultaneously focused on the other part of the formula: soft skills. Because after all is said and done, if we can’t have meaningful dialogue with stakeholders…meeting them where they’re at…it seems aspirational to think that a reliance on our technical skills alone will result in the use of findings. Let’s start thinking beyond certifications and traditional forms of expertise. You might call it back to the basics on effective human interaction!

I’m excited to share what I’ve learned about the importance of soft skills, but even more pumped to hear about what others have found to be effective in their work. Look for more details in the coming months on the 2020 Annual Arizona Evaluation Network Conference (even better, get on our mailing list to ensure you receive updates like a cool kid!).

Reflections from #AZENet19

A few weeks after the 2019 Annual Arizona Evaluation Network Conference, I felt inclined to reflect on my experience. This year’s theme, Refocusing on the Fundamentals, served as a call to get back to the roots of evaluation practice. This year’s conference really reminded me that we should never become too complacent with our skills in even the most routine of tasks. The reality is…things change, especially environmental (i.e., situational or contextual) factors and stakeholder dynamics. We as evaluators need to flex to this for the sake of both our own development and that of the programs, collaboratives, or communities we are working with.

So as expected, I came away from the conference challenged and recharged (I love getting together with my people!); I was ready to take on a whole new set of goals! I realized that the conference theme leveraged two important aspects of how I strive to approach projects. The first is to maintain a heightened level of self-awareness, and the second (and really how I continually push for the first) is the application of ongoing reflective practice — asking myself questions like…

What are my strengths? What are my areas of opportunity? What do I enjoy doing? Where can I continue to develop? Where can I leverage others to add more value and impact to my clients’ projects?

Through my interactions with AZENet colleagues, I realized I was answering some of these questions naturally through peer discussions and a reminder of the foundational principles in our work. This experience reinforced the idea that we need to come together as diverse groups to enhance our practices and what we deliver. It’s how we move our field forward. Personally, I think the experience helped reaffirm how important it is to promote self-awareness and reflective practice in my work, and it also made me more aware that I want (and need) more opportunities to collaborate with my peers. Growth is difficult, so why not go through the process with other people who might be asking themselves the same reflective questions?

I’m considering this my challenge to get in that space more…and want to encourage others to do the same!

Look for my future post, which will include my vision as the 2019 AZENet President-Elect.

Research on Evaluation: It Takes a Village (The Solution)

Our first post lamented the poor response rates in research on evaluation. There are many reasons for these poor response rates, but there are also many things that we can do to improve response rates and subsequently improve the state of research on evaluation.

How can evaluators improve response rates?

Coryn et al. (2016) suggest that evaluators find research on evaluation important. However, the response rates to these projects would suggest otherwise. As with any area of opportunity, there are often several components that influence success. Yes, evaluators should naturally care more about propelling our field forward, but changing that without amending our practices as researchers seems unlikely. Therefore, we believe that the importance of participation must be built, and to do so we need to focus on what evaluators see as valuable research. Researchers must also take care to carry out research with sound methodologies. Some recommendations for improving response rates as evaluators include:

  1. Conduct research that is relevant to the field of evaluation while maintaining a high standard of rigor. You can increase the likelihood of this by…
    1. Piloting your study (grad students and colleagues are great for this!)
    2. Asking for feedback from a critical friend
    3. Having evaluation practice guide or inform the research questions
  2. Reduce the cognitive load on participants by making our surveys shorter and easier to complete. You can do this by tying your survey items to your research questions. It’s fun to have lots of data, but it is even better to have meaningful data (i.e., stop asking unnecessary questions).
  3. Apply Dillman’s Tailored Design Method (a toy sketch of what this can look like in practice follows this list). This includes things like:
    1. Increasing the benefits of participation, such as by asking participants for help or providing incentives for participation
    2. Decreasing the costs of participation, such as by ensuring no requests are personal or sensitive in nature and that it is convenient for participants to respond
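As a companion to item 3, here is a minimal, hypothetical sketch of what a Dillman-style contact plan can look like in practice: personalized, repeated contact spread over a few weeks. The names, dates, and touchpoints below are invented for illustration; the Tailored Design Method itself is a full survey-design framework, not a script.

```python
# Hypothetical Dillman-style contact schedule: personalized, repeated
# contact over several weeks. All names, dates, and offsets are made up.
from datetime import date, timedelta

# (touchpoint, days after study launch)
CONTACTS = [
    ("pre-notice", 0),
    ("survey invitation", 3),
    ("reminder", 10),
    ("final reminder", 21),
]

def schedule(first_name: str, start: date) -> list[str]:
    """Build a personalized touchpoint plan for one participant."""
    return [
        f"{start + timedelta(days=offset)}: send {kind} to {first_name}"
        for kind, offset in CONTACTS
    ]

for line in schedule("Dana", date(2020, 1, 6)):
    print(line)
```

The point is the pattern (multiple, personalized touches), not the code. And as discussed elsewhere in these posts, current AEA request policies limit how many of these touches are actually possible.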

What can the AEA Research Request Task Force do?

The AEA Research Request Task Force is also a crucial component of this process, acting not only as a gatekeeper to the listserv but also as quality and relevance control. Currently, each research request goes out to a random sample of roughly 1,000-2,000 evaluators. If we could increase the response rate, we could draw smaller samples and decrease the load on the AEA membership (a quick sketch of this arithmetic follows the list below). Some recommendations for new policies for the task force include:

  1. Adopt policies that would satisfy Dillman’s Tailored Design Method, including allowing:
    1. Personalized contact (e.g., providing names to researchers)
    2. Repeated contact with participants
    3. Contact via postal mail or telephone
  2. Consider sending out survey requests themselves to improve the legitimacy of requests and reduce confidentiality concerns
  3. Apply more stringent rigor and relevancy standards to decrease the likelihood that participating evaluators get frustrated with the surveys that are sent out and subsequently opt out of future research
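To make the arithmetic behind smaller samples concrete: the number of evaluators who must be invited to yield a target number of completed surveys is roughly the target divided by the response rate, so every gain in response rate shrinks the draw on the membership. A minimal sketch (the target of 300 completes is a hypothetical figure, not an AEA policy; the rates are ones cited in these posts):

```python
import math

def invitations_needed(target_completes: int, response_rate: float) -> int:
    """Invitations required to expect a given number of finished surveys."""
    return math.ceil(target_completes / response_rate)

target = 300  # hypothetical number of completed surveys a study needs
# 10% and 30% bound the typical research on evaluation response rates;
# 44% is the rate achieved by Coryn et al. (2016).
for rate in (0.10, 0.30, 0.44):
    print(f"{rate:.0%} response rate -> {invitations_needed(target, rate):,} invitations")
```

At a 10% response rate, the hypothetical study needs 3,000 invitations; at 44%, fewer than 700.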

Conclusions

We believe that evaluators should care more about research on evaluation, and that it should be more visible in the field so that practitioners know about it and how it can improve their practices. At the same time, it is our responsibility to improve our field by being good research participants. So please, if you ever receive a request to participate in a research on evaluation study, do so. You are helping our field of evaluation.

Collaboration is Awesome


This post was written in collaboration with Dana Linnell Wanzer. Dana is an evaluation consultant specializing in programs serving children and youth. She loves Twitter, research on evaluation, and youth program evaluation. If you haven’t already, check out her blog — you’ll be glad ya did!

Research on Evaluation: It Takes a Village (The Problem)

 

Response rates from evaluators are poor. Despite research suggesting that AEA members consider research on evaluation important, response rates for research on evaluation studies are often only 10-30%.[1]

As evaluators ourselves, we understand how busy we can be. However, we believe that evaluators should spend more time contributing to these studies. These studies can be thought of as evaluations of our field: What are our current practices? How should we train evaluators? What can we improve? How do our evaluations lead to social betterment? These are just some of the broad questions they aim to answer. These studies can also help inform AEA efforts such as the evaluation guiding principles and evaluator competencies.

Why are we seeing poor response rates?

  1. Response rates in general are poor. Across the world, response rates are declining; we are not unique in this regard. This phenomenon is happening in telephone, mail, and internet surveys alike.
  2. Poorly constructed surveys. Unfortunately, some of this issue probably lies with researchers themselves. They develop surveys that are too long or too confusing, so evaluators drop out early from the study. For instance, Dana’s thesis had a 27% response rate, but only 59% of participating evaluators finished the entire survey, which took a median of 27 minutes to complete (see the quick calculation after this list). A more succinct survey would have improved both response and completion rates.
  3. Evaluation anxiety. We often think about evaluation anxiety in our clients, but these research on evaluation studies flip the focus to ourselves. It may be anxiety-provoking for evaluators to introspect—or let other evaluators inspect—their own practices. As an example, participants in Deven’s research on UFE were asked to describe their approach to evaluation after selecting which “known” approaches they apply. Some participants explained that they did not know the formal name for their approach, or they just chose the one that sounded right. This could have been anxiety-provoking and reduced their likelihood of participating in or completing the study.
  4. Apathy. Perhaps evaluators just do not care about research on evaluation. Many evaluators “fall into” evaluation rather than joining the field intentionally. They may not have the research background to care enough about “research karma.”
  5. Inability to fully apply Dillman’s principles. If you know anything about survey design, you know about the survey guru Don Dillman and his Tailored Design Method for survey development. Some of the methods he recommends for increasing response rates are personalizing surveys (e.g., using first and last names), using multiple forms of communication (e.g., sending a postcard as well as an email with the survey), and repeated contact (e.g., an introductory email, the main survey email, and multiple follow-ups). However, these methods cannot be used with AEA members. The research request task force does not provide names or mailing addresses to those who request a sample of evaluators, and it limits contact to no more than 3 notifications over no more than a 30-day period. This makes the Tailored Design Method difficult to implement.
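As a back-of-the-envelope illustration of how points 2 and 5 compound, here is a quick calculation using the thesis numbers cited above (the invitation count of 1,000 is hypothetical):

```python
invited = 1_000         # hypothetical number of evaluators invited
response_rate = 0.27    # share of invitees who started the survey
completion_rate = 0.59  # share of starters who finished it

usable = invited * response_rate * completion_rate
print(f"Usable surveys per {invited:,} invitations: {usable:.0f} "
      f"({response_rate * completion_rate:.0%} effective yield)")
# -> Usable surveys per 1,000 invitations: 159 (16% effective yield)
```

In other words, roughly one in six invitations produced a complete, usable survey.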

Our next post will discuss what can be done by evaluators and the AEA research task force to improve response rates.


This post was written in collaboration with Dana Linnell Wanzer. Dana is an evaluation consultant specializing in programs serving children and youth. She loves Twitter, research on evaluation, and youth program evaluation. If you haven’t already, check out her blog — you’ll be glad ya did!

Footnotes

  1. Notably, the Coryn et al. (2016) study on research on evaluation had a response rate of 44%. While this is much higher than most research on evaluation studies—and it is unclear how they achieved it, since all they mention is that they used Dillman’s principles—it is still low enough to call into question the generalizability of the findings. For instance, it may be more accurate to say that only 44% of evaluators care about research on evaluation, since the remaining 56% didn’t even bother to participate!

It’s time to get serious about Twitter!

Today I am going to discuss getting started with Twitter. Why Twitter? Well, it is most applicable to the arenas I’m in (e.g., I-O psychology, evaluation, data visualization), but that doesn’t mean it is right for you. Depending on what sandboxes you’re playing in, you might need to consider multiple platforms (almost a guarantee) and Twitter might not be one of them. But, for now, let’s get Twitter savvy!

  1. Create a Twitter account. This seems simple enough, right? Yes, but I realize that this might still be on your to-do list. Make it happen! You will improve as you go along – not by planning forever and never actually doing it.
  2. Add a picture – seriously. Avoid using a logo or the default silhouette (it’s lame and doesn’t allow people to get to know YOU).
  3. Bio! I used to have what I would consider a lame bio, and I’m so glad I changed it. The day after I updated mine to something more unique, I was mentioned in an interview done by Dr. Stephanie Evergreen; it included a screenshot of my Twitter photo and bio. You better believe I was rejoicing that I had changed my drab bio into something a little more hip. So, what do you put there?! This is a spot to market yourself in a few words, hashtags, and emojis. I’ve included mine as an example.
  4. Follow some awesome people. This is industry specific, of course, but my suggestion is checking out a few prominent players in your field. Check out who they’re following for other big names, and look at who is following them for up-and-coming connections. Click here for my profile.
  5. Set a schedule…I can’t stress this enough. I check Twitter at least three times a day…once with my coffee, during lunch, and again after my evening workout. My posts are also on a schedule. Of course, I post sporadically as something interests me, but I always have a couple of posts set to go out – no matter what! Note: People get discouraged because they don’t have many followers. Stop that. Building a following takes time and effort. The people you’re trying to connect with as a professional aren’t your friends (at least not yet), so don’t expect an obligatory follow like you’d get from family on Facebook (kidding, of course). You’ll get there!
  6. A schedule is great, but you also need to set goals. For example, you might make a goal of tweeting three times and following two new people per day. After a couple of weeks, you can reassess whether you can do more (MORE is better with Twitter, but consistency is MOST important).

    Deven Wisner Twitter Goals
    This is an example from when I first started on Twitter.

If you want a personalized plan, or to discuss a different social media platform, contact me. I would be happy to develop a social media strategy based on your goals.

P.S. If you’re into I-O, Evaluation, and/or data visualization, you will find some awesome people under Nifty Resources.

New to Evaluation? Here are tips for plugging in!

As a new professional in evaluation (or one who has recently pivoted into it), you might be wondering how to leverage yourself or “plug in” to the community. The beauty of evaluation is its interdisciplinarity, but that can make plugging in a little daunting (though not impossible!). Below are some tips on how to immerse yourself in the field!

Become an American Evaluation Association (AEA) member.

Not only will you be able to attend the yearly conference, but you will also have more opportunities to become involved than you could ever sign up for. From professional development to peer-reviewed articles, AEA really does have a great compilation of resources for academics and practitioners.

Attend an AEA conference!

Deven Wisner AEA 2017 Evergreen
Me nerding out with Dr. Stephanie Evergreen at Eval16

It is one thing to become a member and never go to a conference, but this is one conference I am willing to pay out of my own pocket to attend. If you’re looking to share and learn, find a job opportunity, or just network with others, this week-long event is a great investment. I can promise you one thing: the AEA conference is like no other (in a good way).

Find your Local Area Affiliate on AEA.

Again, AEA is a great resource, and not just at the international level. AEA also supports local affiliates, which means you can be involved throughout the year. This is a great way to meet evaluators near you, find out about independent work (if you’re into that), and further develop yourself as a professional. If possible, I suggest joining a committee or the board; you will be stretched more than you would be as a member alone. We have all joined organizations and never actually attended an event (c’mon, I know I’m not the only one).

Deven Wisner AZENet
Some of the great Arizona Evaluation Network board members I get to work with!

Join an AEA Topical Interest Group (TIG).

If you have a certain area (or maybe more than one) within the field of evaluation that strikes your fancy, get more involved through a TIG! You might have the opportunity to write a blog post, rate conference proposals, and/or be part of the yearly meetings (held at the AEA conference). Again, you will meet people with similar interests but with different levels of experience. I’m part of the Data Visualization and Reporting TIG, along with Research on Evaluation.

Refine your elevator speech.

Who are you? What’s evaluation? How do other people describe what you do? All of these things are important. Be ready to explain what you do to others. Dividing my time between industrial-organizational psychology and evaluation means I’ve had to refine this for all areas of my professional life. My best advice is to think back to those family dinners…how do you explain it? Okay, take that and make it relatable each time you talk about it. UC-Davis has a good resource on this here.

Get on Twitter…oh yeah, I said it.

Evaluators are taking on Twitter and it is AWESOME! This is a quick way to see what the trends are and learn from others. Plus, you get to share your own thoughts and work. As someone who was anti-Twitter for a long time, I get it…you might be apprehensive. But Twitter is the way to find little nuggets of information that can oftentimes lead to great finds. So, if you haven’t already, create an account and start following other evaluators (pro tip: find one person you like and check out who they’re following)!

 

Deven Wisner Twitter

…and there you have it! Did I miss something? Feel free to share what has worked for you.

NEW – Additional tips from Ann K. Emery’s blog…

  1. Conference tips for new evaluators
  2. Newbie essentials
  3. Job hunting

P.S. Click here to read a blog I wrote for AEA365 as a Data Visualization and Reporting TIG member.

Is your qualitative dataviz taking a backseat? A few extra minutes = rich data noticed!

[Cartoon created by Chris Lysy]
STOP undervaluing your qualitative data by burying it in an appendix or presenting six pages’ worth of themes, definitions, and examples. That’s rich information that you need to bring to your stakeholders’ attention! Like any data visualization, you want to draw readers in and make a pile of data more digestible. Qualitative data might be dense, but it’s no different.

So what is something easy I’ve started doing? Adding icons. Icons are a super easy way to tell your readers that the qualitative data confirmed something…or it didn’t. Or maybe it did — but only a little bit! Either in Excel (depending on how you build your qualitative tables) or Word, start inserting icons/images/GIFs (okay, maybe that’s a stretch) to indicate if a program outcome was achieved according to qualitative feedback. See my loaded and very fake example below.

First, I choose some icons (Excel or Word: Insert > Symbol or Image). Just like the charts you use to visualize quant, the icons should make sense. A giraffe or poo emoji might not be what you’re looking for (or, if it is, what an awesome evaluation).

After you’ve chosen icons, create a legend…because assumptions are dangerous.

[Image: icon legend]

Now, incorporate the icons into your qualitative table. In the ones I’ve done, I add the icon on the leftmost side — the FIRST place my stakeholders look. They can quickly see that the hypothesis was accepted…or not. This makes it easy for them to dive into what they need to read first. For example, your stakeholder might be most concerned that their program did not achieve the desired outcome (and if your survey questions answer your evaluation questions, this will be no problem to connect, right?!).

Here’s a super simple example…one that took me all of a few seconds. Something sensible that complements the dense text will help get qualitative data noticed.

[Image: example qualitative table with icon column]
This doesn’t replace all the other important stuff (e.g., definition, frequency, etc.), but your stakeholders can get a snapshot of the results! 
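If you assemble qualitative tables programmatically rather than by hand in Word or Excel, the same icon-plus-legend idea is easy to script. Here is a hypothetical sketch in Python with pandas (the outcomes, quotes, and icon choices are invented, and writing to .xlsx also requires the openpyxl package):

```python
import pandas as pd

# Legend: map each qualitative result to an at-a-glance icon.
ICONS = {"supported": "✔", "partially supported": "◐", "not supported": "✘"}

findings = pd.DataFrame({
    "Outcome": [
        "Participants report stronger peer connections",
        "Participants apply new skills at work",
        "Participants mentor newer staff",
    ],
    "Qualitative result": ["supported", "partially supported", "not supported"],
    "Example quote": [
        "I finally feel like I have people to call.",
        "I use some of it, when there's time.",
        "Mentoring hasn't really happened yet.",
    ],
})

# Put the icon in the leftmost column, the first place readers look.
findings.insert(0, "Result", findings["Qualitative result"].map(ICONS))

# Export; remember to include the legend in the workbook so readers
# aren't left guessing what the icons mean.
findings.to_excel("qual_findings.xlsx", index=False)
```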

  1. This is one very simple idea, and I bet you’ve seen some of the awesome resources put forth by Ann K. Emery and Stephanie Evergreen on visualizing qualitative data. They are great ideas! But even with these awesome ideas, most of the reports I’ve seen in the past few months are still full of indigestible qual…NOT a great complement to the awesome charts and graphs you’re probably making, right? So, my challenge to you is to start using the great resources available to you — and come up with your own!