Blog

#AZENET20: Let’s Talk About Soft Skills

I’m very excited to announce that I was recently elected as the 2020-2021 President of the Arizona Evaluation Network. As someone who has been involved in the American Evaluation Association since the beginning of graduate school, I found that joining a local affiliate was one way to stay connected to our field throughout the year. The opportunity to propel the Arizona Evaluation Network forward with a fantastic board and membership, striving for an engaging, equitable, and relevant community, is definitely an honor.

In future blogs, I will focus more on my adoration of the field of evaluation, but here I want to share the theme for the Arizona Evaluation Network’s next conference (whoo, 2020!): Soft Skills for Evaluators (and institutional planners, analysts, project managers, and the like). At our most recent Arizona Evaluation Network conference, we took a moment to reflect and reengage with the fundamentals of evaluation. This prompted me to think about what else is relevant to practitioners and academics alike, regardless of the context in which we find ourselves in the field of evaluation (or the approach we take, for that matter). Combine that with the critical role that interpersonal skills, reflective practice, and self-awareness play in our success (or so I think), and soft skills was an almost obvious choice. So, let’s talk more about that…

As a starting place, I’ll share a realization I’ve had: by bringing more of myself to my practice, and reducing the gap between who I am personally and the work I do, everyone benefits (probably one of those things someone much wiser told me 1,000 times, but I had to learn the hard way). So, you could say that developing my own self-awareness (a really important soft skill, if you ask me!) has really shaped how I engage in projects and collaborate with clients.

The next reason is that when I take a moment to consider the soft skills of evaluators, I see them as a multi-faceted concept with the potential to enrich our work and, even more importantly, play a role in the impact of the organizations we partner with. Soft skills include the ability to navigate difficult conversations…the ones where humility and vulnerability are at the forefront. It’s the point at which we step down from the pedestal, the expert role, and come alongside stakeholders to achieve a common, people-centered goal. Soft skills enable us to get the right voices involved, which often means more active listening and less prescriptive consulting. It’s when we are focused on who we are as people, knowing where our value-add is to an organization or community, and capitalizing on the skills of others (stakeholders and other evaluators) that we facilitate the greatest impact: we are stronger together! Soft skills also prompt us to consider our approach and recognize that sometimes we need to pivot, and, almost more importantly, to acknowledge that it’s okay to do so as we try to tear down the wall between consultants/experts and stakeholders.

As I consider our efforts to establish the technical skills needed to be effective practitioners, I propose that we should be simultaneously focused on the other part of the formula: soft skills. Because after all is said and done, if we can’t have meaningful dialogue with stakeholders…meeting them where they’re at…it seems aspirational to think that a reliance on our technical skills alone will result in the use of findings. Let’s start thinking beyond certifications and traditional forms of expertise. You might call it back to the basics on effective human interaction!

I’m excited to share what I’ve learned about the importance of soft skills, but even more pumped to hear about what others have found to be effective in their work. Look for more details in the coming months on the 2020 Annual Arizona Evaluation Network Conference (even better, get on our mailing list to ensure you receive updates like a cool kid!).

Reflections from #AZENet19

A few weeks after the 2019 Annual Arizona Evaluation Network Conference, I felt inclined to reflect on my experience. This year’s theme, Refocusing on the Fundamentals, served as a call to get back to the roots of evaluation practice. This year’s conference really reminded me that we should never become too complacent with our skills in even the most routine of tasks. The reality is…things change – especially with environmental (i.e. situational or contextual) factors and stakeholder dynamics. We as evaluators need to flex to this for the sake of both our own development and that of the programs, collaboratives, or communities we are working with.

So as expected, I came away from the conference challenged and recharged (I love getting together with my people!); I was ready to take on a whole new set of goals! I realized that the conference theme leveraged two important aspects of how I strive to approach projects. The first is to maintain a heightened level of self-awareness, and the second (and really how I continually push for the first) is the application of ongoing reflective practice — asking myself questions like…

What are my strengths? What are my areas of opportunity? What do I enjoy doing? Where can I continue to develop? Where can I leverage others to add more value and impact to my clients’ projects?

Through my interactions with AZENet colleagues, I realized I was answering some of these questions naturally through peer discussions and a reminder of the foundational principles in our work. This experience reinforced the idea that we need to be coming together as diverse groups to enhance our practices and what we deliver. It’s how we move our field forward. Personally, I think the experience helped reaffirm how important it is to promote self-awareness and reflective practice in my work, and it also helped increase my awareness that I want (and need) more opportunities for collaboration with my peers. Growth is difficult, so why not go through the process with other people who might be asking themselves the same reflective questions as you are?

I’m considering this my challenge to get in that space more…and want to encourage others to do the same!

Look for my future post, which will include my vision as the 2019 AZENet President-Elect.

Overlapping Column Charts: A Quick Actual v. Goal Comparison

Hello there! I’m writing as a follow-up to a workshop I recently facilitated with Nicole Huggett, MSW, for the Arizona Evaluation Network in Phoenix. Much of our time together was spent covering visualization options for comparing goals and pre-post results.

One of the popular charts we discussed was the overlapping column chart and how it can be used to compare actual performance to goals. Since the workshop, I have found overlapping column charts to be very valuable data visualizations for this – so much so that I knew I had to share the steps publicly (OKAY, I also kept getting asked for the steps, so I knew writing it once would save us all some time!).

Although I already shared when you might use this chart, the particular scenario I want to set up here relates to survey participation. Specifically, one community organization needed a quick way to determine which years they met (or didn’t meet) their survey participation goals. An overlapping column chart served as a great way for project managers to determine just that in a matter of seconds.

Ready to make one yourself? Awesome – let’s do it!

To get started, select your data and insert a 2D Clustered Column Chart.

Excel, we love you so, but you do some weird stuff. To fix the data, right-click the chart and choose Select Data. Go ahead and delete the Year series (oh yes, we’re going to delete lots of things!), select Goal, and notice the x-axis labels are empty…click Edit under the Horizontal (Category) Axis Labels and highlight the four years. Voila! Your Goal series is now included, and you should see two column series in your chart.

Next, let’s get these columns on top of one another. To do that, right-click the Actual column, select Format Data Series (get familiar with this area of Excel – it’s crucial to a lot of your changes!), and change the series from the Primary to the Secondary axis. The column you want in front goes on the secondary axis…and the column you want behind it goes on the primary axis.

Now that we have these on top of one another, let’s adjust the gap width of the Goal column (also under Format Data Series). You can play with the setting to make it look right, but I’d suggest bringing it down to at least 75% so the wider Goal column shows behind the narrower Actual column.

To start to clean this up (it’s still confusing right now!), let’s right-click the Actual column (Excel should allow you to select all of them) and Add Data Labels.

From here on, it’s really about turning your chart from Basic to Bomb (check out this example of how to make yours look awesome). You want to pay special attention to fonts (both the type and size), colors, unnecessary noise (yes, grid lines, I’m talking about YOU), and, of course, the title! It’s here where you want to leverage data visualization best practices to really get your reader’s attention.

After you’ve made some simple changes, your overlapping column chart should look something like this:

One thing you might notice is we don’t know what the goal was from looking at the chart – and that’s OKAY. This is really intended to give high-level insight. In other words, was the goal achieved or not? Whether this is as much information as your exec team needs, or you want to create a dialogue, I highly suggest this minimalistic chart for easy actual-to-goal comparisons!
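
If you ever want to mock up the same actual-vs-goal overlap in code rather than in Excel (say, for a quick prototype), here is a minimal Python/matplotlib sketch of the idea. The years, values, and colors below are made up purely for illustration:

```python
import matplotlib.pyplot as plt

# Hypothetical survey-participation numbers, for illustration only
years = ["2015", "2016", "2017", "2018"]
goal = [200, 220, 240, 260]     # participation goals (wide, muted columns behind)
actual = [185, 230, 235, 270]   # actual participation (narrow, bold columns in front)

fig, ax = plt.subplots()
x = range(len(years))

# Draw the Goal series first so it sits behind, and make it wider (like a low gap width)
ax.bar(x, goal, width=0.8, color="#d9d9d9", label="Goal")
# Draw the Actual series second and narrower so it overlaps in front
ax.bar(x, actual, width=0.4, color="#2b6ca3", label="Actual")

# Label only the Actual values, mirroring the minimalist Excel version
for xi, val in zip(x, actual):
    ax.annotate(str(val), (xi, val), ha="center", va="bottom")

ax.set_xticks(list(x))
ax.set_xticklabels(years)
ax.set_yticks([])  # drop the gridline-style noise
for side in ("top", "right", "left"):
    ax.spines[side].set_visible(False)
ax.set_title("Survey participation: Actual vs. Goal")
ax.legend(frameon=False)
plt.show()
```

The logic mirrors the Excel steps: the goal series is drawn wide and muted so it sits behind, the actual series is drawn narrow and bold so it sits in front, and only the actual values get labels.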

Want to know how to do this in Tableau? Tune in next time, and don’t forget to check out my posts on Getting Started with Tableau.

Research on Evaluation: It Takes a Village (The Solution)

Our first post lamented the poor response rates in research on evaluation. There are many reasons for these poor response rates, but there are also many things that we can do to improve response rates and subsequently improve the state of research on evaluation.

How can evaluators improve response rates?

Coryn et al. (2016) suggest that evaluators find research on evaluation important. However, the response rates to these projects would suggest otherwise. As with any area of opportunity, there are often several components that influence success. Yes, evaluators should naturally care more about propelling our field forward, but changing that without amending our practices as researchers seems unlikely. Therefore, we believe that the importance of participation must be built, and to do so we need to focus on what evaluators see as valuable research. Researchers must also take care to carry out research with sound methodologies. Some recommendations for improving response rates as evaluators include:

  1. Conducting research that is relevant to the field of evaluation while maintaining a high standard of rigor. You can increase the likelihood of this by…
    1. Piloting your study (grad students and colleagues are great for this!)
    2. Asking for feedback from a critical friend
    3. Having evaluation practice guide or inform the research questions
  2. Reducing the cognitive load on participants by making our surveys shorter and easier to complete. You can do this by tying your questions to your research questions. It’s fun to have lots of data, but it is even better to have meaningful data (i.e., stop asking unnecessary questions).
  3. Applying Dillman’s Tailored Design Method. This includes things like:
    1. Increasing the benefits of participation, such as by asking for help from participants or providing incentives for participation
    2. Decreasing the costs of participation, such as by ensuring no requests are personal or sensitive in nature and that it is convenient for participants to respond

What can the AEA Research Request Task Force do?

The AEA Research Request Task Force is also a crucial component of this process, acting not only as a gatekeeper to the listserv but also as quality and relevance control. Currently, each research request is sent to a random sample of roughly 1,000-2,000 evaluators. If we could increase the response rate, we could decrease the size of those random samples and decrease the load on the AEA membership (a quick back-of-the-envelope sketch follows the list below). Some recommendations for new policies for the task force include:

  1. Policies that would satisfy Dillman’s Tailored Design Method, including allowing:
    1. Personalized contact (e.g., providing names to researchers)
    2. Repeated contact to participants
    3. Contact via postal or telephone
  2. Sending out survey requests themselves to improve the legitimacy of survey requests and reduce confidentiality concerns
  3. Adopting more stringent rigor and relevance standards to decrease the likelihood that participating evaluators get frustrated over the surveys that are sent out and subsequently opt out of future research
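
To make the response-rate point concrete, here is a quick back-of-the-envelope sketch. The target of 300 completed surveys is a hypothetical number, chosen only to illustrate the trade-off:

```python
# How many members must be invited to reach a target number of completed
# surveys at different response rates? (Numbers are illustrative only.)
target_completes = 300

for response_rate in (0.10, 0.20, 0.30, 0.40):
    invitations_needed = round(target_completes / response_rate)
    print(f"{response_rate:.0%} response rate -> ~{invitations_needed} invitations")

# Output: 10% -> ~3000, 20% -> ~1500, 30% -> ~1000, 40% -> ~750
```

Doubling the response rate halves the number of members who need to be contacted, which is exactly the reduced burden on the AEA membership described above.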

Conclusions

We believe that evaluators should care more about the importance of research on evaluation and that it should be more visible in the field, so that practitioners know about it and how it can improve their practices. However, it is also our responsibility to improve our field by being good research participants. So, if you ever receive a request to participate in a research on evaluation study, please do so. You are helping our field of evaluation.

Collaboration is Awesome


This post was written in collaboration with Dana Linnell Wanzer. Dana is an evaluation consultant specializing in programs serving children and youth. She loves Twitter, research on evaluation, and youth program evaluation. If you haven’t already, check out her blog — you’ll be glad ya did!

Research on Evaluation: It Takes a Village (The Problem)

 

Response rates from evaluators are poor. Despite research suggesting that AEA members consider research on evaluation important, response rates for research on evaluation studies are often only between 10% and 30%.1

As evaluators ourselves, we understand how busy we can be. However, we believe that evaluators should spend more time contributing to these studies. They can be thought of as evaluations of our field: What are our current practices? How should we train evaluators? What can we improve? How do our evaluations lead to social betterment? These are just some of the broad questions these studies aim to answer. They can also help inform AEA efforts on the evaluation guiding principles and evaluator competencies.

Why are we seeing poor response rates?

  1. Response rates in general are poor. Across the world, response rates are declining. We are not unique in this regard. This phenomenon is happening in telephone, mail, and internet surveys alike.
  2. Poorly constructed surveys. Unfortunately, some of this issue probably lies with researchers themselves. They develop surveys that are too long or too confusing, so evaluators drop out of the study early. For instance, Dana’s thesis had a 27% response rate, but only 59% of participating evaluators finished the entire survey, which took a median of 27 minutes to complete. A more succinct survey would have improved both response and completion rates.
  3. Evaluation anxiety. We often think about evaluation anxiety in our clients, but these research on evaluation studies flip the focus to ourselves. It may be anxiety-provoking for evaluators to introspect on—or let other evaluators inspect—their own practices. As an example, participants in Deven’s research on UFE were asked to describe their approach to evaluation after selecting which “known” approaches they apply. Some participants explained that they did not know the formal name for their approach, or that they just chose the one that sounded right. This could have been anxiety-provoking for participants and reduced their likelihood of participating in or completing the study.
  4. Apathy. Perhaps evaluators just do not care about research on evaluation. Many evaluators “fall into” evaluation rather than joining the field intentionally. They may not have the research background to care enough about “research karma.”
  5. Inability to truly use Dillman’s principles. If you know anything about survey design, you know about the survey guru Don Dillman and his Tailored Design Method for survey development. Some of the methods he recommends for increasing response rates are personalizing surveys (e.g., using first and last names), using multiple forms of communication (e.g., sending out a postcard as well as an email with the survey), and repeated contact (e.g., an introductory email, the main survey email, and multiple follow-ups). However, these methods cannot be used with AEA members. The research request task force does not provide names or mailing addresses to those who request a sample of evaluators, and it limits contact to no more than 3 notifications over no more than a 30-day period. This makes the Tailored Design Method difficult to implement.

Our next post will discuss what can be done by evaluators and the AEA research task force to improve response rates.


This post was written in collaboration with Dana Linnell Wanzer. Dana is an evaluation consultant specializing in programs serving children and youth. She loves Twitter, research on evaluation, and youth program evaluation. If you haven’t already, check out her blog — you’ll be glad ya did!

Footnotes

  1. Notably, the study on research on evaluation had a response rate of 44% (Coryn et al., 2016). While this is much higher than most research on evaluation studies—and it is unclear how they achieved this, since all they mention is that they used Dillman’s principles—it is still low enough to call into question the generalizability of the findings. For instance, it may be more accurate to say that only 44% of evaluators care about research on evaluation, since the remaining 56% didn’t even bother to participate!

For consultants and consultants-to-be: BONUS POST!


BONUS POST WITH DR. GAIL BARRINGTON!

And you thought the fun was over…

Whether you are thinking about venturing out on your own or have already started, this series will arm you with advice from seasoned consultants! This post features Dr. Barrington, who provides her insight on common questions consultants (or consultants-to-be) might have. Be sure to check out the first and second part of this series!

Bio:

Gail Vallance Barrington is a graduate of McGill University (BA) and Carleton University (MA) and holds a Doctorate in Educational Administration from the University of Alberta (1981). She is a Credentialed Evaluator and a certified teacher. In 2014, she was made a Fellow of the Certified Management Consultants of Canada. Since starting her consulting practice in 1985 she has conducted over 130 program evaluation studies in the fields of education, health, and research. Her top-rated book, Consulting Start-up and Management: A Guide for Evaluators and Applied Researchers (SAGE, 2012) continues to be popular. In 2008 she received the Canadian Evaluation Society award for her Contribution to Evaluation in Canada and in 2016 was honoured to receive the American Evaluation Association Alva and Gunnar Myrdal Award for Evaluation Practice. She teaches courses in qualitative research and program evaluation for several universities and provides webinars and workshops on consulting skills.


What made you decide to become a consultant?

I began to work on program evaluation contracts while teaching as a sessional instructor at the University of Calgary in the Faculty of Education. For a while I did both. Then a term came along when, due to program reorganization, the courses I was teaching would not be offered. I had to decide what to do, wait for a year with half-time consulting or go at it full time. This was not an easy decision as I had always worked for a school, college, or university, never responsible for my own pay check. Going out on my own was very scary. I can remember sitting in my car looking across a rainy street at the office building I had selected and wondering if I could really work there. I took a big breath, got out of the car and the rest is history. My consulting business opened on November 1, 1985.

Could you share some best practices for remote consultancy?

I often work with clients at a distance. For example, I worked for many years on a national evaluation project for the federal government in Ottawa, Ontario while I lived in Calgary, Alberta. I made a point of holding quarterly meetings in person because I felt that being on site with the client was necessary to get a sense of their work, their issues, and their reaction to what we were doing. Email and Skype are fine but, in the end, it is personal chemistry and partnership which bond a project together. In that project we spent time together informally as well as during our day-long meetings. We had fun skating, sailing, and going out to restaurants. These informal activities strengthened our understanding of each other and we accomplished a remarkable amount as a result.

Do you suggest that consultants market themselves as a specialist or a generalist?

I think you need a specialty area and then you can branch out from there. New consultants should focus both on what they like doing best and on what they have received good feedback about. Your specialty area will expand and morph over time but having a clearly defined area of specialization as a foundation is a great way to start.

How do you measure the success of a project?

Success is not measured by the final report. It is really measured by the extent to which positive program change occurs. Evaluation is all about making a difference through informed decision making and of course our role is providing the evidence needed. The real results happen long after the report is finished, and we have gone on our way. Not often enough do we circle back to find out what happened afterwards.

Finally, what advice do you wish someone had given you as a new consultant?

Hang in there and find some good colleagues. When I started my business, I didn’t have any role models and so I had to make it up as I went along. The closest approximation I could find to the independent evaluation consultant was the independent business consultant. As a result, I became a Certified Management Consultant (CMC) so I could have some colleagues to talk to. However, research was not their interest area and so I was still struggling about how to conduct evaluation research and bill for it gracefully. Happily, I found the American Evaluation Association (AEA) Independent Consulting Topical Interest Group (IC TIG) and there at last were business people with a social justice perspective. We continue talking to this day!

Want more Dr. Barrington? Visit her website!

For consultants and consultants-to-be: expert advice (pt. 2)

It’s here — part two! 

Whether you are thinking about venturing out on your own or have already started, this two-part series will arm you with advice from seasoned consultants! This post features Ann K. Emery, who provides her insight on common questions consultants (or consultants-to-be) might have. Check out the first post in this series here.

Bio:

Ann K. Emery is a speaker, workshop facilitator, and blogger, passionate about “making technical information easier to understand for non-technical audiences.” In other words, she is a dataviz expert! Ann is also a well-known blogger, bringing practical tips to those looking to transform their data into effective stories through the use of data visualization. Without further ado, let’s dig into her tips! 


How did you prepare for running your own consulting firm?

Launching my own consulting firm was a happy accident. I started blogging back in 2012, just for the joy of sharing skills with others, without expecting it to lead to anything. And I always enjoyed public speaking and leading workshops. My name got out there. People read something I wrote or saw me speak at a conference. I started getting a few invitations to give talks and redesign the visuals in reports. And then I got a few more invitations. And a few more. At the time, I was working full-time and doing grad school at night. I had limited bandwidth for independent consulting projects. In Spring 2014, I finished grad school and had the time to accept some side projects. I did the math; the projects would actually pay more than my current (good) salary. People started asking when I was going independent. I hadn’t considered going solo. I had planned to stay at my current position for a long time. Over the summer of 2014, I spoke with a dozen of my mentors. I got great advice:

“Don’t even think about quitting your salaried job until you have a year’s worth of household expenses saved—and be willing to lose every penny if you’re not profitable the first year.”

“You’ll work harder than ever, but the work will be more fulfilling than ever.”

In the fall of 2014, I was having dinner with some girlfriends, and mentioned that I might go solo someday. “Well, what are you waiting for?” one asked. I didn’t have a good answer. That next week, I put in my notice.

For those wondering about the transition to consulting, did you continue working at a 9-5 job until you became established?

I’ve met two types of consultants: those who find themselves with spare time (job loss, just finished grad school and they’re job hunting, etc.) and those who have already built a reputation for doing great work and have prospective clients banging down their door. The first type struggles to bring in work. The second type struggles to turn it down. The second type has no choice. You have to quit your salaried job and start your own company. You work harder than ever, for a while. Then you get better at subcontracting and saying no. You get to choose which type of consultant you want to be. Pull the trigger too early—before you’re established—and you may always struggle to bring in work and pay the bills.

How do you avoid being spread too thin?

I hire smart and talented subcontractors like you!

More importantly, I say no so that I can say yes. I don’t appear on podcasts (I’m visual so an auditory medium has zero appeal). I don’t write guest blog posts (my clients hire me to write blog posts so it doesn’t make business sense to write for free). I don’t work for free (I like to keep a roof over my head). I have to decline projects that aren’t a perfect fit so that I have creative energy to rock the ones that are.

Describe a time when you dealt with a difficult client (or situation). How did you make things work?

I divide my projects into two broad categories: training and design.

In training projects—my keynotes, workshops, webinars, and individual coaching sessions—I haven’t had difficult clients, but I have had inexperienced clients. The contact person has been put in charge of planning the keynote address for their conference for the very first time. I often need to teach them about coordinating with A/V staff, setting up projectors, connecting and testing the microphones, and so on. Planning a talk of this level can be a stressful experience for my clients. They want the logistics to be perfect. I try to walk them through the unknowns and alleviate as much of the stress as possible. I’ve given a billion talks. I’ve seen all sorts of stage setups. Everything that could go wrong has gone wrong—projector lightbulbs burning out mid-talk, fire drills, laptop batteries dying, malfunctioning microphones. I get migraines a few times a year—the kind where your vision and smell are all messed up—so I knew it was only a matter of time until I got a migraine during a speaking engagement. It happened in February. I could only see a sliver of my slides thanks to tunnel vision. Then the smells and nausea started. I gave participants a coffee break, puked in the bathroom, and came back and finished the talk. From the audience’s perspective, it was one of the better workshops I led all year. Good public speaking is more about rolling with the punches than about careful preparation. I let my client know that I’ve experienced every possible projector and microphone hiccup and that the talk will be stellar no matter what. 

In design projects—revamping existing reports or designing the visualizations for client reports from scratch—I haven’t had difficult clients, but I have had difficult timelines. Contracting takes longer than expected because we need a signature from someone who’s on vacation. I’m graphing the data and notice that the numbers don’t make sense and they have to re-run the analyses to fix a few typos. Every consultant I know has been in this situation: Something goes wrong that’s outside of your control, and you’re the one who has to give up your weekend to fix it. We all notice the red flags early on. In the past, I’ve tried to give the project the benefit of the doubt. This project will be different, I lie to myself. Sure, their timeline is tight, but maybe everything will go according to plan this time.

My number one goal in 2018 is to trust my gut instinct and decline the projects with too-tight timelines.

How do you measure the success of a project?

Repeat clients and referrals!

A few months ago, I gave a mediocre workshop—or so I’d thought. I’d pose a discussion question to the group, and people just stared at me with poker faces. I’d tell a joke, and people just stared at me with poker faces. I couldn’t understand why the workshop structure I’d carefully crafted over the years had fallen flat. I left feeling deflated. Over the weekend, I had serious self-doubt, questioning whether I was even in the right career path. Then, on Monday morning, the client emailed me, praising the workshop and saying it was the best they’d ever attended. They invited me to return to their organization for another few days of workshops. I returned, gave another few days of workshops, and left with the same self-doubt. For the second time, nobody responded to my discussion questions or laughed at my jokes. And then—you guessed it—the client emailed me, praised the workshop, and invited me to return. The organization is accustomed to traditional, buttoned-up lecturers. My skin gets thicker each time, so when I return for the third series of workshops, I’ll be prepared to pose non-discussion discussion questions and tell my unfunny jokes.

In design projects, I used to think that a repeat client was a bad thing. If I redesigned the report well the first time, the client should be able to follow my steps and do it themselves the next time, right? But my clients are often pressed for time. Or, they can get the design 90% of the way there, and they need me to nudge the visualizations to the finish line. I’m working on a multi-year project right now. Each year, my role shifts. At first, I was creating the visualizations myself. Later, I was coaching their staff members through the process, making minor adjustments to their drafts, but creating very little myself. Other consultants have warned me against this approach, worried that I’ll teach clients too much and be out of a job. But my instincts keep telling me that training up staff is a net positive. I’ve taught the staff so much during this multi-year project that I wish I could hire them. Literally. I tried to subcontract part of a project to one of the women, but we discovered that our contracting language wouldn’t allow it. We’ll definitely be working together again someday.

Finally, what advice do you wish someone had given you as a new consultant?

I’ve learned from the best: Herb Baum, Tanya Beer, David Bernstein, Dave Bruns, Isaac Castillo, Stephanie Evergreen, Edith Hawkins, Rodney Hopson, Kylie Hutchinson, Helene Jennings, Cole Knaflic, Chris Lysy, Kevin McNamee, Johanna Morariu, Kim Narcisso, Veena Pankaj, Maryfrances Porter, Jon Schwabish, and Trina Willard.  

I adore each of these people for telling me what I need to hear, not what I want to hear. They’ve given me all the personal and professional advice I’ll ever need. There’s nothing I wish I would’ve known earlier—just advice I wish I would’ve followed earlier.

Want more?!

If you’re interested in learning more from Ann, check out her website. You won’t be sorry, and I bet you’ll be adding it to your favorites!

“Expert” Photo by Rita Morais on Unsplash