Logic Models: A New Opportunity

Hey everyone, I’m Phil Stoeklen! I’m excited to share some thoughts on a tool we leverage a lot with our stakeholders: logic models. Logic models help program stakeholders and evaluators gain an understanding of the strategic model or vision for their initiative. They can map out what resources are needed to sustain program activities, and can also assist with visualizing the outcomes and impacts of those activities…something that can be difficult to describe in mere words (especially when you have a lot of stakeholders to consider). The question I would like to pose, however, is this: do they actually help us understand all outcomes and impacts?

If you think about it, the way we organize logic models primes us to miss unintended (occasionally negative) program effects. Programs are, after all, treatments for existing problems. Like any treatment, there is always a potential risk of negative side effects. By ignoring these potential adverse impacts when we lay out program logic, we open ourselves up to not catching problems as early as we could, or, in the worst-case scenario, not until it is simply too late.

I think this is a commonly encountered phenomenon in logic model design, and for a couple of important reasons. First, when programs are conceptualized, the goals and imagined impacts are often quite lofty (not necessarily a bad thing) and almost always positive (again, not necessarily a bad thing). It would be quite odd, after all, to design a deliberately harmful program, but it is worth pointing out that negative considerations are rarely a focal point of logic modeling sessions. A second reason is really a combination of the first observation and the literal format of logic models. We talk about inputs, outputs, outcomes, and impacts, but the latter two are almost never facilitated from the perspective of how the program design could conceivably harm the populations it serves.

Now, having ambitions for programs to have wonderful long-term impacts is great! It helps program stakeholders set big goals for themselves and fosters evaluation touch-points by identifying potential areas to measure for effect. The bigger problem is a consequence of not thoroughly exploring what is realistic to expect as an outcome and impact of a given program, and of not having conversations about how we will react if/when something goes awry.

With this observation in mind, don’t you think it is time we have a real conversation about how we model program logic, and how we help our clients understand and anticipate as many program effects as we can? It isn’t about focusing on the negative…instead, it’s about having an informed conversation that recognizes that every treatment has the potential for both positive and negative effects. In future posts, we will share some strategies for incorporating this important (but often missed) element of logic models.

Phil is a Senior Managing Consultant at Viable Insights, where he leverages his strong background in evaluation. He has a Master’s degree in Applied Psychology with concentrations in Health Promotion and Disease Prevention, Evaluation Research, and Industrial-Organizational Psychology. Phil has been an evaluator and project manager on multiple projects, including comprehensive needs assessments, community perception projects, formative and summative program evaluations, and impact evaluations. His projects have ranged from short-term to multi-year, and he has collectively worked on more than $23 million in both grant-funded and privately funded programs/initiatives. Clients he has worked with include Margaret A. Cargill Philanthropies, the U.S. Department of Labor, the University of Wisconsin System, the Wisconsin Technical College System, and the Annie E. Casey Foundation, among others. In addition to his professional consulting experience, Phil serves as an instructor in the Evaluation Studies and Institutional Research graduate certificate program at the University of Wisconsin-Stout. In that capacity, he teaches courses covering evaluation theory, data collection techniques and best practices, and evaluation applications. Whether in his role as an evaluator or instructor, his goal remains the same: providing individuals and organizations with the tools, skills, and capacity to collect and use data in their decision-making processes. Find him on LinkedIn or Twitter!

Reflections from #AZENet19

A few weeks after the 2019 Annual Arizona Evaluation Network Conference, I felt inclined to reflect on my experience. This year’s theme, Refocusing on the Fundamentals, served as a call to get back to the roots of evaluation practice. The conference really reminded me that we should never become too complacent with our skills in even the most routine of tasks. The reality is…things change, especially environmental (i.e., situational or contextual) factors and stakeholder dynamics. We as evaluators need to adapt for the sake of both our own development and that of the programs, collaboratives, and communities we work with.

So, as expected, I came away from the conference challenged and recharged (I love getting together with my people!); I was ready to take on a whole new set of goals! I realized that the conference theme leveraged two important aspects of how I strive to approach projects. The first is maintaining a heightened level of self-awareness, and the second (and really how I continually push for the first) is applying ongoing reflective practice, asking myself questions like…

What are my strengths? What are my areas of opportunity? What do I enjoy doing? Where can I continue to develop? Where can I leverage others to add more value and impact to my clients’ projects?

Through my interactions with AZENet colleagues, I realized I was answering some of these questions naturally through peer discussions and a reminder of the foundational principles in our work. This experience reinforced the idea that we need to come together as diverse groups to enhance our practices and what we deliver. It’s how we move our field forward. Personally, the experience reaffirmed how important it is to promote self-awareness and reflective practice in my work, and it increased my awareness that I want (and need) more opportunities for collaboration with my peers. Growth is difficult, so why not go through the process with other people who might be asking themselves the same reflective questions as you?

I’m considering this my challenge to get in that space more…and want to encourage others to do the same!

Look for my future post, which will include my vision as the 2019 AZENet President-Elect.