I’ve Got My Data, Now What?
True confession: I am not a strategic evaluator. Anyone else want to come clean? Try this easy quiz:
- Do you churn out SurveyMonkey questionnaires the day before your workshops begin?
- Do you frantically Google “student evaluation rubric” as the touring van pulls out of the theatre loading dock?
- Do you regularly practice post-event justification, working backwards through your program as you rush to complete a final report for a funder the night before it’s due?
If you answered yes to at least one of these questions, you might need an evaluation intervention.
I can’t be the only one guilty of saying: “I’ve collected all this data, now what?” If you’ve ever wished for a simple framework for funneling all that data into a useful format, you’re not going to like what I have to say next.
The truth is, if you’re asking the “now what” question, it’s already too late. Data interpretation and presentation certainly take time and experience to do really well. But if you invest thoughtful time in evaluation planning before your project begins, the results will fall into place and you won’t end up asking the “now what” question at all.
Ideally, here’s the evaluation cycle we should strive to create:
I would say my typical cycle looks like this:
When the Oregon Arts Commission launched a new Connecting Students to the World of Work grant program, I was determined to improve my evaluation process and shift my time allocation.
I enlisted the help of two professional evaluators (not required, but extremely helpful—especially during your first attempt), NPC Research and CR Smith Consulting. Their fresh perspective and lack of insider knowledge about arts education led them to ask pointed questions, forcing me to articulate clear outcomes for my program.
The extra time spent in the “Planning” stage made the data collection methods clear. A pre- and post-program student survey would answer the question we needed to ask: did students feel they had made progress on the outcomes we defined? Simple. Elegant.
During the “Analysis and Reflection” stage, we got the pay-off: early survey returns gave us (unprompted) student quotes demonstrating personal growth in exactly the outcome areas we articulated during the planning stage. Reading those quotes was a huge validation of the time and effort spent early in the process, and it meant that much less time needed to be spent on after-the-fact justification of the value of these projects.
As we move into the “Action and Improvement” stage over the summer, we will look carefully at both the successes and failures of our project. I don’t know yet if we’re ready to share our failures, but I’m mindful of the fact that if we only ever share our successes, we disincentivize risk-taking. This Strategic Learning model doesn’t just apply to evaluation. We have to share our failures so that others can use them as part of their own data collection . . . which allows them, in turn, to learn from our work.
My new cycle now looks something like this:
For those of you still hoping for a magic bullet, I’ll leave you with this: while it’s really too late to do your best evaluation after your project has started, here are some questions you can ask yourself about the worth of your data. They can help you decide what to collect, which limits time spent on data collection and in turn frees up time for other parts of the cycle.
- Value: Does what you are collecting align with your mission/outcomes? If not, stop asking for it.
- Feasibility: What capacity do you have to use the data? Don’t collect data you can’t or won’t use to inform your planning process.
- Balance: Do you have parity between formative and summative data? Ask enough questions to keep your process on track, but remember that partners and funders need to know what the end results are.
- Relevance: Will your partners be interested in your findings? Remember that keeping your results internal means less return on investment for the time spent doing the evaluation. If you’re going to take the time to measure your progress, share what you learn!
Suggestions for Further Reading:
What Should We Be Collecting and How? from Grantmakers for Effective Organizations is a helpful tool for determining which data collection methods might work best for the outcomes you are measuring.
“Choosing a Database for Your Organization” includes good questions about what kind of data you are managing and how you plan to use it.