How to measure the impact of your organisation
Thursday, first morning session: Tomas and Colin on METRICS
For scaling projects, there must be a data-driven approach. There are lots of new tools for collecting data. The way we measure should change, but has it?
What metrics actually add value? Metrics are not always worthwhile or useful when driven by funders.
Challenges arise when projects scale and stats are used (e.g., usage patterns may not be statistically significant at small scale; a quick sketch of that kind of check follows below). Metrics need to change. It can save a lot of trouble if evaluation happens on the front end instead of as post-program evaluation (oftentimes funders only do the latter). Numbers and stats are important to funders, even if their quality isn't that good. This relates to high-stakes metrics in schools: goals slip from impacts to numbers.
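Not from the session, but to make the significance point concrete: a minimal Python sketch, with invented numbers, checking whether a difference in usage between two pages is distinguishable from noise. At small-project scale it often isn't.

```python
from scipy.stats import chi2_contingency

# Invented figures for illustration: page A got 18 sign-ups out of 120
# visits, page B got 26 out of 140. Is B genuinely doing better?
table = [
    [18, 120 - 18],   # page A: sign-ups, non-sign-ups
    [26, 140 - 26],   # page B: sign-ups, non-sign-ups
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"p-value: {p:.3f}")  # well above 0.05: at this scale the gap could be noise
```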
We want to figure out how to measure "the fuzzy." This requires research on the ground, which means committing funding to it. Marrying qualitative and quantitative is the goal, but again the issue is metrics. Is there some way to take learnings from qualitative work to map out quantitative measures?
Time and money are what small-scale projects are always up against; budgets are limited and pre-determined. We want to do metrics up front instead of later in the project, which is a constant stress for small orgs. What do orgs do to make room for planning and evaluation: cuts, scaling down? One practice: use a questionnaire with new clients, asking about goals and what would make them feel the project is successful (what does success look like? See the intake sketch below). Sustainability is important too, but how do you measure it, especially when building technology for an org?
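A minimal sketch of such an intake questionnaire. The structure and field names are illustrative assumptions; only the "success" question paraphrases the session.

```python
# Client-intake questionnaire sketch: capture goals and a definition of
# success before any work starts, so evaluation criteria exist up front.
INTAKE_QUESTIONS = [
    ("goals", "What are your organization's goals for this project?"),
    ("success", "What would make you feel like this project is successful?"),
    ("sustainability", "Who will maintain this after launch, and with what budget?"),
]

def run_intake() -> dict:
    """Collect free-text answers keyed by question id."""
    return {key: input(question + "\n> ").strip() for key, question in INTAKE_QUESTIONS}

if __name__ == "__main__":
    print(run_intake())
```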
All Our Voices polling technology (from Princeton) works for large groups of people; it can get a rank order of what's important to people (a toy sketch of rank ordering from pairwise votes follows).
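The session didn't cover how the tool works internally, so this is only a toy sketch of one generic way to turn pairwise "which matters more?" votes into a rank order: score each option by the fraction of its matchups it won. The votes are invented, and this is not All Our Voices' actual algorithm.

```python
from collections import defaultdict

# Pairwise votes as (winner, loser) pairs; data invented for illustration.
votes = [
    ("housing", "parking"), ("housing", "wifi"), ("jobs", "parking"),
    ("housing", "jobs"), ("jobs", "wifi"), ("wifi", "parking"),
    ("jobs", "housing"), ("housing", "parking"),
]

wins = defaultdict(int)
appearances = defaultdict(int)
for winner, loser in votes:
    wins[winner] += 1
    appearances[winner] += 1
    appearances[loser] += 1

# Rank options by the share of their matchups they won.
ranking = sorted(appearances, key=lambda o: wins[o] / appearances[o], reverse=True)
for rank, option in enumerate(ranking, start=1):
    print(rank, option, f"{wins[option] / appearances[option]:.2f}")
```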
There should be a link between an organization's goals and a project like building a website. What kinds of analytics/metrics satisfy both? Best practice: share with a new client/org the successes of another org and how metrics demonstrated that success.
Asking an org to pin down those "amorphous things" like impacts is very challenging; some orgs are even resistant to working out or knowing their impacts. Self-interest around funding needs can drive this.
Sending an evaluation team in after the project is done is common, but it doesn't work well: the evaluators don't know about the program implementation. How do we work with funders to fund evaluation/metrics more at the beginning?
Some analytics can really help, like looking at page views (e.g., comparing one page with another; sketch below). How can this inform an org's offline work?
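A minimal sketch, assuming page-view data exported to a CSV; the file name and column names ("page", "views") are hypothetical and would need to match whatever the analytics tool actually exports.

```python
import csv
from collections import Counter

# Tally exported page views, then compare two pages of interest.
views = Counter()
with open("pageviews.csv", newline="") as f:
    for row in csv.DictReader(f):
        views[row["page"]] += int(row["views"])

for page in ("/events", "/donate"):   # hypothetical pages to compare
    print(page, views[page])
```

If one page consistently outdraws the other, that can feed offline decisions like where to put organizing effort, subject to the significance caveat sketched earlier.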
Power Base is at this point approaching 50 groups and would like to approach 500 (how to increase infrastructure and scale?). What about internal metrics (a sketch of one such metric follows)? In the first round, there were conversations with the administrators in charge of the database: what's useful and what isn't? How do we do something helpful with this information? What's the responsibility of the evaluator to keep the evaluation going (when doing just one training isn't enough)? Issues include group dynamics, communication, staff turnover, etc.
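One plausible internal metric, sketched with a completely hypothetical schema (an "activity" table with group_name and occurred_at columns; Power Base's real structure wasn't discussed): per-group activity in the database, which could flag groups needing follow-up beyond the initial training.

```python
import sqlite3

# Count each group's database actions over the last 90 days. The database
# file, table, and columns are all assumptions for illustration.
conn = sqlite3.connect("powerbase_export.db")
rows = conn.execute(
    """
    SELECT group_name, COUNT(*) AS actions_last_90_days
    FROM activity
    WHERE occurred_at >= date('now', '-90 days')
    GROUP BY group_name
    ORDER BY actions_last_90_days ASC
    """
).fetchall()

# Groups at the top of this list (least activity) may need a check-in.
for group, actions in rows:
    print(group, actions)
```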
How do we integrate feedback along the way for clients/orgs? What do the numbers mean? Have conversations about meaning and about assumptions too. Understanding the complexities of communities and metrics takes a lot of work. At the end of the day, you need to empower people as much as you can. How do you know that you're measuring the right thing? How do you deal with unintended consequences…
Best practice: regularly check in on what your mission is as an organization. Learn from problems as you go along. Figure out how to ask the right questions.
At the Drupal nonprofit summit recently: an example of an org that engaged all kinds of stakeholders in developing a website, so that it reflected the views and investment of many different levels.
Thinking about training around a new approach to metrics for a movement: all different kinds of social justice orgs, large and small, working together, with training that is transferable across all of them.
There is also interest in simple, smaller analytics, like online searches, that happen on a semi-regular basis.
Report back points:
• How funding impacts how we look at analytics and planning (e.g., goals); this often doesn't work well.
• Front-loading metrics/analytics vs. doing them at the end of a project. Asking people what success looks like.
• Training around metrics for a movement, bringing orgs together.