Why is IMM important?

Measuring activities and managing this information effectively can produce multiple benefits, including:

  • Producing valuable data on how to improve project performance; 
  • Keeping organizations accountable to stakeholders such as beneficiaries and funders; 
  • Generating evidence of positive impact, which can be used in fundraising, advocacy, and communications; and
  • Reducing the risk of projects creating or perpetuating negative or harmful outcomes.

Many organizations use their impact data to show evidence of program efficacy to potential donors.

“What you don’t measure, you can’t manage.” 

How do I get started?

Many people find IMM intimidating and overly complex at the outset. The sheer number of IMM frameworks, tools, and reports can cause confusion and decision paralysis. For those starting their IMM journeys, try these simple steps:

  • Familiarize yourself with IMM essentials.

    The resources at the bottom of this webpage provide introductory information essential to proper measurement and management. Spend some time reading through these helpful documents and exploring the websites.

  • Set a strategy.

    Going through a theory of change (ToC) exercise is a helpful place to start. A theory of change exercise allows an organization to logically link its activities and programs with the social impact it intends to create. ToCs make the connection between immediate outputs, short-term outcomes, and long-term impacts produced by an organization or project. Not only is this a useful IMM exercise, but creating a theory of change can also be a helpful practice for setting organizational strategy.

  • Operationalize your strategy.

    After creating a theory of change, determine how you might measure the change you intend to create. Match indicators with specific outputs, outcomes, and impacts, then define how you will collect data for those indicators and set performance targets. If you want to conduct an evaluation as part of your strategy, determine which evaluation type is ideal for your needs. There are many valid evaluation methods. A good place to start is Better Evaluation.

  • Integrate your data collection into your operations and existing processes.

    Data collection and monitoring should be built into organizational processes so that they do not create undue burdens on staff or beneficiaries. For example, an organization may want to measure the impact of its accelerator program on ventures’ ability to raise capital after acceleration, as compared to their performance beforehand. To capture this information, the accelerator would want to survey applicants – but it can save time by integrating its baseline questions into the program’s initial application.

  • Meet and discuss IMM findings.

    Many organizations in project-based grant cycles conduct IMM on those projects but immediately move on to the next program without reflecting on the last project’s insights. Data utilization is a frequently ignored IMM practice, and this is a missed opportunity. Something as simple as meeting as a group to discuss IMM results and brainstorming how to integrate those findings going forward is a low-effort way to start. Other ways to utilize evaluation data include interactive internal workshops or creating dashboards for staff and stakeholders to explore monitoring data.
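The "operationalize your strategy" step above can be sketched as a simple measurement plan. This is a minimal illustration in Python; every indicator, collection method, and target below is a hypothetical example, not a prescribed standard:

```python
# Minimal sketch of a measurement plan derived from a theory of change.
# All statements, indicators, methods, and targets are invented examples.

measurement_plan = [
    {
        "level": "output",
        "statement": "Entrepreneurs complete pitch training",
        "indicator": "number of entrepreneurs trained",
        "collection_method": "attendance records",
        "target": 50,
    },
    {
        "level": "outcome",
        "statement": "Entrepreneurs gain confidence in pitching",
        "indicator": "average self-assessed pitch confidence (1-5 scale)",
        "collection_method": "baseline and endline survey",
        "target": 4.0,
    },
]

def indicators_for(level, plan):
    """Return the indicators defined for one level of the theory of change."""
    return [row["indicator"] for row in plan if row["level"] == level]
```

In practice a plan like this usually lives in a spreadsheet or M&E tool; the point is simply that each output or outcome is paired with an indicator, a data source, and a target before data collection begins.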

What are some IMM best practices?

  • Start small. Many people see complex frameworks or indicator lists and think they must immediately conduct a complicated evaluation or track dozens of metrics. This is not so! Begin by selecting 3-4 indicators that make sense for your organization to track and that produce information you can actually use. Some organizations use light-touch monitoring and evaluation to inform a larger follow-up evaluation.
  • Don’t reinvent the wheel. Some organizations attempt to reinvent the wheel by creating their own frameworks, measurement plans, or indicators. However, there is a wealth of resources, indicator lists, and even survey templates freely available to choose from. Before building complicated IMM systems, begin by scanning available resources – many of these, such as survey questions, have already been tested and used on previous projects, which validates them as useful tools.
  • Customize existing resources for your needs. Organizations should look at existing frameworks and measurement plans applicable to their sector and geography. These resources won’t fit your project or organization perfectly, however, so some small customization may be required to maximize the usability of your information.
  • Maintain a data utilization mindset. Utilization-focused evaluation (UFE) is the practice of conducting IMM activities with a focus on how findings and data can inform organizational strategy and improve performance. This mindset allows users to cut down on redundant data gathering and helps ensure that learnings are effectively applied. For small organizations with constrained IMM capacity, this is especially helpful. Be sure you are only measuring what you will be able to use and manage.
  • There is no one “right way” to do IMM. Some professionals feel that there is only one valid evaluation method. However, useful data and learning can be obtained from a variety of methods and without conducting complex studies. Ensure that the data you produce will be useful for your organization and fits your current capacity, in terms of both financial and human resources.
  • If in doubt, contact an expert. IMM specialists spend their professional careers training in different evaluation and measurement methods. Members of ANDE can contact ANDE’s Impact Manager for complimentary 1:1 consultations, where they can get feedback on evaluation reports, recommendations for resources, help with creating an IMM strategy, and more.

Resources for getting started with IMM

Common terms and definitions

Below are some of the most commonly used IMM terms.

  • Baseline, endline

    Baseline refers to measurement taken on a group of stakeholders before an intervention takes place; endline is measurement taken after the intervention. Impact results are produced by comparing endline results with baseline data.

  • Indicators

    Also referred to as metrics, indicators are quantifiable measurements tied to an output, outcome, or impact. Within a theory of change, indicators specify how to measure the desired outcomes and impact. For example:

    As a short-term outcome, an accelerator may want to see its entrepreneurs develop greater confidence in their ability to deliver investment pitches. How would it measure this? The indicator tied to this outcome could be a self-assessment of entrepreneurial confidence, rated on a scale of 1-5.

  • Quantitative data

    Quantitative data is information usually collected in the form of numbers. It can be generated through administrative methods (e.g., analyzing financial records) or collected through surveys. An example of a quantitative data point is the number of beneficiaries expressing satisfaction with services provided.

  • Qualitative data

    Qualitative data is generated by interviews, focus group discussions, or open-ended survey questions. You can think of qualitative data as “words” instead of numbers. It is useful for uncovering the “how” and “why” of a situation, adding detail and greater nuance to quantitative data. In many cases, the best data collection practices combine both quantitative and qualitative data.

  • Benchmarking

    Benchmarking is comparing performance data with an external reference, such as the performance of similar organizations or a well-regarded standard like the Sustainable Development Goals (SDGs). Organizations can also benchmark current performance against past performance, a practice particularly useful for impact investors across their portfolios.

  • Attribution versus contribution

    Attribution is the ability to draw a causal link between an intervention (e.g., a training program) and an outcome (e.g., increased revenues). Contribution, alternatively, refers to how much of an impact an organization can reasonably take credit for. Measuring contribution in place of attribution is often a best practice when a causal link cannot be drawn between an organization’s activities and the measured outcomes. For example, a business might receive investment from an impact investor, technical assistance from an accelerator, and mentorship from an industry leader, resulting in better financial performance. Attributing the improvement to any one activity is difficult – instead, a better practice is to estimate the contribution of each activity.

  • Monitoring versus evaluation

    Often referred to in tandem as “monitoring and evaluation (M&E),” these components are complementary. Monitoring refers to ongoing data collection throughout the duration of a project; monitoring data also helps programs make tweaks to implementation in real time. Evaluation is frequently done after a project is completed to understand its impacts and whether intended outcomes were achieved. Monitoring data is often essential to evaluation – for example, surveying participants of a training program happens throughout the project’s lifespan but also informs an endline evaluation.
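The baseline/endline comparison described above amounts to a simple calculation. Here is a minimal sketch in Python, using invented self-assessment scores on the hypothetical 1-5 confidence indicator from the indicators example; real analysis would also consider sample size, attrition, and comparison groups:

```python
# Hypothetical baseline and endline self-assessment scores (1-5 scale)
# for a pitch-confidence indicator. All values are invented examples.

baseline_scores = [2, 3, 2, 4, 3]  # collected before the intervention
endline_scores = [4, 4, 3, 5, 4]   # collected after the intervention

def average(scores):
    """Mean of a list of scores."""
    return sum(scores) / len(scores)

def baseline_endline_change(baseline, endline):
    """Change in the indicator: endline average minus baseline average."""
    return round(average(endline) - average(baseline), 2)

print(baseline_endline_change(baseline_scores, endline_scores))  # 1.2
```

A positive change (here, 1.2 points on the 5-point scale) suggests movement toward the intended outcome, though on its own it does not establish attribution, as discussed above.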