How to Budget for Monitoring and Evaluation
When budgets are tight, it helps to remember why we fund monitoring, evaluation, and learning (MEL) at all: to reduce uncertainty, improve delivery, and build credibility with funders and boards. In this article we offer a way to think about M&E budgets so you can set a fair share, ensure delivery quality, collect the right evidence, and make a compelling case to your donors.
Start with your MEL needs
Before setting a MEL budget, it’s important to clarify your specific needs:
- What do we need to learn? Are we tracking just outputs, or do we need to understand outcomes and behavior changes over time?
- What is our scale and complexity? Are we running a small pilot or a complex multi-site program?
- What data and methods fit best? Should we use quick surveys, interviews, or more detailed evaluation designs?
- Who will do the MEL work? Will it be dedicated MEL staff, existing program team members, or external partners?
Remember, MEL budgeting usually covers two main types of activities:
- Everyday MEL: ongoing monitoring and learning that happens throughout program delivery, helping catch issues early and ensure quality.
- Time-bound studies: a focused evaluation or research project linked to a specific decision or milestone.
Be sure to budget for both MEL types along with essential costs for tools, translations, fieldwork, analysis, training, and reporting.
Follow These Practical Bands for Budgeting
There is no one-size-fits-all number, but these guideline ranges (as a percentage of the program budget) can help:
| % of Program Budget | When to Use It |
| --- | --- |
| 3–5% | Quick learning for pilots or mid-year checks |
| 6–10% | Program-level answers and donor-ready credibility |
| 11–15% | High-stakes, multi-site scale, or external reports |
For example, a ₹1 crore program at scale would set aside ₹10–12 lakhs for MEL: ₹4–5 lakhs for continuous monitoring and ₹6–7 lakhs for an external evaluation tied to donor reporting.
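The band arithmetic above can be sketched in a few lines. This is an illustrative helper, not part of any standard toolkit; amounts are in lakhs (₹1 crore = 100 lakhs), and the band percentage is whatever you choose for your program:

```python
# Sketch: compute a MEL allocation from a program budget and a chosen band.
# Amounts in lakhs; the 11% figure below is illustrative, not prescriptive.

def mel_budget(program_budget_lakhs: float, band_pct: float) -> float:
    """Return the MEL allocation for a given band percentage."""
    return program_budget_lakhs * band_pct / 100

# A ₹1 crore program using the 11–15% band, taking ~11%:
program = 100  # lakhs
mel = mel_budget(program, 11)
print(f"MEL budget: ₹{mel:.0f} lakhs")  # prints: MEL budget: ₹11 lakhs
```

The same helper works for a quick sensitivity check, e.g. comparing the low and high ends of a band before committing a line item.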
Splitting the MEL Budget
Divide the MEL budget between ongoing MEL and a time-limited study, adjusting for your program stage:
| Stage | Everyday MEL (%) | Time-bound Study (%) |
| --- | --- | --- |
| Pilot | 50 | 50 |
| Steady state | 65 | 35 |
| Scale/Replicate | 45 | 55 |
- Pilots: balance between exploration (studies) and early course-correction (everyday MEL).
- Steady state: focus on quality assurance, fewer big studies.
- Scale: donors expect both credibility and replication proof.
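The stage-based split can also be sketched as a small lookup. The percentages come from the table above; the function and stage keys are illustrative names, not an established convention:

```python
# Sketch: split a MEL budget between everyday MEL and a time-bound study,
# using the stage percentages from the table above (illustrative keys).

SPLITS = {
    "pilot": (50, 50),
    "steady_state": (65, 35),
    "scale": (45, 55),
}

def split_mel(mel_budget_lakhs: float, stage: str) -> dict:
    """Return the everyday/study allocation (in lakhs) for a program stage."""
    everyday_pct, study_pct = SPLITS[stage]
    return {
        "everyday": mel_budget_lakhs * everyday_pct / 100,
        "study": mel_budget_lakhs * study_pct / 100,
    }

# An ₹11 lakh MEL budget for a program at scale:
allocation = split_mel(11, "scale")
print(allocation)  # everyday ≈ 4.95 lakhs, study ≈ 6.05 lakhs
```

These figures line up with the earlier worked example: roughly ₹4–5 lakhs for continuous monitoring and ₹6–7 lakhs for the external evaluation.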
Also phase your spending:
- Q1: tools, training, and baselines
- Q2–Q3: ongoing MEL + fieldwork
- Q4: wrap-up reports and learning workshops
Keep MEL costs out of “Overhead”
When pitching to donors, keep MEL costs out of overhead. Donors often limit overhead spending. To avoid confusion:
- Classify MEL costs as program expenses.
- Include everyday MEL, evaluation studies, and data protection here.
- If commissioning an independent impact assessment, show it as a separate program line with clear scope and deliverables.
What about Large-Scale Studies?
Sometimes donors want deeper, sector-shaping evidence: a randomized controlled trial, a longitudinal panel, or a state-wide assessment. These are valuable, but they should be budgeted over and above your core MEL percentage. Treat them as a separate line item, co-funded with partners or positioned as a sector investment. That way, your everyday MEL continues to strengthen delivery, while large studies generate insights that benefit the field at large.
Keep Donors Engaged
Keep donors involved and confident by:
- Sending concise reports on key questions answered and recommendations.
- Hosting workshops to review findings and agree next steps.
- Providing short, regular updates from ongoing MEL activities.
- Using charts or dashboards to turn complex data into simple visuals.
This keeps MEL a strategic conversation, not a one-off cost.
When you tie MEL spending to decisions, show the split (everyday vs study), and commit to clear milestones and a learning workshop, MEL stops looking like overhead and starts reading like strategy.
Author:
Populi Consulting