A theme that keeps surfacing across the voluntary and training sectors is that budgets are tightening. Courses that once ran twice a year now run once. Training providers are being asked to deliver more with less. So if you're wondering how to convince decision-makers to invest in your training, you're not alone.
When resources are squeezed, the instinct is often to focus on delivery and cut evaluation. But this is precisely the time when evidence of impact becomes your most valuable currency. When you can demonstrate that your courses genuinely change how people think, act and lead, you stop being seen as a cost and start being recognised as a catalyst.
Moving from Satisfaction to Transformation

Most organisations still rely on end-of-course feedback forms. They capture whether participants enjoyed the session, but not whether anything actually changed afterwards.
The real story begins in the days, weeks and months after the course. Did participants apply the skills they learnt? Did those skills improve the quality of their work or the effectiveness of their team? Did that lead to stronger results for their project, department or the individuals they work with?
Those questions can only be answered by tracking outcomes at multiple points in time, an approach often called longitudinal evaluation or longitudinal research.
One training coordinator recently put it perfectly:
“We use SurveyMonkey for our initial feedback survey and our 3-month survey, but it can’t connect the dots. We run follow-up surveys, but it doesn’t show us how individuals develop over time. We have to work that out ourselves.”
In an uncertain economic climate, the training providers that survive and succeed will be the ones that close the gap between initial feedback and ongoing insight. This is where the future of training evaluation lies.
Three Models to Bring Your Impact to Life
To move beyond satisfaction surveys, we can draw on three well-established frameworks which, when combined, form a comprehensive approach.
1. The Kirkpatrick Model
This classic model outlines four levels of evaluation:
- Reaction – How participants felt about the training.
- Learning – What knowledge or skills they gained.
- Behaviour – How they applied it in real situations.
- Results – What changed as a result (for the organisation or community).
It’s a straightforward progression from satisfaction to learning to application to impact, but many trainers stop at the first or second step and miss the chance to demonstrate the value of their courses.
Longitudinal surveys make it possible to measure all four levels: the immediate experience all the way through to long-term change in behaviour and outcomes.
2. Phillips ROI Model
Jack Phillips expanded the Kirkpatrick model by adding a fifth level: Return on Investment.
At its heart, his model asks whether the benefits of the course outweigh the costs.
In a world of shrinking budgets, this level matters. For example:
- Did improved team leadership reduce staff turnover?
- Did better communication skills save time or prevent project errors?
- Did increased confidence lead to higher client satisfaction?
When you track behavioural and organisational results over time, quantifying ROI becomes achievable rather than theoretical, because it becomes easier to pin down the specific financial savings or revenue gains involved. You're rarely going to be able to attribute all of that success to a single course, but defining and quantifying those successes goes a long way towards demonstrating the benefits of investing in training.
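The Phillips calculation itself is simple arithmetic: net benefits divided by the cost of the programme. As an illustrative sketch, with entirely hypothetical figures for a leadership course:

```python
# Hypothetical figures for a leadership course (illustration only)
course_cost = 8_000  # delivery, venue and participant time

# Estimated monetary benefits identified through follow-up surveys
benefits = {
    "reduced staff turnover": 12_000,  # recruitment savings
    "fewer project errors": 4_000,     # rework avoided
}

# Phillips ROI: net benefit as a percentage of cost
net_benefit = sum(benefits.values()) - course_cost
roi_percent = net_benefit / course_cost * 100

print(f"ROI: {roi_percent:.0f}%")  # (16,000 - 8,000) / 8,000 = 100%
```

The hard part is never the formula; it's defending the benefit estimates, which is exactly what longitudinal data makes possible.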
3. Brinkerhoff’s Success Case Method
While numbers tell part of the story, stories reveal the why.
Brinkerhoff’s approach identifies the most and least successful participants and studies what drove or hindered their progress.
It combines qualitative storytelling with quantitative data so you'll have a range of results for the stakeholders you report to, whether that's senior management, funders or the board. Stories bring numbers to life and the numbers give stories more credibility. You want both.
Putting It into Practice
You don’t need an evaluation department to use these frameworks. You simply need to design your surveys so they align with each level and then schedule them across the participant journey.
Makerble makes this straightforward: you can schedule the same survey to be sent out automatically at different points in time, for example before the course, straight after it, and three months later. Makerble then shows you the change in participants' answers over time. If you want, you can go further and get a 360° perspective of the impact by overlaying participants' responses with facilitators' observations, as well as the opinions of relevant external observers such as a course participant's manager or team-mate.
Why It Matters Now
Evaluation has often been an afterthought. Today, it’s a differentiator.
When you present clear, data-driven stories of transformation resulting from the training courses you deliver, you win the confidence and budgets of clients, funders and internal stakeholders. And when participants see their own progress visualised, from their first baseline to their final breakthrough, it reminds them that the learning has made a difference. That's motivating for them, motivating for you, and motivating for the decision-makers who commission your training courses.
In an era of shrinking budgets, evidence of impact is essential.
Makerble makes it easy to track learning journeys over time. We help training providers demonstrate transformation, not just attendance and satisfaction. Our free platform gives you the tools to write surveys yourself or you can give us a brief and we'll write the surveys for you. Learn more about how longitudinal surveys can help you prove your impact by reading this customer story from The Centre for Emotional Health who train thousands of participants globally.












