
L&D ROI presentation template: CFO-ready slide structure

A five-slide presentation structure for proving training investment ROI to finance leaders. Each slide includes the content framework, fill-in fields for your data, and presenter notes. Designed for US mid-market L&D leaders presenting to CFOs, COOs, or executive sponsors.


How to use this template

Each slide block below contains: the slide title and purpose, the content framework with fill-in fields for your specific data, and a presenter note with guidance on delivery. Replace all fill-in fields with your actual pilot data before presenting.

This template is designed for a 10–15 minute CFO or executive sponsor presentation. Keep it to 5 slides plus appendix. Finance leaders respect brevity; additional detail should go in the appendix, not the main presentation.

The 5-slide structure

Slide 1: The business problem. Purpose: establish the financial stakes before presenting the solution.

Slide title: "The hidden cost in our manager population"

Opening statement (read verbatim or adapt):

"Our [NUMBER] managers currently spend an estimated [X] hours per week on administrative tasks that don't require their judgment — status updates, report prep, meeting summaries, and data aggregation. At a fully-loaded cost of $[RATE]/hour, this represents approximately $[ANNUAL $] per year in manager capacity allocated to automatable work."

Supporting data points (from your time audit):

  • Average admin hours per manager per week: [FROM BASELINE AUDIT]
  • Time on the 3 highest-volume admin tasks: [TASK 1: X min / TASK 2: X min / TASK 3: X min]
  • Current AI tool usage rate across cohort: [X% using systematically]

Presenter note: Do not skip this slide. CFOs make budget decisions based on business problems, not program narratives. If you open with "we ran an AI training program," you've lost the frame. Open with the cost, then show the solution.

Slide 2: What we measured. Purpose: establish credibility; you measured before you acted.

Slide title: "Baseline measurement: what we tracked before the program"

Table: Baseline KPIs (fill in from your week-1 time audit)

KPI | Measurement method | Baseline value
Manager admin hours/week | Time audit survey (N=[#]) | [X hrs]
Status report turnaround time | Timed task sample | [X min]
1:1 prep time | Time audit survey | [X min]
Current AI workflow adoption rate | Direct survey | [X%]

Presenter note: This slide is your credibility builder. Most L&D presentations show no baseline data. Showing that you measured before acting immediately differentiates this from a typical "we ran a course" narrative. Finance will notice.

Slide 3: What changed. Purpose: show the outcome comparison, before vs. after.

Slide title: "Results at week 4: before vs after"

KPI | Baseline | Week 4 | Change
Manager admin hours/week | [X] | [Y] | [-Z%]
Status report turnaround | [X min] | [Y min] | [-Z%]
AI workflow adoption rate | [X%] | [Y%] | [+Z pp]
Output quality consistency | Variable | [Score] | Standardized

Win examples (2–3 specific anecdotes from the cohort):

  • "[Role] reduced status report time from [X] to [Y] — saving [Z min] per week"
  • "[Role] now preps 1:1 agendas in under 5 minutes vs [X min] previously"

Presenter note: Keep the before/after table to 4–5 rows maximum. The finance leader needs to absorb the delta quickly. Use the win examples to make the data human — one specific story lands harder than a table of percentages alone.

Slide 4: The financial value. Purpose: translate time savings into dollars, the CFO's language.

Slide title: "Financial value of the pilot"

Hours saved per manager per week: [X hrs]
× Managers in cohort: [N]
× 48 working weeks
= Annual hours recaptured: [TOTAL HRS]

× Fully-loaded manager hourly rate: $[RATE]
= Annual value: $[ANNUAL $]

Program cost (pilot): $[COST]
ROI: [X]%
Payback period: [X] months
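To sanity-check the math before you present it, the chain above can be scripted. All inputs below are illustrative placeholders, not real pilot figures; substitute your own data.

```python
# Illustrative inputs -- replace every value with your pilot data.
hours_saved_per_week = 2.0      # hours saved per manager per week
managers = 25                   # cohort size
working_weeks = 48              # excludes vacation and holidays
loaded_rate = 95.0              # fully-loaded manager cost, $/hour
program_cost = 40_000.0         # pilot cost

annual_hours = hours_saved_per_week * managers * working_weeks
annual_value = annual_hours * loaded_rate
roi_pct = (annual_value - program_cost) / program_cost * 100
payback_months = program_cost / (annual_value / 12)

print(f"Annual hours recaptured: {annual_hours:,.0f}")
print(f"Annual value: ${annual_value:,.0f}")
print(f"ROI: {roi_pct:.0f}%")
print(f"Payback: {payback_months:.1f} months")
```

With these placeholder inputs the script reports 2,400 hours recaptured, $228,000 in annual value, 470% ROI, and a payback of about 2.1 months; the point is to verify your own numbers the same way before they reach a finance audience.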

Sensitivity table (shows ROI holds even at conservative end):

Savings scenario | Annual value | ROI | Payback
Conservative (12% reduction) | $[X] | [X%] | [X mo]
Observed ([X%] reduction) | $[X] | [X%] | [X mo]
Optimistic (30% reduction) | $[X] | [X%] | [X mo]

Presenter note: Show the sensitivity table even if your observed result is strong. It demonstrates intellectual honesty and pre-empts the CFO's "what if it's not as good at scale?" question. The conservative scenario should still show positive ROI — if it doesn't, revisit your cost inputs.
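The sensitivity rows are the same ROI arithmetic run at different assumed reduction rates. A quick sketch, again with placeholder inputs you would replace with your own baseline and cost data:

```python
# Sensitivity sweep: recompute annual value, ROI, and payback at
# several assumed admin-time reduction rates. All inputs are
# hypothetical placeholders for illustration.
baseline_admin_hours = 8.0      # admin hrs/manager/week from the time audit
managers, weeks, rate = 25, 48, 95.0
program_cost = 40_000.0

rows = []
for label, reduction in [("Conservative", 0.12), ("Observed", 0.22), ("Optimistic", 0.30)]:
    hours_saved = baseline_admin_hours * reduction          # hrs/manager/week
    annual_value = hours_saved * managers * weeks * rate
    roi = (annual_value - program_cost) / program_cost * 100
    payback_mo = program_cost / (annual_value / 12)
    rows.append((label, annual_value, roi, payback_mo))
    print(f"{label:12s} ${annual_value:>9,.0f}  {roi:5.0f}%  {payback_mo:4.1f} mo")
```

If the conservative row goes negative, the script makes it obvious which input (cost or baseline hours) is driving it, which is exactly the check the presenter note asks for.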

Slide 5: The expansion recommendation. Purpose: close with an ask that is clear, bounded, and decision-ready.

Slide title: "Recommended next step: expand to [Function/Department]"

  • Next cohort: [Function, N managers]
  • Timeline: [Start date → End date]
  • Investment: $[EXPANSION COST]
  • Projected annual value (at observed savings rate): $[PROJECTED $]
  • Projected ROI: [X%]

Decision requested:

"Approval to proceed with Phase 2 — [Function] cohort of [N] managers — at a budget of $[AMOUNT], with the same KPI measurement structure and a readout at the end of the 4-week sprint."

Presenter note: Ask for a bounded decision: a specific function, a specific budget, a specific timeline. Do not ask for "program approval" or "budget for the year." Bounded asks are approved faster and leave a clear path to the next expansion after the next pilot readout.

Appendix: what to include (and keep out of the main deck)

  • Full time audit methodology and survey questions
  • Individual KPI data by manager (not in main deck — privacy and relevance)
  • Full prompt library used in the pilot
  • Detailed cost breakdown (implementation hours, internal time, etc.)
  • Week-by-week adoption tracking data
  • Qualitative win collection (all 12+ examples, not just the 2–3 in slide 3)

CFO objection preparation

"Would this have improved anyway without training?"

The baseline + post-measurement structure answers this. The cohort's admin hours were not trending downward before the sprint — the change happened in the four-week window. The control is time: same managers, same roles, same workflows, before and after intervention.

"Is time recaptured actually used productively?"

This is a fair question. The conservative framing is: "We're not claiming headcount reduction. We're claiming that 2 hours per manager per week previously spent on low-judgment admin is now available for higher-value work — coaching, strategy, customer interaction." The dollar value uses the fully-loaded cost as a proxy for that time's value.

"Will this sustain at scale?"

Point to the week-8 adoption data if you have it. If you don't, commit to a week-8 check as part of the expansion scope. The answer is: "This is exactly why we run cohort pilots before org-wide deployment — we'll have week-8 sustainability data before Phase 3."

"Why not just use our existing LMS for this?"

The LMS tracks course completions. This program changed workflow behavior. The distinction is: after a manager completes a time management course in the LMS, they still write status reports the same way. After this sprint, they use a standardized AI workflow that takes 12 minutes instead of 45. Those are different outcomes.

Need the pilot data to fill this template?

Prentice's 4-week pilot generates all the before/after KPI data this template needs. Book an ROI scoping call and we'll tailor the pilot to your cohort, role mix, and target KPIs.