Why incidents per 100 shifts is the only fleet KPI that matters
Raw incident counts lie. Normalized rates tell you what's actually happening across drivers, vehicles, and routes — and what to do about it.
Most fleet owners we talk to can rattle off last month's incident count from memory. Twenty-three. Forty-one. Seventeen. Then they ask the wrong question: "Is that good?"
The answer is always the same: the count tells you nothing on its own. Add a route, add a driver, run an extra Saturday — your raw number goes up. Cut your weakest van for repairs and your number drops without anything actually getting better. Raw counts move with size. They don't move with quality.
The metric that survives all of that — and the one we pin to the top of every dashboard inside Fleet by Elevera — is incidents per 100 shifts.
What "incidents per 100 shifts" actually is
It's a normalization, nothing fancier. You take every incident logged in a window — accident, customer damage, parking scrape, missed pre-shift photo, anything you've decided counts — and divide by the number of completed driver shifts in that same window. Multiply by 100 so the number is human-readable.
```
incidents per 100 shifts = (incidents in period / completed shifts in period) × 100
```
A DSP running 60 shifts a day across a 30-day month is doing roughly 1,800 shifts. If they logged 54 incidents, that's 3.0 per 100 shifts. A neighbouring DSP running 40 shifts a day with 36 incidents lands at exactly the same 3.0. Two very different fleet sizes — comparable performance.
That's the whole point. The metric strips out volume so you can see signal.
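The arithmetic is a one-liner. A minimal sketch in Python, using the example figures from the two DSPs above:

```python
def incidents_per_100_shifts(incidents: int, completed_shifts: int) -> float:
    """Normalize an incident count by completed shifts in the same window."""
    if completed_shifts == 0:
        raise ValueError("no completed shifts in the window")
    return incidents / completed_shifts * 100

# The two fleets from the example land on the same rate despite
# very different volumes:
print(incidents_per_100_shifts(54, 60 * 30))  # 3.0
print(incidents_per_100_shifts(36, 40 * 30))  # 3.0
```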
Why raw counts mislead — three ways
1. They reward shrinking. A fleet manager who pulls three vans off the road for a week will report fewer incidents and look like a hero. Nothing changed about driver behaviour, vehicle condition, or route risk. The fleet just did less work.
2. They punish growth. Onboarding a new route, picking up a peak-season block, hiring two seasonal drivers — all of it inflates the count. Owners panic-call the dispatcher. The dispatcher tightens screws on the wrong people.
3. They flatten the worst weeks. If your fleet had two terrible Saturdays in a 30-day window, the monthly count smooths them into the average. Per-100-shifts, segmented by day of week, surfaces them immediately.
Setting your baseline
Before you can act on the metric, you need to know what's normal for your fleet specifically. Industry averages don't help here — a parcel-only DSP in a dense urban grid runs a different risk profile than a heavy-package fleet in suburbia.
A reasonable starting protocol:
- Pull at least 90 days of history. Anything shorter and you're modelling noise.
- Calculate weekly per-100-shift rates for that window. You'll get 12–13 data points.
- Take the median, not the mean. One catastrophic week shouldn't define your baseline.
- Note your interquartile range. Anything outside it is a real signal, not weather.
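The baseline protocol above fits in a few lines of stdlib Python. The weekly rates here are hypothetical, including one catastrophic week to show why the median is the right choice:

```python
from statistics import median, quantiles

def baseline(weekly_rates: list[float]) -> tuple[float, float, float]:
    """Return (q1, median, q3) of weekly per-100-shift rates.

    Median is the baseline; anything outside the q1..q3 band
    is a real signal, not weather."""
    q1, _, q3 = quantiles(weekly_rates, n=4)
    return q1, median(weekly_rates), q3

# 13 hypothetical weekly rates, including one terrible week (8.9):
weeks = [2.8, 3.1, 2.9, 3.4, 2.7, 3.0, 8.9, 3.2, 2.6, 3.3, 2.9, 3.1, 3.0]
q1, med, q3 = baseline(weeks)
print(med)  # 3.0: the median shrugs off the outlier week
```

Swap the mean in for the median and the same data gives you a baseline of roughly 3.5 — the one bad week would define "normal" for the whole quarter.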
Most well-run DSPs we've measured land between 2.0 and 4.5 incidents per 100 shifts over a rolling 30-day window. Below 2.0 usually means you're under-reporting (often because drivers don't trust the workflow). Above 5.0 means something specific is broken — usually a single driver, a single vehicle, or a single delivery zone.
"We thought we had a fleet-wide problem. Once we segmented per-100-shifts by driver, it was three drivers. Two months of coaching, one departure, and our rate dropped from 4.8 to 2.6." — Operations lead, 42-vehicle DSP, Hamburg
How to actually use the metric
Tracking it is half the work. The half people skip is deciding what triggers an action. A KPI without a trigger is wallpaper.
Here's the trigger system we ship by default with Fleet by Elevera:
| Rate (per 100 shifts) | Status | Default action |
|---|---|---|
| ≤ baseline median | Healthy | No action. Don't manufacture work. |
| Median → +50% | Watch | Auto-tag for the next ops review. No alert. |
| +50% → +100% | Investigate | Slack/email to dispatcher. Review by driver and vehicle. |
| > +100% | Escalate | Owner notified. Pull route history, run before/after photos. |
The point of the bands is that most of the time, you do nothing. Operations teams burn themselves out chasing every uptick. Most upticks regress to baseline within a week. The bands tell you when to actually pick up the phone.
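The band logic is simple enough to encode directly. A sketch of the table above as a function, assuming a baseline median you've already computed:

```python
def band(rate: float, baseline_median: float) -> str:
    """Map a per-100-shift rate onto the default action bands."""
    if rate <= baseline_median:
        return "healthy"      # no action, don't manufacture work
    if rate <= baseline_median * 1.5:
        return "watch"        # auto-tag for the next ops review
    if rate <= baseline_median * 2.0:
        return "investigate"  # notify dispatcher, review by driver and vehicle
    return "escalate"         # notify owner, pull route history

# With a baseline median of 3.0 per 100 shifts:
for rate in (2.4, 3.9, 5.5, 7.2):
    print(rate, band(rate, 3.0))
```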
Segmentation is where it gets useful
A fleet-wide rate is a smoke alarm. Segmented rates are a thermal camera.
The four cuts we look at every Monday morning:
- By driver. Top-decile and bottom-decile drivers, ranked by their personal per-100-shifts rate over the last 60 days. The bottom decile gets coaching first. The top decile gets quoted in the team channel.
- By vehicle. A van that runs hotter than the fleet baseline is usually telling you something — worn tires, blind-spot geometry, an electrical gremlin. Fleet's maintenance module ties incident clusters to service history so you can see it on one screen.
- By route. Some delivery zones are structurally riskier (narrow streets, high parking density, frequent reversing). If route A runs at 4.8 and route B at 2.1, route A doesn't need a better driver — it needs a different van or a route restructure.
- By shift type. Peak-season Saturdays often run 2× the incident rate of a normal Tuesday. If you can see that pattern, you can staff for it. If you only see the monthly average, you can't.
Common objections, briefly
"It penalises busy days." It actually does the opposite — by normalizing on shifts, it tells you whether your busy days are riskier per unit of work. Often the answer is "not really, we just had more work" — and that's a useful answer.
"My team won't log every incident." Then your baseline will be artificially low and your trend lines will be junk. The fix isn't to abandon the metric, it's to make logging take 10 seconds. (Driver-side, that's the entire reason Fleet Go exists.)
"What about severity?" Don't try to fold severity into a single number. Track per-100-shifts as a count, and then track severity-weighted cost per shift as a separate metric. Two metrics, two purposes.
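The companion metric can be this small. The severity weights below are invented placeholders — calibrate them against your own claims history:

```python
# Hypothetical severity weights in EUR, tune to your own claims data.
SEVERITY_COST = {"scrape": 150, "damage": 600, "accident": 4000}

def cost_per_shift(incident_types: list[str], completed_shifts: int) -> float:
    """Severity-weighted cost per shift, tracked beside (not inside)
    the per-100-shifts count."""
    return sum(SEVERITY_COST[t] for t in incident_types) / completed_shifts

print(cost_per_shift(["scrape", "scrape", "accident"], 1800))
```

One fleet can have a falling incident rate and a rising cost per shift at the same time — that combination is invisible if you fold severity into a single number.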
A 30-minute first step
If you don't have this metric on your dashboard yet, you can stand it up before lunch:
- Export your last 90 days of incidents to a spreadsheet.
- Pull completed shift count for the same window from your scheduling tool.
- Build the weekly rate, plot it.
- Mark your median and your action thresholds.
- Decide who gets the alert when the rate crosses each band.
That's the entire setup. Everything after is execution.
If you'd rather not build it by hand, Fleet by Elevera ships with this dashboard pre-wired — every incident logged from a driver's phone is automatically counted, segmented, and trended against your baseline. The first 14 days are free. No card.
Nedim Aganović
Co-founder, Elevera
Writing about fleet operations, DSP management, and the data behind last-mile delivery. Part of the team building Fleet by Elevera.