The Line They Stopped Trusting
14:20 on a Wednesday
Line 5 goes down for a scheduled changeover. The new SKU is a 450g Mediterranean Vegetable Mix. New bag format, new film supplier, new label layout, different sealing characteristics. Planned downtime: 18 minutes. The crew has done this hundreds of times.
At 15:02, the line is still not stable.
The reject rate spiked on restart and has not settled. Operators have made three rounds of manual adjustments. The vision system is flagging good packs and passing ones it shouldn't. Line speed is at 65% of nominal. The scrap bins are filling.
The first update to reach the COO is about the delivery schedule; no data on defects or reject rates has reached him yet. By 15:02, the downstream effects are already in motion: a palletizing backlog, disrupted cold storage sequencing, two trucks at risk of missing their departure windows.
From an 18-minute changeover.
The adjustment spiral
This is the part the incident report won't fully capture, because it unfolds in real time across a series of small decisions where each seems reasonable in isolation.
First, an operator overrides the vision system threshold on the seal check, reasoning the new film might be reading differently. The reject rate drops slightly, then rises again. Second, a different operator re-teaches the label alignment check using packs from the current run, some within spec, some not. The threshold is now calibrated to a mixed population of good and marginal product. Third, maintenance raises the sealing temperature slightly to compensate for the new film's thermal characteristics. This improves seal consistency on some pack sizes, but it either introduces a new variable or works against the threshold override made in the first adjustment.
Ninety minutes after the scheduled restart, Line 5 has accumulated four manual interventions, none of which addressed the root cause, each of which introduced a new source of variability. The reject rate is still unstable. The operators are experienced and well-intentioned. The system gave them no signal about what was actually wrong. Every intervention was a guess.
The full accounting of that shift (scrap, reduced throughput, overtime, downstream logistics disruption) comes to approximately 38,000 units of lost production. All from a single changeover.
Ready to see exactly what is happening during changeovers?
The arithmetic of unreliability
Eight lines. An average of eight changeovers per line per day. Call it 64 across the plant on an average day. If even one in ten develops some degree of instability (not a full failure like Line 5, just elevated reject rates for the first 20 minutes, a modest speed reduction, a small scrap increase), the cumulative drag on daily output is significant before a single alarm has been triggered.
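To make that drag concrete, here is a back-of-the-envelope sketch. The line count and changeover frequency come from the paragraph above; the line speed, the length of the unstable window, and the scrap figures are illustrative assumptions, not data from this plant.

```python
# Back-of-the-envelope estimate of the daily drag from "minor" changeover instability.
# Line count and changeover frequency follow the text; every other figure is an
# illustrative assumption, not plant data.

LINES = 8                         # production lines
CHANGEOVERS_PER_LINE = 8          # average changeovers per line per day
INSTABILITY_RATE = 0.10           # share of changeovers that restart unstable
UNSTABLE_MINUTES = 20             # assumed length of the unstable window per event
NOMINAL_PACKS_PER_MIN = 120       # assumed nominal line speed
SPEED_WHILE_UNSTABLE = 0.80       # assumed fraction of nominal speed while unstable
EXTRA_SCRAP_RATE = 0.05           # assumed extra reject share during the unstable window

changeovers_per_day = LINES * CHANGEOVERS_PER_LINE            # 64
unstable_events_per_day = changeovers_per_day * INSTABILITY_RATE

# Packs never produced because the line runs below nominal speed
speed_loss_per_event = UNSTABLE_MINUTES * NOMINAL_PACKS_PER_MIN * (1 - SPEED_WHILE_UNSTABLE)

# Packs produced but rejected during the unstable window
scrap_per_event = UNSTABLE_MINUTES * NOMINAL_PACKS_PER_MIN * SPEED_WHILE_UNSTABLE * EXTRA_SCRAP_RATE

daily_drag = unstable_events_per_day * (speed_loss_per_event + scrap_per_event)
print(f"Estimated packs lost per day to 'minor' instability: {daily_drag:,.0f}")
```

With those placeholder figures the sketch lands at a few thousand packs a day, lost without a single escalation.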
The events that get escalated are the visible failures: the 90-minute recovery, the missed delivery window, the overtime bill. The events that don't get escalated are the dozens of minor instabilities that resolve themselves, get absorbed by operator adjustments, and disappear into the shift's general variance.
Both categories cost production capacity. Only one generates a report. The COO looking at weekly throughput data sees the net result of both.
The behavior that follows
Later that week, the 450g Mediterranean Vegetable Mix appears on the schedule again. Same line. Same changeover window.
The operators weren't on Line 5 when it failed. But word travels on a production floor. They know the SKU was difficult. They know the last changeover took 90 minutes to stabilize. They don't know why, because the root cause investigation is still in progress. What they know is that this SKU is unpredictable.
So they slow down. They extend the pre-changeover checks. They run the line below nominal speed for longer after restart, waiting for the reject rate to settle before pushing throughput. They're not doing this because they were told to. They're doing it because they've lost confidence in the changeover, and their response to lost confidence is to build in margin.
That margin costs time. It costs throughput. And it's invisible in any operational report, because it looks like caution rather than inefficiency.
This is the compounding effect the incident report doesn't capture. The 38,000 units lost on Wednesday are a one-time event. The throughput tax that follows (as experienced operators adjust their behavior around a SKU they no longer trust) runs for every shift that SKU appears on the schedule.
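As an illustration of how that tax compounds, here is a minimal sketch over a month of shifts. Every figure in it (line speed, caution window, how often the SKU is scheduled) is a placeholder assumption, not data from this plant.

```python
# Rough sketch of the "throughput tax" from post-incident caution: operators run a
# distrusted SKU below nominal speed for longer after every changeover.
# All parameters are placeholder assumptions for illustration only.

NOMINAL_PACKS_PER_MIN = 120     # assumed nominal line speed for the SKU
CAUTION_MINUTES = 15            # assumed extra minutes run below nominal per changeover
CAUTION_SPEED = 0.70            # assumed fraction of nominal speed during that window
SKU_CHANGEOVERS_PER_DAY = 2     # assumed changeovers into this SKU per day, plant-wide
DAYS_PER_MONTH = 26             # assumed production days per month

tax_per_changeover = CAUTION_MINUTES * NOMINAL_PACKS_PER_MIN * (1 - CAUTION_SPEED)
monthly_tax = tax_per_changeover * SKU_CHANGEOVERS_PER_DAY * DAYS_PER_MONTH
print(f"Packs of throughput lost per month to caution on one SKU: {monthly_tax:,.0f}")
```

Even with modest placeholder numbers, the monthly tax on a single distrusted SKU ends up in the same order of magnitude as the incident that caused it.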
In a plant where changeovers happen 64 times a day, confidence is a critical production input.
Over time, if the pattern repeats across multiple SKUs and multiple lines, the plant's effective flexibility begins to contract. The lines can run the formats, but the people running the lines have learned not to expect clean restarts and they've adjusted accordingly.
A frozen vegetables plant running 180 SKUs with six or more changeovers per line per day cannot afford that contraction. When schedule agility erodes, it shows up first as delivery performance, then as a capacity problem that no capital plan accounts for, because the constraint is not the equipment. It is the people running it, and what they have learned to expect.
--
Next week, we'll publish a calculator that quantifies what a changeover instability costs across a full month of shifts, not the incident itself but rather the throughput tax that follows.
Want to understand the same challenge from another perspective?
Read the CEO's perspective here: When the Right Decision Created the Wrong Outcome
Read the CFO's perspective here: The €220,000 Label