Why your stockout cutoff is wrong (and how percentile-based forecasting fixes it)
Most forecasting tools ship with global thresholds — a 90-day dead-stock cutoff, a fixed alert level. They are wrong for half the businesses that use them. Here is the case for percentiles instead.
Open almost any forecasting or inventory tool, find the settings page, and somewhere in there is a number that says “dead-stock cutoff: 90 days.” That number, set once during onboarding, will quietly drive flagging, alerting, and reporting for the life of the system. And in roughly half the businesses where it's set, it is meaningfully wrong.
This is the pattern at the heart of why generic inventory tools tend to be a bad fit for specialty retail, and why Voorcast was built differently. The fix is not a better global default. The fix is to stop having a global default at all.
Why a global cutoff fails
Inventory businesses don't share a velocity distribution. A high-rotation FMCG e-commerce shop sells most of its catalogue every few days; a fine-wine merchant might have a typical inter-sale gap of weeks or months for vintages that are nonetheless performing exactly as expected. The same 90-day cutoff is paranoid in the first case and absurd in the second.
The same logic applies to every threshold in a forecasting tool:
- What counts as a fast mover?
- What's an unusual stockout?
- When should an alert fire?
- How thin is too thin to forecast confidently?
Every one of these is a percentile question masquerading as a threshold question. The right answer in your business is wherever your distribution actually puts the boundary — not where a tool's designer picked a round number.
Percentiles over thresholds
The fix is straightforward and statistically clean. Instead of hardcoding “dead stock = sold less than X in 90 days,” you compute the distribution of inter-sale gaps across your own catalogue, pick the percentile that corresponds to inventory you'd call genuinely stagnant, and use that percentile as the cutoff. The same engine handles every other classifier: fast movers from sales-velocity percentiles, stockout risk from your own forecast-vs-stock ratios, confidence labels from your own forecast-error history.
The benefit isn't just that the cutoffs are more accurate. The bigger benefit is that they stay accurate. As your catalogue grows and your mix shifts, the percentiles recompute. There is nothing to retune. The system is self-calibrating, by construction.
Statistical floors versus business thresholds
This isn't a blanket argument against constants. There are statistical floors — minimum sample sizes, peer-reviewed cutoffs like the Syntetos-Boylan ADI threshold, technical lower bounds — that are not business-specific and don't belong in your data. Those stay as code constants.
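One way to picture the split, as a hedged sketch: the ADI cutoff of 1.32 below is the published Syntetos-Boylan constant for separating smooth from intermittent demand and belongs in code; the minimum-sample floor and the helper's name are illustrative placeholders, not anyone's actual API:

```python
# Statistical floors: fixed, peer-reviewed, not business-specific.
# Syntetos-Boylan uses ADI >= 1.32 (average inter-demand interval,
# in periods) to separate intermittent from smooth demand patterns.
ADI_INTERMITTENT_CUTOFF = 1.32  # literature constant; stays in code
MIN_SAMPLE_SIZE = 5             # illustrative floor: too little history to classify

def demand_pattern(demand_periods: list[int]) -> str:
    """Classify demand as smooth or intermittent from the period indices
    in which demand occurred (e.g. week numbers with at least one sale)."""
    if len(demand_periods) < MIN_SAMPLE_SIZE:
        return "insufficient-history"
    gaps = [b - a for a, b in zip(demand_periods, demand_periods[1:])]
    adi = sum(gaps) / len(gaps)  # average inter-demand interval
    return "intermittent" if adi >= ADI_INTERMITTENT_CUTOFF else "smooth"
```

Nothing in that block depends on whose catalogue it runs against, which is exactly why it can be a constant. The dead-stock cutoff cannot make that claim, which is why it must be a percentile.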
What goes are the business thresholds: the numbers whose right values depend on what your catalogue looks like. Those should never be set by the tool's designer or by the customer during onboarding. They should be read from your data and recomputed continuously.
What this means in practice
You shouldn't have to know what a good dead-stock cutoff looks like for your business. You shouldn't have to maintain a best-sellers list. You shouldn't have to retune anything when you expand into a new category or a new season. The tool should be the analyst, not you.
This isn't a UX preference. It's a correctness argument. Every threshold a buyer is asked to set is a place where the tool can be wrong in your business in a way the tool will never tell you. Pull it out at the root and the whole class of problems disappears.
Voorcast was built around this principle from day one. The engineering rule in our codebase is literally “self-calibrating over configurable” — derive from data, do not hardcode, do not expose as a setting unless there is no other choice. The result is a forecasting tool that works as well for slow-rotation specialty as for high-velocity e-commerce, without the customer having to know which they are.