# Why your forecasting software will under-forecast your best sellers
Every AI forecasting tool I've worked with has the same bias. It under-forecasts bestsellers. Not occasionally, not for specific SKU types, but systematically and consistently for the products that matter most to your business.
This is not a bug in any particular tool. It's a structural consequence of how these models are trained. The model has seen many products. Most products have moderate, relatively stable sales velocity. The model learns what normal looks like and extrapolates forward from there. Your bestsellers are not normal. They have faster velocity, steeper acceleration trajectories, and stronger seasonality than the average SKU in the training data. The model looks at recent weeks and extrapolates forward. For a fast-moving bestseller, that extrapolation is almost always too low.
The downstream effect is that your planning tool generates reorder recommendations that are correct on average and wrong precisely when you need them most.
## What the under-forecasting actually looks like
In the Wargames Delivered implementation, the team discovered that Flieber's AI model was consistently producing reorder quantities for top SKUs that didn't account for the velocity acceleration those products were experiencing. The model saw a recent four-week average and projected it forward. The actual trajectory was steeper.
The practical consequence: reorder recommendations triggered late, quantities were too small, and bestsellers went out of stock. At one point, 20.5% of bestsellers were out of stock simultaneously. Some of that was cumulative — one stockout affects rank, which reduces velocity, which changes what the model sees next cycle — which is why this kind of problem tends to get worse over time rather than better.
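The gap is easy to see with made-up numbers. Here's a minimal sketch, assuming a hypothetical bestseller growing roughly 8% week over week while the model carries the recent four-week average forward flat:

```python
# Hypothetical weekly unit sales for a bestseller accelerating ~8% week
# over week. Illustrative numbers only.
recent_weeks = [100, 108, 117, 126]  # last four weeks of actuals

# Naive projection: carry the recent four-week average forward flat.
flat_weekly = sum(recent_weeks) / len(recent_weeks)

# What happens if the recent acceleration simply continues.
growth = recent_weeks[-1] / recent_weeks[-2]
next_4 = [recent_weeks[-1] * growth ** (i + 1) for i in range(4)]

shortfall = sum(next_4) - flat_weekly * 4
print(f"Flat projection, next 4 weeks: {flat_weekly * 4:.0f} units")
print(f"Accelerating trajectory:       {sum(next_4):.0f} units")
print(f"Units the forecast misses:     {shortfall:.0f}")
```

On these assumed numbers, the flat projection misses roughly a quarter of next month's demand, and that gap is exactly the inventory that never gets ordered.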
## Why bestsellers are structurally different from average products
A few properties of bestsellers that the typical AI forecasting model handles poorly:
**Velocity acceleration.** When a product is ranking well and getting organic traffic, each sale generates a bit more visibility, which generates a bit more organic traffic, which generates a bit more sales. The trajectory isn't flat; it curves upward. Historical averages don't capture a curve.

**Seasonality concentration.** Many bestsellers have sharp seasonal peaks, not gentle seasonal patterns. A product that does 30% of its annual volume in six weeks around a holiday has a sales pattern the model recognizes as seasonal but often underestimates in magnitude. The model knows Q4 is higher than Q2. It tends to underestimate how high Q4 actually gets for the top performers.

**In-stock dependency.** Bestsellers are often bestsellers partly because they have good in-stock rates, which supports consistent BSR and organic visibility. The model's historical data for these SKUs includes the periods when they were well-stocked. When reorder recommendations cause partial stockouts, the model then sees lower sales and recalibrates downward. The forecasting problem creates data that makes the forecasting problem worse.
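That last feedback loop can be sketched as a toy simulation. Every number below is an assumption chosen for illustration (the under-forecast margin, the stockout penalty), not a measurement from any real system:

```python
# Toy simulation of the stockout feedback loop. All values are assumed.
true_demand = 100.0       # units/week the product could sell if in stock
forecast = 100.0          # the model starts roughly calibrated
UNDER_FORECAST = 0.90     # assumed: model runs ~10% low on this SKU
STOCKOUT_PENALTY = 0.85   # assumed: a stockout week suppresses rank/velocity

for cycle in range(4):
    order_qty = forecast * UNDER_FORECAST
    observed_sales = min(order_qty, true_demand)
    if order_qty < true_demand:          # partial stockout this cycle
        true_demand *= STOCKOUT_PENALTY  # lower rank -> lower future demand
    forecast = observed_sales            # model recalibrates on what it saw
    print(f"cycle {cycle}: ordered {order_qty:.1f}, "
          f"sold {observed_sales:.1f}, demand now {true_demand:.1f}")
```

Four reorder cycles in, sellable demand has dropped from 100 to roughly 61 units a week without the product ever losing genuine customer interest. That is the "gets worse over time" dynamic in miniature.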
## The manual override model
The fix is not to replace the software. The fix is to build a two-tier approach that uses the software's model selectively.
For established bestsellers with 12 or more months of sales history: anchor the forecast on last year's sales, not the AI projection. Last year's numbers for a stable bestseller are almost always a better starting point than a model trained on average products. Add a judgment-based uplift for known demand drivers: is a Prime Day event coming up? Did a competitor go out of stock? Are there early signals of organic velocity acceleration?
For new SKUs without history: let the AI model run. It's more useful when there's no history to work from, and the stakes are lower because new SKUs typically aren't yet driving a large share of revenue.
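The two-tier rule can be sketched in a few lines. The field names here (`months_of_history`, `judgment_uplift`, and so on) are hypothetical placeholders, not fields from Flieber or any other tool:

```python
def choose_forecast(sku):
    """Sketch of the two-tier rule; all field names are placeholders."""
    if sku["is_bestseller"] and sku["months_of_history"] >= 12:
        # Tier 1: anchor on last year's actuals, then apply a
        # judgment-based uplift for known demand drivers
        # (Prime Day, a competitor stockout, early acceleration signals).
        return sku["last_year_units"] * sku["judgment_uplift"]
    # Tier 2: new SKU with no usable history -- take the AI projection.
    return sku["ai_forecast_units"]

established = {"is_bestseller": True, "months_of_history": 24,
               "last_year_units": 500, "ai_forecast_units": 380,
               "judgment_uplift": 1.15}
new_sku = {"is_bestseller": False, "months_of_history": 3,
           "last_year_units": 0, "ai_forecast_units": 120,
           "judgment_uplift": 1.0}

print(choose_forecast(established))  # last year + 15% uplift, not the AI's 380
print(choose_forecast(new_sku))      # the AI projection stands
```

The point of writing it down, even this crudely, is that the routing decision becomes explicit and reviewable rather than living in one planner's head.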
This hybrid approach — last year's sales for established bestsellers, AI model for new products — was the specific configuration that resolved the under-forecasting problem for Wargames Delivered. It wasn't complicated to implement once the underlying problem was diagnosed. The complication was getting there, because the tool was assumed to be working correctly until the stockout pattern became impossible to ignore.
## The human judgment problem
No software update will fully fix this. AI forecasting tools will keep getting better, and they'll keep running into the same structural issue: they're calibrated on average products, and your best sellers deviate from the average in exactly the ways that matter most for inventory planning.
The places where the model is reliably wrong are identifiable in advance. Your top 10 to 20 SKUs by revenue. Your products with the strongest seasonality concentration. Any product with an accelerating organic velocity trend. Those are the SKUs where the model output should be reviewed manually and where human judgment should have the authority to override.
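Picking those SKUs out can be as simple as a filter. A minimal sketch, with assumed field names and thresholds (30% of annual volume in a six-week peak, 5% week-over-week growth) that you would tune to your own catalog:

```python
def flag_for_manual_review(skus, top_n=20):
    """Sketch: select SKUs whose model output a human should review.
    Field names and thresholds are assumptions, not a real tool's schema."""
    by_revenue = sorted(skus, key=lambda s: s["annual_revenue"], reverse=True)
    top_sellers = {s["sku"] for s in by_revenue[:top_n]}

    flagged = set()
    for s in skus:
        if s["sku"] in top_sellers:                 # top revenue drivers
            flagged.add(s["sku"])
        elif s["peak_6wk_share"] >= 0.30:           # sharp seasonal peak
            flagged.add(s["sku"])
        elif s["velocity_wow_growth"] >= 1.05:      # accelerating velocity
            flagged.add(s["sku"])
    return flagged

catalog = [
    {"sku": "A", "annual_revenue": 90000, "peak_6wk_share": 0.10, "velocity_wow_growth": 1.00},
    {"sku": "B", "annual_revenue": 5000,  "peak_6wk_share": 0.35, "velocity_wow_growth": 1.00},
    {"sku": "C", "annual_revenue": 4000,  "peak_6wk_share": 0.05, "velocity_wow_growth": 1.08},
    {"sku": "D", "annual_revenue": 3000,  "peak_6wk_share": 0.05, "velocity_wow_growth": 1.00},
]
print(flag_for_manual_review(catalog, top_n=1))  # A (revenue), B (seasonal), C (accelerating)
```

Run weekly, a list like this turns "review the risky SKUs" from an intention into a queue.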
That authority needs to be explicit. Forecasting systems create a false sense of precision. A number that comes from a model looks more reliable than a number that comes from a judgment call, even when the judgment call is better grounded in actual context. Teams that don't build in a formal override process end up in a dynamic where the person who knows the model is wrong doesn't feel empowered to change the output because "the system says this."
Build the override process. Apply it to your top sellers. Do it before you're looking at a stockout, not after.