There’s a tempting story we tell about machines: given enough data and compute, they’ll find patterns where we can’t, predict what looks unpredictable, and make sense of chaos. That story is partly true — but it has limits.
Machines are exceptionally good at finding structure in noisy data, but randomness is a different beast. No amount of data or compute lets AI predict a lottery draw or a critical-hit roll in an RPG; in those cases chance, not hidden structure, decides the outcome.

When patterns exist, AI excels
Machine learning thrives when the world offers regularities. Feed a model thousands of labelled photos and it will learn facial features, lighting cues, and common poses.
Train it on historical player movement in an FPS game, and it can forecast tactical patterns that human opponents miss. Those successes are rooted in statistics: repeated behaviour creates predictable signals, and models mine those signals effectively.
But that efficiency depends on two things. First, the data must genuinely reflect future conditions. Second, the source of variability must be reducible to a pattern. When those two conditions hold, AI is superb. When they don’t — when outcomes are truly random or deliberately obfuscated — the advantage shrinks fast.
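To make the contrast concrete, here is a minimal sketch (assuming NumPy and scikit-learn, neither named in the original post) that trains the same classifier twice: once on labels that genuinely depend on the features, once on labels that are pure coin flips. The first run scores well above chance; the second stays pinned near 50% no matter how much data you add.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 10))

# Case 1: labels depend on the features, so there is a pattern to mine.
y_pattern = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Case 2: labels are independent coin flips, so there is nothing to learn.
y_noise = rng.integers(0, 2, size=n)

for name, y in [("pattern", y_pattern), ("noise", y_noise)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    acc = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: test accuracy = {acc:.2f}")

# Typical output: "pattern" lands above 0.90, "noise" hovers near 0.50.
```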
Randomness isn’t one thing
Randomness comes in flavours. Some noise is pseudo-random: generated by a deterministic algorithm that expands a seed into a sequence that merely looks random.
This is the standard Random Number Generator (RNG) used in most video games to determine loot drops or card shuffles; it’s designed to look random but is technically predictable if you know the seed.
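A two-line experiment makes that concrete. A minimal sketch using Python's standard library: seed two generators identically and they produce identical "random" loot rolls.

```python
import random

# Two independent generators, seeded with the same value.
a = random.Random(42)
b = random.Random(42)

# Both emit exactly the same "random" loot rolls.
print([a.randint(1, 100) for _ in range(5)])
print([b.randint(1, 100) for _ in range(5)])
# The lists match: a seeded PRNG is fully reproducible.
```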
Other noise is genuinely stochastic, where no underlying pattern exists to be learned. Then there’s adversarial randomness: when someone intentionally hides the pattern. The math distinguishes these cases, and we should, too.
Importantly, what we call “random” often depends on our model and data. A sequence that looks random to a shallow model might be predictable to a deeper one with more context.
Still, certain processes, like independent coin flips or a well-designed ‘true’ random system for an online poker server, generate outcomes that are, by design, unpredictable beyond chance.
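In code, the difference shows up as which entropy source you draw from. A sketch of a seedless shuffle using Python's secrets module, which pulls from the operating system's entropy pool rather than a reproducible seed:

```python
import secrets

deck = list(range(52))  # card indices 0..51

# Fisher-Yates shuffle driven by a CSPRNG (OS entropy).
# Unlike a seeded PRNG, there is no internal state to recover and replay.
for i in range(len(deck) - 1, 0, -1):
    j = secrets.randbelow(i + 1)  # uniform over 0..i
    deck[i], deck[j] = deck[j], deck[i]

print(deck[:5])  # the first five cards of an unpredictable deal
```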
The role of chance in limits
Why can’t AI predict a fair loot drop? Because the in-game RNG is often designed to maximize entropy and eliminate exploitable structure.
No historical sweep of past chest openings provides leverage to forecast the next item beyond what probability theory allows. Computation can simulate possibilities, but it can’t change the underlying probability; that fact is the hard limit.
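You can check that claim empirically. The sketch below (standard library only, with made-up numbers) tries the classic gambler's strategy of betting that a 1% drop is "due" after a long dry streak; the hit rate after a streak matches the base rate, because independent draws have no memory.

```python
import random

rng = random.Random()  # unseeded: the OS picks the starting entropy
p, trials = 0.01, 1_000_000

drops = [rng.random() < p for _ in range(trials)]

# Strategy: bet that a drop is "due" after 100 misses in a row.
streak, due_hits, due_tries = 0, 0, 0
for d in drops:
    if streak >= 100:  # we would have bet on this very draw
        due_tries += 1
        due_hits += d
    streak = 0 if d else streak + 1

print(f"base rate:         {sum(drops) / trials:.4f}")
print(f"rate after streak: {due_hits / max(due_tries, 1):.4f}")
# Both hover around 0.0100: the streak carries no information.
```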
There’s also a practical angle: models are built on algorithms, libraries, and hardware that introduce their own pseudo-randomness.
Training runs can vary with different random seeds, GPU nondeterminism, or even library versions. That’s not mystical — it’s engineering reality. It means reproducibility and careful evaluation matter a lot when claims edge into the “we beat randomness” zone.
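The practical counter-measure is to pin every source of pseudo-randomness you control. A minimal sketch for a Python training script; the PyTorch lines are an assumption (commented out), and GPU kernels can stay nondeterministic even after this:

```python
import os
import random

import numpy as np

def set_seed(seed: int = 1337) -> None:
    """Pin the common sources of pseudo-randomness in a training run."""
    # Note: hash randomization is fixed at interpreter start-up, so this
    # env var only affects subprocesses; export it before launch otherwise.
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)      # Python's stdlib RNG
    np.random.seed(seed)   # NumPy's global RNG
    # If the project uses PyTorch, pin its RNGs too:
    # import torch
    # torch.manual_seed(seed)
    # torch.cuda.manual_seed_all(seed)

set_seed()
```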
So where does that leave us?
In short: AI can tame many forms of apparent randomness by expanding context, adding features, or modelling more complex dependencies. It can’t, however, overturn pure chance.
Predictable enemy movement, common build orders in strategy games, and player skill trajectories are fruitful terrain. Pure stochastic processes and systems designed to be unpredictable remain outside reliable reach.
That distinction has consequences. It reminds us to be sceptical of bold claims that promise forecasting miracles for fundamentally random systems, like predicting the outcome of every dice roll in a digital board game.
It also pushes us to design models and experiments with humility: measure uncertainty, quantify limits, and report them plainly.
That humility is especially necessary because the simulation and testing frameworks we use to verify a game's fairness themselves rely on sophisticated but ultimately deterministic RNGs.
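One concrete way to "quantify limits" here is a statistical fairness check. A sketch assuming SciPy is available and using made-up telemetry numbers: an exact binomial test asks whether the observed drop count is consistent with the advertised 1% rate.

```python
from scipy.stats import binomtest

claimed_rate = 0.01
opens, drops = 50_000, 505  # hypothetical telemetry, not real data

# Exact two-sided binomial test of H0: the true drop rate is 1%.
result = binomtest(drops, opens, claimed_rate)
print(f"observed rate: {drops / opens:.4f}")
print(f"p-value:       {result.pvalue:.4f}")

# A tiny p-value is evidence the drop table is off its advertised rate;
# a large one means the data are consistent with the claimed 1%.
```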
Would you bet on an algorithm to pick the next Mythic-tier item from a guaranteed 1% chance box? I wouldn’t. But I’d trust a model to spot subtle shifts in player toxicity or account boosting behaviour long before a human would.
If this sparked a thought or made you grin at the hubris of “we’ll predict everything,” leave a comment. Tell us where you think AI should draw the line, or share a surprising example of machine intuition that changed your mind.