A profitable backtest is the beginning of a question, not the end of one. The result says that a set of rules performed well on a specific historical sample under specific assumptions. It does not say that the strategy will continue to work, that it can be executed at the modeled prices, or that its drawdowns are acceptable for real capital.
The first problem is cost. Many simple strategies look strong before fees and weak after fees. A daily-rebalanced strategy that trades heavily can lose most of its apparent edge to commissions, spreads, taxes, borrow costs, and market impact. Even in highly liquid markets, assuming perfect execution at the close or the next open can be optimistic.
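The cost drag compounds with turnover. A minimal sketch, in which the return and cost figures are purely illustrative rather than taken from any real strategy, shows how a small daily charge can consume most of a gross edge:

```python
def compound(daily_return, daily_cost, days=252):
    """Compound wealth over a trading year.

    Each day the strategy earns a gross return, then pays a cost
    proportional to its turnover. Both inputs are hypothetical.
    """
    wealth = 1.0
    for _ in range(days):
        wealth *= (1 + daily_return) * (1 - daily_cost)
    return wealth

# A 5 bp daily gross edge with no costs...
gross = compound(0.0005, 0.0)
# ...versus the same edge paying 4 bp per day in round-trip costs.
net = compound(0.0005, 0.0004)
```

Here the costless version compounds to roughly a 13% annual return, while the version paying 4 basis points a day keeps only a small fraction of that; a real cost model would depend on turnover, order size, and venue.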
The second problem is slippage. A backtest often uses one price point, but real orders face queues, spreads, gaps, partial fills, and latency. A strategy that depends on small price differences is especially sensitive to execution assumptions.
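One common hedge against optimistic fills is to charge every order half the quoted spread plus an extra slippage allowance. A minimal sketch, where `spread_bps` and `slippage_bps` are made-up illustrative parameters rather than calibrated values:

```python
def fill_price(mid, side, spread_bps=2.0, slippage_bps=1.0):
    """Conservative fill model: cross half the spread plus slippage.

    side is +1 for a buy, -1 for a sell. The basis-point inputs are
    hypothetical and would need to be estimated per instrument.
    """
    adjustment = (spread_bps / 2 + slippage_bps) / 10_000
    return mid * (1 + side * adjustment)

buy = fill_price(100.0, +1)   # pay above the mid
sell = fill_price(100.0, -1)  # receive below the mid
round_trip_cost = buy - sell  # cost of entering and exiting one share
```

Even this crude model makes the point: a signal whose per-trade edge is smaller than `round_trip_cost` is unprofitable regardless of what the single-price backtest showed.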
The third problem is data leakage. Leakage happens when information from the future enters the training or decision process. Examples include using revised macro data without point-in-time adjustment, ranking stocks with today's index constituents for a test that begins years ago, or normalizing features with statistics calculated from the entire data set.
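The normalization example is easy to demonstrate. In the sketch below (a toy trending series with hypothetical numbers), z-scoring the training window with full-sample statistics shifts every training value negative, because the full-sample mean already "knows" that prices rise later:

```python
from statistics import mean, stdev

prices = [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]  # toy trending series
split = 5
train = prices[:split]

# Leaky: statistics computed over the whole sample, test set included.
full_mu, full_sigma = mean(prices), stdev(prices)
leaky = [(p - full_mu) / full_sigma for p in train]

# Point-in-time: only data available at training time is used.
mu, sigma = mean(train), stdev(train)
clean = [(p - mu) / sigma for p in train]
```

The leaky features are all below zero, quietly telling the model that the sample starts "cheap"; the point-in-time features are centered on zero, as they would be in live trading.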
The fourth problem is overfitting. If a researcher tests many parameters, indicators, filters, and date ranges, the best backtest may simply be the one that matched noise. A strategy can look scientific while quietly becoming a custom fit to history.
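The multiple-testing effect can be shown directly: generate many strategies with no edge at all, and the best of them still looks attractive in-sample. A minimal sketch on pure noise, with all parameters illustrative:

```python
import random

random.seed(0)

def random_strategy_pnl(days=252):
    # A coin-flip position applied to pure noise: the true edge is zero.
    return sum(random.choice([-1, 1]) * random.gauss(0, 0.01)
               for _ in range(days))

# Pretend each run is a different parameter combination being "tested".
results = [random_strategy_pnl() for _ in range(1000)]
best = max(results)                       # the backtest that gets published
typical = sorted(results)[len(results) // 2]  # what the family really earns
```

The median strategy hovers near zero, as it must, yet the best of a thousand worthless variants posts a return that looks like a discovery. Selecting it and reporting its in-sample performance is exactly the custom fit to history described above.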
A stronger process uses out-of-sample tests, walk-forward validation, realistic costs, stress periods, and simple rules. The strategy should be evaluated not only by total return but also by maximum drawdown, volatility, turnover, exposure, capacity, and behavior during adverse regimes.
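Two of these checks are simple enough to sketch directly: a maximum-drawdown calculation and a walk-forward splitter whose test windows never overlap their training data. Function names here are illustrative:

```python
def max_drawdown(equity):
    """Largest peak-to-trough decline of an equity curve, as a fraction."""
    peak, worst = equity[0], 0.0
    for value in equity:
        peak = max(peak, value)
        worst = max(worst, (peak - value) / peak)
    return worst

def walk_forward_splits(n, train_len, test_len):
    """Yield (train, test) index ranges that never look ahead.

    Each test window starts where its training window ends, and the
    window then rolls forward by test_len for the next evaluation.
    """
    start = 0
    while start + train_len + test_len <= n:
        yield (range(start, start + train_len),
               range(start + train_len, start + train_len + test_len))
        start += test_len

dd = max_drawdown([100, 120, 90, 110, 80])  # toy equity curve
splits = list(walk_forward_splits(10, train_len=4, test_len=2))
```

The toy curve's worst decline is the fall from 120 to 80, a one-third drawdown, even though the curve both starts and ends within a modest range of its starting value; total return alone would never reveal it.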
Backtests are valuable because they force ideas into measurable form. They become dangerous when they are marketed as proof. Treat them as research evidence, not as a promise.