Capacity Constraints in Trading Algorithm Selection
Why the most important question about any algorithm isn't "what are the returns?" but "how much capital can it actually absorb?"—and how to avoid the costly mistake of deploying capital into strategies that have already exceeded their natural limits.
The pitch deck shows a Sharpe ratio of 2.8 and annual returns of 47%. The backtest spans fifteen years with consistent performance across market regimes. The strategy appears to be exactly what your portfolio needs. You allocate $50 million. Six months later, your realized Sharpe ratio is 0.9 and returns are running at 8% annually. The algorithm isn't broken—it's simply drowning in too much capital.
This scenario plays out with depressing regularity across the institutional investment landscape. Sophisticated allocators, armed with extensive due diligence processes and quantitative evaluation frameworks, consistently underweight the single factor most predictive of future performance degradation: capacity constraints. They ask detailed questions about signal construction, risk management, and historical drawdowns while accepting capacity estimates that are little more than optimistic guesses—or worse, deliberate exaggerations designed to attract larger allocations.
Capacity is not a secondary consideration in algorithm selection; it is the primary filter through which all performance claims must be evaluated. A strategy with modest backtested returns but genuine capacity for your intended allocation will outperform a spectacular backtest that cannot absorb real capital. Understanding capacity constraints—how to measure them, how to verify claims, and how to structure allocations accordingly—separates successful algorithm deployment from expensive disappointment.
This analysis provides a comprehensive framework for evaluating capacity in trading algorithm selection. We examine the fundamental drivers of capacity constraints, the methodologies for estimating realistic capacity, the red flags that signal overstated claims, and the structural approaches that maximize the probability of achieving backtested performance with real capital. The goal is practical: equipping allocators with the tools to avoid capacity-related disappointment and identify algorithms that can genuinely deliver.
The Physics of Capacity Constraints
Capacity constraints emerge from fundamental market mechanics that no amount of cleverness can circumvent. Understanding these mechanics is essential for realistic capacity assessment.
Market Impact: The Inescapable Tax
Every trade moves prices. When you buy, your demand pushes prices up; when you sell, your supply pushes prices down. This market impact is not a bug in execution—it's a fundamental feature of how markets incorporate information. The more you trade, the more you pay.
Market impact scales non-linearly with trade size. Doubling an order more than doubles its total impact cost, because per-share impact grows with the square root of order size, so total cost grows roughly with size to the 1.5 power. This non-linearity creates a natural ceiling on strategy capacity: at some point, the market impact cost exceeds the expected alpha, and the strategy becomes unprofitable.
Impact = σ × k × √(Q / ADV)
Where σ = volatility, k = impact coefficient, Q = order size, ADV = average daily volume
The square-root relationship means that trading 4% of daily volume costs roughly twice as much per share as trading 1% of daily volume. This relationship, documented extensively in academic literature and observed consistently in practice, establishes hard limits on how much capital a strategy can deploy.
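The square-root relationship is easy to sketch numerically. The impact coefficient `k` below is an assumed placeholder, not a calibrated value; in practice it must be estimated from execution data:

```python
import math

def impact_cost(volatility, order_size, adv, k=1.0):
    """Per-share market impact as a fraction of price: sigma * k * sqrt(Q / ADV).

    volatility: daily return volatility (e.g. 0.02 for 2%)
    order_size: order size Q (shares or dollars)
    adv:        average daily volume, same units as order_size
    k:          impact coefficient (assumed here; calibrate empirically)
    """
    return volatility * k * math.sqrt(order_size / adv)

# Trading 4% of ADV costs roughly twice as much per share as trading 1%
ratio = impact_cost(0.02, 4, 100) / impact_cost(0.02, 1, 100)
print(round(ratio, 2))  # 2.0
```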
Signal Crowding: The Decay of Shared Ideas
Alpha signals work because they identify mispricings before the broader market corrects them. But signals are rarely unique. Academic publication, commercial data vendors, and parallel research efforts mean that profitable signals are discovered—and exploited—by multiple participants simultaneously.
When many participants trade the same signal, several destructive dynamics emerge:
- Front-running: Faster participants capture alpha before slower ones can act
- Price impact amplification: Correlated trading increases collective market impact
- Signal arbitrage: The mispricing that generated alpha gets corrected faster
- Crowded exits: When the signal reverses, everyone rushes for the door simultaneously
The result is alpha decay—the progressive erosion of signal profitability as more capital chases the same opportunity. Strategies that showed 3% monthly alpha when managing $10 million may show 0.5% when $500 million is deployed across all participants trading similar signals.
Liquidity Constraints: The Walls of the Pool
Markets provide finite liquidity. The total amount that can be bought or sold at any price level is limited by the willingness of counterparties to transact. When strategy demands exceed available liquidity, execution quality deteriorates rapidly.
Liquidity Dimensions:
- Depth: Volume available at the best bid/ask
- Breadth: Volume available across price levels
- Resiliency: Speed at which liquidity replenishes after large trades
- Immediacy: Cost of executing immediately vs. patiently
Strategies targeting less liquid instruments—small-cap equities, emerging market currencies, exotic derivatives—face tighter capacity constraints than those trading large-cap equities or major currency pairs. But even liquid markets have limits: trying to move $500 million through Apple stock in an hour will incur substantial impact costs.
Opportunity Frequency: The Scarcity of Trades
Some strategies generate frequent signals; others trade rarely. A high-frequency market-making algorithm might execute thousands of trades daily, while a macro momentum strategy might rebalance monthly. Opportunity frequency directly constrains capacity.
Consider a strategy that generates 10 trading opportunities per day, each with an expected profit of $10,000. The strategy's gross capacity is $100,000 in daily profit potential. If you want to deploy $100 million with a 20% annual return target, you need $20 million in annual profits—but the strategy can only generate roughly $25 million annually (250 trading days × $100,000). You're already at 80% of theoretical capacity before accounting for any execution costs.
Many backtests dramatically overstate opportunity frequency by assuming perfect execution at historical prices. Real trading requires time to execute, during which opportunities may disappear or prices may move adversely.
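The arithmetic above can be verified with a back-of-the-envelope sketch (all figures are the illustrative numbers from the example):

```python
# Illustrative figures from the example above
opportunities_per_day = 10
profit_per_trade = 10_000          # expected profit per opportunity
trading_days = 250

annual_profit_potential = opportunities_per_day * profit_per_trade * trading_days

aum = 100_000_000                  # intended allocation
target_return = 0.20               # 20% annual target
required_profit = aum * target_return

utilization = required_profit / annual_profit_potential
print(f"{annual_profit_potential:,} potential, {utilization:.0%} utilized")
# 25,000,000 potential, 80% utilized
```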
| Capacity Driver | Low Capacity Indicators | High Capacity Indicators |
|---|---|---|
| Market Impact | Small-cap, illiquid instruments; concentrated positions | Large-cap, liquid instruments; diversified positions |
| Signal Crowding | Well-known signals; published research; common data | Proprietary signals; unique data; novel approaches |
| Liquidity | Thin markets; wide spreads; low ADV | Deep markets; tight spreads; high ADV |
| Opportunity Frequency | Rare signals; long holding periods | Frequent signals; short holding periods |
Quantifying Strategy Capacity
Moving from conceptual understanding to numerical estimates requires systematic methodology. Several approaches provide different perspectives on capacity.
The Market Impact Approach
The most rigorous capacity estimation method models market impact explicitly and determines the capital level at which impact costs consume expected alpha.
Step 1: Estimate Expected Alpha
Determine the strategy's gross expected return before transaction costs. Use out-of-sample testing, live trading results, or conservative backtest estimates with appropriate haircuts.
Step 2: Model Market Impact
For each instrument in the strategy universe, estimate the market impact function. The Almgren-Chriss framework provides a standard approach:
Temporary Impact: h(v) = η × v
Permanent Impact: g(v) = γ × v
Where v = trading rate, η = temporary impact coefficient, γ = permanent impact coefficient
Step 3: Calculate Breakeven Capacity
Determine the capital level at which expected impact costs equal expected alpha:
α × AUM = Impact Cost(AUM) + Fixed Costs
Solve for AUM where net alpha becomes zero
Step 4: Apply Safety Margin
The breakeven capacity is the theoretical maximum—operating there yields zero net alpha. Practical capacity should be 30-50% of breakeven to maintain meaningful net returns.
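The four steps above can be combined into a simple breakeven solver. The square-root cost model and its coefficient below are illustrative assumptions, not calibrated values:

```python
def breakeven_aum(alpha, impact_cost, fixed_costs=0.0, hi=1e12):
    """Bisect for the AUM at which gross alpha equals total costs (net alpha = 0).

    alpha:       expected gross annual return (e.g. 0.15 for 15%)
    impact_cost: function mapping AUM -> expected annual impact cost in dollars
    """
    lo = 1.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if alpha * mid - impact_cost(mid) - fixed_costs > 0:
            lo = mid          # still profitable: breakeven lies above
        else:
            hi = mid          # unprofitable: breakeven lies below
    return lo

# Illustrative model: total impact cost grows with AUM^1.5 (assumed coefficient)
cost = lambda aum: 3e-6 * aum ** 1.5
breakeven = breakeven_aum(alpha=0.15, impact_cost=cost)   # ~$2.5B
practical_capacity = 0.4 * breakeven                      # 30-50% safety margin
```

With these assumed numbers the solver finds a breakeven near $2.5 billion, implying a practical capacity of roughly $1 billion after the safety margin.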
The Participation Rate Approach
A simpler approach constrains trading to a maximum percentage of daily volume:
Capacity = Target Participation % × Universe ADV × Turnover Adjustment
Where Target Participation typically ranges from 1% to 10% depending on strategy type
For example, if a strategy trades a universe with $10 billion combined daily volume, targets 5% participation, and has 50% monthly turnover:
Capacity ≈ 5% × $10B × (1/0.5) = $1 billion
This approach is quick but crude—it ignores the concentration of trading in specific instruments and the non-linearity of market impact.
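The worked example translates directly into code; the inputs are the same illustrative figures used above:

```python
def participation_capacity(universe_adv, participation, monthly_turnover):
    """Crude capacity estimate: tradeable daily dollars scaled by turnover."""
    return participation * universe_adv * (1 / monthly_turnover)

# $10B combined daily volume, 5% participation, 50% monthly turnover
cap = participation_capacity(universe_adv=10e9, participation=0.05, monthly_turnover=0.5)
print(f"${cap / 1e9:.0f}B")  # $1B
```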
The Historical Decay Approach
For strategies with live trading history at multiple AUM levels, regression analysis can estimate the relationship between capital and performance:
Return_t = α - β × ln(AUM_t) + ε_t
Logarithmic specification captures diminishing returns to scale
This empirical approach captures all sources of capacity constraint—impact, crowding, and opportunity scarcity—without requiring explicit modeling of each component. However, it requires substantial live trading history at varying asset levels.
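A sketch of that regression on synthetic, noiseless data (NumPy assumed; the alpha and beta values are made up for illustration):

```python
import numpy as np

# Synthetic history: returns decay logarithmically with AUM (illustrative parameters)
aum = np.array([10e6, 25e6, 50e6, 100e6, 250e6, 500e6])
returns = 0.60 - 0.025 * np.log(aum)        # true alpha = 0.60, beta = 0.025

# Fit Return_t = alpha - beta * ln(AUM_t) by ordinary least squares
X = np.column_stack([np.ones_like(aum), -np.log(aum)])
(alpha_hat, beta_hat), *_ = np.linalg.lstsq(X, returns, rcond=None)

# Project expected return at a candidate allocation level
projected = alpha_hat - beta_hat * np.log(1e9)
```

With real data the fit will be noisy, and extrapolating far beyond the observed AUM range should be treated with the same skepticism as any capacity claim.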
Strategy-Specific Capacity Estimates
| Strategy Type | Typical Capacity Range | Primary Constraint | Capacity Sensitivity |
|---|---|---|---|
| High-Frequency Market Making | $10M - $100M | Latency competition, rebate economics | Very High |
| Statistical Arbitrage (Equities) | $100M - $2B | Market impact, signal crowding | High |
| Equity Long/Short | $500M - $10B | Idea generation, market impact | Medium |
| Global Macro | $1B - $50B+ | Opportunity frequency, crowding | Low-Medium |
| Managed Futures/CTA | $500M - $20B | Market impact in smaller markets | Medium |
| Cryptocurrency Momentum | $20M - $500M | Exchange liquidity, market fragmentation | High |
These ranges are indicative only—specific strategies within each category may have dramatically different capacity based on their particular construction, instruments traded, and competitive positioning.
The Capacity Verification Problem
Capacity claims are notoriously unreliable. Strategy developers have strong incentives to overstate capacity—larger capacity means larger potential allocations, which means larger fees. Even well-intentioned estimates often prove optimistic when confronted with real market conditions.
Why Capacity Claims Are Inflated
Backtest Assumptions: Most backtests assume execution at historical closing prices or volume-weighted average prices (VWAP). These assumptions ignore the market impact that real trading would have caused. A backtest showing $500 million capacity might have realistic capacity of $100 million once impact is properly modeled.
Favorable Period Selection: Backtests often span periods of favorable liquidity conditions. Capacity during the calm of 2017 differs dramatically from capacity during the volatility of March 2020. Prudent capacity estimation uses stressed liquidity assumptions.
Universe Expansion Assumptions: Some capacity estimates assume the strategy can expand into additional instruments as AUM grows. But new instruments may have different characteristics, lower liquidity, or weaker signal efficacy. The strategy that works in 100 stocks may not work in 500.
Competitive Blindness: Capacity estimates rarely account for other participants trading similar strategies. Your $200 million allocation joins $5 billion already chasing the same signals, collectively overwhelming the available alpha.
Verification Techniques
Live Trading Analysis: The gold standard for capacity verification is actual trading results at meaningful scale. Request performance data segmented by AUM level. If a strategy has only traded $10 million but claims $500 million capacity, treat the claim with extreme skepticism.
Strategies that have been validated through live trading at substantial scale—not just backtested—provide far more reliable capacity estimates. The gap between backtested capacity and live-verified capacity frequently exceeds 50%.
Implementation Shortfall Analysis: Examine the difference between theoretical (backtest) execution prices and actual achieved prices. Growing implementation shortfall as AUM increases signals that the strategy is approaching its capacity limits:
IS = (Actual Execution Price - Decision Price) / Decision Price
Rising IS with AUM indicates capacity stress
Fill Rate Degradation: Monitor what percentage of intended trades actually execute. Strategies approaching capacity limits show declining fill rates as orders become too large for available liquidity.
Independent Market Impact Modeling: Don't accept developer's impact assumptions. Build independent estimates using market microstructure data and compare to their claims. Significant discrepancies warrant deeper investigation.
Red Flags in Capacity Claims
Certain patterns reliably indicate overstated capacity:
- "Unlimited capacity": No strategy has unlimited capacity. This claim reveals either ignorance or dishonesty.
- Capacity exceeds 10% of market: Claiming to trade 15% of a market's daily volume without impact is implausible.
- No sensitivity analysis: Legitimate capacity estimates include ranges and assumptions. Single-point estimates without uncertainty bounds are suspect.
- Backtested only: Capacity claims without live trading validation at meaningful scale are hypothetical at best.
- Capacity grew with performance: If capacity estimates conveniently increased after strong backtest results, the estimation process may be outcome-driven.
- No degradation at current AUM: Even well within capacity, strategies should show some impact. Zero degradation claims are unrealistic.
The Value of Conservative Capacity Claims
Paradoxically, algorithms marketed with conservative capacity claims often prove more attractive than those claiming vast capacity. Conservative claims signal intellectual honesty about limitations, rigorous understanding of market microstructure, and alignment of interests with allocators. When a developer states "this strategy has $75 million capacity, and we won't accept allocations that would push it beyond that," they're demonstrating the kind of discipline that protects investor returns. Compare this to developers who claim $500 million capacity for strategies trading illiquid instruments—which claim would you trust?
Capacity and the Algorithm Selection Process
Integrating capacity analysis into algorithm selection requires a systematic approach throughout the evaluation process.
Stage 1: Initial Screening
Before detailed due diligence, screen for basic capacity plausibility:
- Does claimed capacity align with strategy type benchmarks?
- What is the liquidity profile of instruments traded?
- What is the current AUM, and how does it compare to claimed capacity?
- Has the strategy operated at scale comparable to your intended allocation?
Strategies that fail basic plausibility tests don't warrant deeper analysis regardless of performance claims.
Stage 2: Quantitative Capacity Analysis
For strategies passing initial screening, conduct rigorous capacity analysis:
Market Impact Modeling:
- Obtain position-level turnover data
- Map positions to liquidity metrics (ADV, spread, depth)
- Apply market impact model to estimate execution costs at various AUM levels
- Calculate net alpha after impact at target allocation size
Participation Rate Analysis:
- Calculate position sizes at target AUM
- Express positions as percentage of daily volume
- Flag positions exceeding 5-10% of ADV as capacity concerns
- Aggregate to strategy-level participation metric
Historical Verification:
- Request performance by AUM band
- Regress performance against AUM to estimate degradation rate
- Extrapolate to target allocation size
- Compare extrapolated performance to developer claims
Stage 3: Competitive Capacity Assessment
Even if a strategy has individual capacity for your allocation, aggregate industry flows matter:
- What similar strategies exist in the market?
- What is aggregate AUM chasing similar signals?
- Has signal alpha decayed over time (evidence of crowding)?
- Are there capacity-advantaged participants (better execution, faster speed)?
A strategy with $500 million individual capacity becomes effectively capacity-constrained if $5 billion of competitor capital trades similar signals.
Stage 4: Allocation Sizing
Even for strategies passing capacity diligence, appropriate allocation sizing matters:
Max Allocation = Min(Your Limit, 25% of Strategy Capacity, 50% of Live-Tested AUM)
Multiple constraints ensure buffer against capacity stress
This conservative approach ensures:
- You don't push the strategy beyond proven operational limits
- Room exists for other allocators without collective capacity breach
- Performance has been validated at comparable scale
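The sizing rule above is a three-way minimum; a short sketch with hypothetical figures:

```python
def max_allocation(your_limit, strategy_capacity, live_tested_aum):
    """Min of the three sizing constraints: your limit, 25% of capacity, 50% of live-tested AUM."""
    return min(your_limit, 0.25 * strategy_capacity, 0.50 * live_tested_aum)

# Hypothetical: $100M internal limit, $400M claimed capacity, $120M live-tested
print(max_allocation(100e6, 400e6, 120e6) / 1e6)  # 60.0
```

Here the live-tested constraint binds: even though the claimed capacity would permit $100 million, only $120 million has ever been traded live, capping the allocation at $60 million.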
The IP Model Capacity Advantage
The structure of algorithm acquisition significantly impacts capacity dynamics. The traditional fund model—where allocators invest in funds alongside other LPs—creates inherent capacity conflicts. The intellectual property (IP) model—where allocators purchase algorithms for exclusive operation—offers structural advantages.
The Fund Model's Capacity Problem
In traditional fund structures, capacity is shared among all investors:
- Collective impact: All LP capital trades simultaneously, amplifying market impact
- Incentive misalignment: Managers benefit from larger AUM (larger fees) even as performance degrades
- No exclusivity: Your capital competes with other LPs for the same alpha
- Opacity: Difficult to verify actual capacity utilization across all investors
A fund claiming $500 million capacity might already have $400 million from existing LPs. Your $100 million allocation pushes the fund to capacity limits, degrading returns for everyone—including you.
The IP Model's Capacity Solution
When you purchase algorithm intellectual property for exclusive operation, capacity dynamics fundamentally change:
- Exclusive capacity: The algorithm's entire capacity is yours alone
- Your execution: Market impact depends only on your trading, not collective fund flows
- Verified scale: You know exactly how much capital operates the strategy
- Alignment: No incentive to overstate capacity to attract additional investors
An algorithm with $100 million capacity, purchased for exclusive operation with your $50 million allocation, has 50% capacity utilization and room to scale. The same algorithm in a fund structure might already be at 95% capacity from existing investors.
Capacity Verification in IP Transactions
The IP model enables more rigorous capacity verification:
- Complete transparency: Access to full strategy logic enables independent capacity modeling
- Historical position data: Verify actual trading patterns and impact
- Execution analysis: Examine real fills versus theoretical prices
- No hidden AUM: No concern about undisclosed co-investors
Sophisticated algorithm sellers provide capacity analysis as part of the transaction, including market impact modeling, liquidity analysis, and conservative capacity estimates. They understand that honest capacity assessment protects both parties—the buyer gets realistic expectations, and the seller maintains reputation for delivering on commitments.
| Factor | Fund Model | IP Purchase Model |
|---|---|---|
| Capacity Ownership | Shared with other LPs | Exclusive to buyer |
| Market Impact | Collective impact from all investors | Only your trading |
| Verification | Limited transparency | Full strategy access |
| Incentive Alignment | Manager benefits from AUM growth | One-time sale; reputation matters |
| Scaling Flexibility | Subject to fund-level decisions | Scale at your discretion |
| Exit Capacity | Compete with other LPs to exit | Exit on your timeline |
Capacity Management Post-Allocation
Capacity evaluation doesn't end at allocation. Ongoing monitoring ensures strategies remain within sustainable operating limits.
Key Monitoring Metrics
Implementation Shortfall Trend: Track the gap between theoretical and actual execution over time. Rising shortfall indicates capacity stress.
Fill Rate Evolution: Monitor percentage of orders that execute fully at intended prices. Declining fill rates signal liquidity constraints.
Alpha Decay Rate: Compare recent performance to historical. Accelerating decay may indicate increased competition or capacity breach.
Market Share in Key Positions: Track your trading as percentage of daily volume in core positions. Rising market share increases impact risk.
Capacity Breach Response
When monitoring indicates capacity stress, several responses are available:
- Reduce allocation: Scale back capital to restore sustainable operating level
- Slow execution: Trade more patiently to reduce impact, accepting longer implementation time
- Modify universe: Shift to more liquid instruments, accepting potential signal dilution
- Execution optimization: Improve execution algorithms to reduce impact at current scale
- Strategy evolution: Modify strategy to accommodate larger scale (if possible without destroying alpha)
The Capacity Reserve Concept
Prudent operators maintain a capacity reserve—operating below maximum capacity to preserve flexibility and performance quality:
Reserve = (Maximum Capacity - Current AUM) / Maximum Capacity
Recommended reserve: 25-50% to maintain performance quality
Maintaining reserve provides:
- Buffer against temporary liquidity deterioration
- Room to increase allocation opportunistically
- Protection against estimation error in capacity models
- Margin of safety for execution quality
Case Studies in Capacity Failure
Case Study 1: The Overcrowded Factor
Situation: A quantitative equity fund developed a momentum strategy with strong backtested performance. Initial live trading at $50 million confirmed the backtest. The fund raised $800 million based on claimed $2 billion capacity.
What Happened: Performance degraded steadily as AUM grew. By the time the fund reached $600 million, Sharpe ratio had fallen from 1.8 to 0.7. Investigation revealed that the momentum signal had become crowded—aggregate industry AUM chasing similar signals exceeded $50 billion. The fund's individual capacity estimate ignored competitive dynamics.
Lesson: Individual strategy capacity is necessary but not sufficient. Aggregate capacity across all participants trading similar signals determines actual alpha availability.
Case Study 2: The Liquidity Illusion
Situation: A cryptocurrency fund claimed $200 million capacity based on backtests using historical volume data. The backtest showed consistent execution at quoted prices.
What Happened: Real trading at $50 million revealed that historical volume data significantly overstated actual tradeable liquidity. Much reported volume was wash trading or arbitrage between exchanges. Actual available liquidity was roughly 20% of reported figures. The fund's real capacity was closer to $40 million.
Lesson: Historical liquidity metrics may not reflect actual tradeable liquidity. Independent verification using real execution data is essential, particularly in markets with known data quality issues.
Case Study 3: The Scaling Failure
Situation: A statistical arbitrage strategy showed exceptional returns at $20 million. The developer claimed linear scalability to $500 million based on expanding the trading universe.
What Happened: As the strategy scaled and added new instruments, signal quality deteriorated. The original 200-stock universe had been carefully selected for signal efficacy. The expanded 1,000-stock universe included many instruments where the signal didn't work. Performance at $200 million was half that at $20 million—not due to market impact, but due to signal dilution from universe expansion.
Lesson: Capacity claims based on universe expansion assume the strategy works equally well across expanded instruments. This assumption frequently fails in practice.
Case Study 4: The Capacity-Aware Success
Situation: An algorithm provider offered a fixed income relative value strategy with stated capacity of $150 million. Despite strong interest, they declined allocations that would have pushed total deployment above this limit, turning away potential fees.
What Happened: The strategy maintained consistent performance over five years, delivering 12% annual returns with Sharpe ratio of 1.4. Competitors who accepted larger allocations saw dramatic performance degradation. The disciplined provider built a reputation for reliability and attracted premium valuations for subsequent algorithm offerings.
Lesson: Providers who enforce capacity discipline—even at the cost of short-term fees—create sustainable value and build trust with sophisticated allocators.
Advanced Capacity Concepts
Dynamic Capacity
Capacity is not static—it varies with market conditions:
- Volatility regimes: Higher volatility typically means higher market impact but also potentially higher alpha
- Liquidity cycles: Market liquidity varies with economic conditions and risk appetite
- Competitive dynamics: Competitor flows change over time, affecting available alpha
- Regulatory changes: New regulations can affect trading costs and liquidity
Sophisticated capacity management adjusts allocation based on current conditions rather than assuming fixed capacity limits.
Capacity Correlation Across Strategies
In multi-strategy portfolios, capacity constraints may be correlated:
- Strategies trading similar instruments share liquidity constraints
- Strategies using common factors face correlated crowding
- Market stress simultaneously reduces capacity across strategies
Portfolio-level capacity analysis must account for these correlations. Aggregate capacity may be less than the sum of individual strategy capacities.
Capacity Alpha
The relationship between capacity and alpha can itself be a source of edge:
- Capacity arbitrage: Trading strategies that larger players cannot execute due to impact constraints
- Capacity timing: Scaling into strategies when crowding is low, scaling out when crowding increases
- Capacity diversification: Building portfolios across strategies with uncorrelated capacity constraints
Small and mid-sized allocators have structural advantages in capacity-constrained strategies—they can access alpha pools that are too small for large institutions.
Framework Summary: Capacity Due Diligence Checklist
Apply this checklist when evaluating algorithm capacity claims:
Initial Assessment
- □ Does claimed capacity align with strategy type benchmarks?
- □ What is current AUM relative to claimed capacity?
- □ Has the strategy been live-traded at scale comparable to your allocation?
- □ What instruments are traded, and what is their liquidity profile?
Quantitative Analysis
- □ What market impact model was used in capacity estimation?
- □ What participation rate assumptions underlie the estimate?
- □ Has implementation shortfall been measured at various AUM levels?
- □ What is the relationship between historical AUM and performance?
Competitive Assessment
- □ What similar strategies exist in the market?
- □ What is aggregate AUM across competing strategies?
- □ Has signal alpha decayed over time?
- □ Are there capacity-advantaged competitors?
Red Flag Check
- □ Are capacity claims supported by live trading evidence?
- □ Do claims include sensitivity analysis and uncertainty ranges?
- □ Is claimed participation rate plausible (<10% of ADV)?
- □ Does provider enforce capacity limits or accept all capital?
Structural Assessment
- □ Is capacity exclusive to you or shared with others?
- □ Can you verify actual trading scale and impact?
- □ Are incentives aligned to protect capacity-constrained performance?
Conclusion: Capacity as the Ultimate Performance Predictor
In algorithm selection, capacity constraints deserve attention equal to—or greater than—performance metrics. A strategy's capacity determines whether backtested or historical returns can be achieved with your capital. Spectacular returns that cannot absorb your allocation are worthless; modest returns with genuine capacity create real wealth.
The most sophisticated allocators have internalized this reality. They treat capacity claims with healthy skepticism, demand verification through live trading evidence, model impact independently, and structure allocations to maintain meaningful capacity reserve. They prefer algorithms with honest, conservative capacity estimates over those claiming unlimited scalability. They understand that capacity discipline from algorithm providers signals integrity and alignment.
The IP acquisition model offers structural advantages for capacity-conscious allocators. Exclusive ownership of algorithm capacity, verified through full transparency, with no competing investors—these features transform capacity from a source of uncertainty into a known, manageable parameter. When evaluating algorithm acquisition, the capacity dimension alone often justifies the IP approach over traditional fund investment.
The framework presented in this analysis provides tools for rigorous capacity evaluation. Apply it systematically, maintain skepticism toward inflated claims, and structure allocations conservatively. The algorithms that will compound your wealth over decades are not those with the highest backtested returns but those whose capacity genuinely accommodates your capital while preserving meaningful alpha. Find those algorithms, size your allocations appropriately, and let compound returns do their work.
References
- Almgren, R. & Chriss, N. (2001). "Optimal Execution of Portfolio Transactions." Journal of Risk, 3(2), 5-39.
- Perold, A.F. (1988). "The Implementation Shortfall: Paper Versus Reality." Journal of Portfolio Management, 14(3), 4-9.
- Kissell, R. & Glantz, M. (2003). "Optimal Trading Strategies: Quantitative Approaches for Managing Market Impact and Trading Risk." AMACOM.
- Engle, R., Ferstenberg, R., & Russell, J. (2012). "Measuring and Modeling Execution Cost and Risk." Journal of Portfolio Management, 38(2), 14-28.
- Frazzini, A., Israel, R., & Moskowitz, T.J. (2018). "Trading Costs." Working Paper.
- McLean, R.D. & Pontiff, J. (2016). "Does Academic Research Destroy Stock Return Predictability?" Journal of Finance, 71(1), 5-32.
- Korajczyk, R.A. & Sadka, R. (2004). "Are Momentum Profits Robust to Trading Costs?" Journal of Finance, 59(3), 1039-1082.
- Novy-Marx, R. & Velikov, M. (2016). "A Taxonomy of Anomalies and Their Trading Costs." Review of Financial Studies, 29(1), 104-147.
- Lo, A.W. (2002). "The Statistics of Sharpe Ratios." Financial Analysts Journal, 58(4), 36-52.
- Grinold, R.C. & Kahn, R.N. (2000). "Active Portfolio Management." McGraw-Hill.
Additional Resources
- CFA Institute - Best Execution - Research on transaction costs and execution
- Replicating Anomalies Paper - Research on strategy capacity and crowding
- Breaking Alpha Algorithms - Capacity-verified algorithmic strategies
- Breaking Alpha Consulting - Capacity analysis and algorithm due diligence services