Saturday, January 3, 2026

Quantifying Recency Bias in Investor Volatility Expectations

Investors and traders often suffer from behavioral biases; the study of these biases is the foundation of behavioral finance. Among them, recency bias is probably the most detrimental, yet it has rarely been studied in a comprehensive, quantitative manner.

Reference [1] addresses this gap by investigating recency bias in stocks with high idiosyncratic volatility (IVOL). The authors hypothesize that investors excessively extrapolate recent changes in volatility, particularly when high-IVOL stocks have become more volatile in recent periods relative to earlier ones, and they propose using changes in IVOL as a measure of recency bias.

The paper further develops a trading strategy to exploit this bias by buying low-IVOL stocks with declining volatility and short-selling high-IVOL stocks with increasing volatility. The authors pointed out,

We bring in the role of investors’ excess extrapolation associated with the recency bias as an explanation of the IVOL anomaly. We hypothesize that investors have higher tendency to excessively extrapolate past return volatilities of high IVOL stocks whose returns became more volatile in recent periods. The extrapolation bias accelerates investors’ preference for such stocks and further enhance the magnitude of overvaluation.

Accordingly, we form a recency-enhanced IVOL strategy to capture investors’ excess extrapolation to emphasize more on recent IVOL. We show that it generates significant and robust profitability. The other components of the standard IVOL strategy, which is referred to as the non-recency IVOL strategy, is mostly unprofitable. An implication to practitioners is that considering the recency effect is important when trading against idiosyncratic volatility. Our study also adds to the vast literature on the understanding of the IVOL anomaly. The implication to future studies examining the anomaly is that the role of recency biases should be considered when examining overvaluation of high IVOL stocks.

In short, the paper attributes the IVOL anomaly to investors’ recency bias, and by incorporating this recency effect, a recency-enhanced IVOL strategy generates significant profitability.
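To make the mechanics concrete, below is a minimal sketch of how such a signal might be formed. The Fama-French three-factor regression, the 21-day windows, and the decile cutoffs are our illustrative assumptions, not the paper's exact specification.

    import numpy as np
    import pandas as pd

    def idiosyncratic_vol(stock_ret: pd.Series, factors: pd.DataFrame) -> float:
        """IVOL: standard deviation of residuals from a factor regression."""
        X = np.column_stack([np.ones(len(factors)), factors.values])
        beta, *_ = np.linalg.lstsq(X, stock_ret.values, rcond=None)
        return (stock_ret.values - X @ beta).std(ddof=1)

    def recency_enhanced_signal(returns: pd.DataFrame, factors: pd.DataFrame) -> pd.Series:
        """Long low-IVOL stocks whose IVOL fell; short high-IVOL stocks whose IVOL rose."""
        ivol_recent = returns.iloc[-21:].apply(lambda r: idiosyncratic_vol(r, factors.iloc[-21:]))
        ivol_prior = returns.iloc[-42:-21].apply(lambda r: idiosyncratic_vol(r, factors.iloc[-42:-21]))
        d_ivol = ivol_recent - ivol_prior  # change in IVOL: the recency proxy
        signal = pd.Series(0.0, index=returns.columns)
        signal[(ivol_recent <= ivol_recent.quantile(0.1)) & (d_ivol < 0)] = 1.0   # buy
        signal[(ivol_recent >= ivol_recent.quantile(0.9)) & (d_ivol > 0)] = -1.0  # short
        return signal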

This is an important topic, with implications not only for stocks and volatilities but also for trading strategies themselves, as traders often abandon sound strategies due to recent poor performance. Further research in this area would be highly valuable.

Let us know what you think in the comments below or in the discussion forum.

References

[1] Wen-Chi Lo, Kuan-Cheng Ko, Recency biases and the idiosyncratic volatility puzzle, Finance Research Letters, Volume 91, March 2026, 109468


Friday, January 2, 2026

Forecasting Market Crashes with Machine Learning Techniques

Predicting market direction is challenging, and forecasting market crashes is even more difficult, yet this remains a growing area of research. We previously discussed market correction prediction, and Reference [1] continues this line of inquiry by examining how machine learning can be used to predict market crashes within the Adaptive Market Hypothesis framework.

The study considers three categories of factors:

  1. Internal factors, such as technical indicators designed to capture endogenous market dynamics, including momentum, trend strength, and money flow arising from investor behavior and adaptive learning;
  2. External factors, including macroeconomic and commodity variables that proxy for systematic, exogenous risks affecting fundamental valuations; and
  3. Volatility features that quantify market fear and uncertainty.
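As a rough illustration of how these three groups might be assembled from daily data, consider the sketch below; the specific indicators and window lengths are our assumptions, not necessarily those used in the thesis.

    import pandas as pd

    def build_features(px: pd.Series, macro: pd.DataFrame) -> pd.DataFrame:
        """Assemble the three feature groups from daily closing prices `px`;
        `macro` holds caller-supplied macroeconomic/commodity series."""
        feats = pd.DataFrame(index=px.index)
        # 1. Internal: momentum and trend-strength style indicators
        feats["mom_21d"] = px.pct_change(21)
        feats["trend_200d"] = px / px.rolling(200).mean() - 1.0
        # 2. External: macro and commodity variables
        feats = feats.join(macro)
        # 3. Volatility: rolling realized volatility as a fear/uncertainty proxy
        feats["vol_21d"] = px.pct_change().rolling(21).std() * 252 ** 0.5
        return feats.dropna()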

The authors evaluate the performance of three predictive models—logistic regression, random forest, and a long short-term memory (LSTM) network. They pointed out,

The findings of this thesis suggest that while market crashes remain inherently difficult to forecast with perfect accuracy, they are not entirely random events. Meaningful predictive signals do exist, but their detection requires a careful consideration of model choice and complexity. The primary conclusion is not that one model is universally superior, but that different models reveal different facets of predictability, presenting a practical trade-off for risk managers and investors.

The Logistic Regression model, with its high recall, serves as an excellent "early-warning system." Its strength lies in its sensitivity; it is highly effective at flagging periods of potential danger, making it suitable for risk monitoring applications where the cost of a missed event is catastrophic. Its primary drawback is the high rate of false positives, which would make it costly to use as a direct trading signal.

The LSTM network, conversely, represents a more refined and balanced predictor. By matching the high recall of the logistic model while offering improved precision, it provides a more reliable signal. This suggests that incorporating the temporal dimension of financial data is a key avenue for enhancing predictive power. The practical implication is that while linear relationships capture the brute force of market panic, sequence modeling is required to understand the more subtle, evolving patterns that precede it. The choice between these models is therefore a strategic one, contingent on the specific application and the user’s tolerance for different types of error.

In short, the study concludes that market crashes are difficult to forecast but not entirely random, and different models capture different aspects of predictability. Logistic regression functions well as a high-recall early warning tool, while LSTM models provide more balanced signals.
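The recall-versus-precision trade-off described above is straightforward to quantify once crash labels and model probabilities are in hand; a minimal sketch, where the 0.5 threshold is an illustrative choice:

    import numpy as np
    from sklearn.metrics import precision_score, recall_score

    def crash_alarm_quality(y_true, y_prob, threshold=0.5):
        """Precision/recall of a crash alarm: high recall suits early-warning
        monitoring; high precision matters if the alarm drives trades directly."""
        y_pred = (np.asarray(y_prob) >= threshold).astype(int)
        return {"precision": precision_score(y_true, y_pred, zero_division=0),
                "recall": recall_score(y_true, y_pred, zero_division=0)}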

Let us know what you think in the comments below or in the discussion forum.

References

[1] Michele Della Mura, Predicting Stock Market Crashes, A Comparative Analysis of Econometric and Machine Learning Models, Politecnico di Torino, 2025


Friday, December 26, 2025

Delta Hedging with Implied vs. Historical Volatility, Part 2

Hedging is an important topic in portfolio and risk management; however, relatively little research has been conducted in this area. Many questions remain open, such as when to hedge, how frequently to hedge, and which volatility measure should be used in hedging decisions.

We have previously discussed whether hedging should rely on historical volatility or implied volatility. Reference [1] extends this line of inquiry by comparing the performance of hedging strategies based on historical versus implied volatility using S&P500 index ETF options.

The authors pointed out,

We experimentally show that the degree to which IV and HV hedging are effective as hedge depends on the context of the market. For calmer periods, our result shows that using HV-based delta hedge is still effective, with lower tracking error and transaction cost, which implies that backward looking of HV is somewhat beneficial when the speed at which volatility dynamics change is slow. On the other hand, in times of high volatility and due to structural breaks the IV-based rules perform better as options capture forward looking information. The asymmetric role of regimes highlights the relevance of regime adaptive hedging systems for SPY/S&P 500 options…

Although our results go in line with most of the literature in the topic, they give indications which deserve a deeper analysis. For instance, we observe a tendency for the relative performance disadvantage of IV- and HV-based approaches to become larger in presence of volatility clustering, indicating that hybrid models which combine the robustness of HV and the foresight of IV might lead to even better hedging results. In addition, transaction costs (especially for high frequency rebalancing) can significantly change the relative net advantage of one strategy versus another.

In short, the article finds that implied volatility reacts faster and captures short-term risks more effectively but performs worse in stable markets, while historical volatility delivers lower tracking errors in calm conditions, suggesting that the choice—and potential combination—of IV and HV should depend on market regimes.
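A minimal sketch of the core comparison: compute the Black-Scholes delta with either an implied or a trailing historical volatility series, then accumulate the daily-rebalanced hedge P&L. Daily rebalancing and the function names are our illustrative assumptions.

    import numpy as np
    from scipy.stats import norm

    def bs_call_delta(S, K, tau, r, sigma):
        """Black-Scholes delta of a European call with time to expiry tau."""
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
        return norm.cdf(d1)

    def hedge_pnl(S, opt, K, T, r, sigmas):
        """P&L of delta-hedging a short call daily; `sigmas` is the vol series
        used for the delta (implied quotes or a trailing historical estimate)."""
        dt = T / (len(S) - 1)
        pnl = 0.0
        for i in range(len(S) - 1):
            delta = bs_call_delta(S[i], K, T - i * dt, r, sigmas[i])
            pnl += delta * (S[i + 1] - S[i]) - (opt[i + 1] - opt[i])  # stock leg minus short option leg
        return pnl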

We find the results valuable and the research direction worth pursuing. However, as with the previous article, the data sample size is relatively small, although the findings are intuitively consistent.

Let us know what you think in the comments below or in the discussion forum.

References

[1] Haocheng Yang, Hedging Effectiveness of Implied Volatility vs Historical Volatility, Proceedings of ICFTBA 2025 Symposium: Global Trends in Green Financial Innovation and Technology


Tuesday, December 23, 2025

Toward Rigorous Validation of Data-Driven Trading Strategies

With the rapid advancement in computing power, quantitative researchers can now develop trading strategies quickly, employing multiple variables and methodologies. These approaches extend beyond traditional time-series and statistical models to include machine learning and AI-based techniques.

However, such models often deliver impressive in-sample results but fail in live trading, largely due to overfitting. While researchers still seek to exploit increased computing power, the key challenge remains how to address this overfitting problem. One common solution is rigorous out-of-sample testing and validation, yet a widely accepted and robust validation framework has not been established.

Reference [1] proposes what the authors describe as a rigorous walk-forward validation framework. In this approach, trading systems are developed using machine learning techniques and then tested 34 times over a 10-year sample, with each test period independent and trained solely on past data.
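Structurally, the protocol amounts to a sequence of independent, non-overlapping test windows, each preceded by its own training set. A minimal sketch, with window lengths chosen for illustration rather than taken from the paper:

    import numpy as np

    def walk_forward_splits(n_obs, n_tests=34, test_len=None):
        """Yield (train_idx, test_idx) pairs: test windows are non-overlapping
        and each model is trained solely on data preceding its test window."""
        test_len = test_len or n_obs // (n_tests + 1)
        for k in range(n_tests):
            start = n_obs - (n_tests - k) * test_len
            yield np.arange(start), np.arange(start, start + test_len)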

The authors pointed out,

This paper develops and validates a hypothesis-driven trading framework addressing critical methodological deficiencies in quantitative trading research. Our primary contribution is methodological rather than empirical: we establish a rigorous, generalizable validation protocol that prevents lookahead bias, incorporates realistic transaction costs, maintains full interpretability, and extends naturally to any hypothesis generation approach including large language models.

Through 34 independent out-of-sample tests spanning 10 years, we demonstrate the framework using five illustrative hypothesis types, documenting modest but realistic performance (0.55% annualized, Sharpe ratio 0.33) with strong regime dependence and exceptional downside protection (maximum drawdown -2.76% versus -23.8% for SPY). Aggregate returns are not statistically significant (p-value 0.34), reflecting honest reporting rather than p-hacking—a critical contribution toward correcting publication bias in finance.

The key empirical finding is that market microstructure signals derived from daily data exhibit strong regime dependence, working during high-volatility periods (0.60% quarterly, 2020-2024) but failing in stable markets (-0.16%, 2015-2019). This reveals that daily OHLCV-based signals require elevated information arrival and trading activity to function effectively, with implications for both deployment strategies and future research design.

While the initiative is commendable and highlights the need for more research on system validation, several limitations remain. We observe the following,

  • First, the reported performance is rather modest.
  • Second, rather than employing traditional rolling or anchored walk-forward analysis, the authors perform repeated out-of-sample tests using independent, non-overlapping data periods. This is the main contribution of the paper.
  • Third, a critical unaddressed issue is that, although the full sample spans multiple market regimes, the number of intervals and the length of each data window are themselves arbitrary choices and should be treated as random variables. As a result, the reported trading performance is conditional on these design choices and may be materially affected by them, undermining the claimed rigor of the validation framework.

Let us know what you think in the comments below or in the discussion forum.

References

[1] Gagan Deep, Akash Deep, William Lamptey, Interpretable Hypothesis-Driven Trading: A Rigorous Walk-Forward Validation Framework for Market Microstructure Signals, arXiv:2512.12924


Wednesday, December 17, 2025

Intraday Elasticity Between VIX Futures and Volatility ETPs

VIX futures and ETPs are widely used instruments for both volatility speculation and hedging, making a clear understanding of their behavior essential. Several studies have examined the relationship between spot VIX, VIX futures, and volatility-linked ETNs.

Reference [1] contributes to this literature by analyzing the sensitivity of VIX ETPs to movements in VIX futures. Specifically, the authors investigate the intraday price dynamics of the SPVXSTR, along with three VIX ETNs (VXX, XIV, TVIX) and three ETFs (VIXY, SVXY, UVXY), all linked to that index. Rather than relying on standard OLS regression, the study employs quantile regression, which minimizes a weighted sum of absolute errors and allows for asymmetric penalties on over- and under-predictions.
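For reference, quantile regression replaces the squared-error loss with the asymmetric pinball loss. Below is a minimal sketch of decile-by-decile slope estimation with statsmodels; the variable names are illustrative.

    import numpy as np
    import statsmodels.api as sm

    # Quantile regression minimizes the pinball (check) loss
    #   rho_q(u) = u * (q - 1{u < 0}),
    # penalizing over- and under-predictions asymmetrically.
    def decile_sensitivities(etp_ret, fut_ret, deciles=np.arange(0.1, 1.0, 0.1)):
        """Slope of VIX futures (index) returns on an ETP's returns, by decile."""
        X = sm.add_constant(np.asarray(etp_ret))
        return {round(q, 1): sm.QuantReg(np.asarray(fut_ret), X).fit(q=q).params[1]
                for q in deciles}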

The authors pointed out,

Decile regressions highlight the sensitivity of VIX Futures prices to ETP prices and how these changes in different volatility environments. Results show that VIX futures, as proxied by SPVXSTR, are more responsive to VXX, than to TVIX and XIV except 3:45–4:15 p.m., where XIV is dominant. Results also show increasing sensitivity of SPVXSTR to XIV across the full day (panel A) in higher deciles, where higher returns on VIX futures may well drive higher hedging demands for such products such that hedging is brought-forward earlier in the day. Similarly, the results for the associated ETFs, which are less actively traded, are more ambiguous. Practically speaking, the higher elasticity during the trading day means that intraday trading conditions amplify the responsiveness of VIX futures to ETP price changes. Traders may seek to exploit these variations. Similarly, elasticity is heightened at the extreme ends of the distribution at close. VIX futures may overreact to ETP movements and this has implications for calibrating hedging strategies in stress scenarios.

In short, the results show that VIX futures (SPVXSTR) are generally more sensitive to VXX than to TVIX or XIV, with the exception of the late-afternoon window (3:45–4:15 p.m.). Intraday elasticity is elevated—especially near the close and in the tails—implying that VIX futures can overreact to ETP price changes, which creates potential trading opportunities and important considerations for hedging under stress.

Let us know what you think in the comments below or in the discussion forum.

References

[1] Michael O'Neill, Gulasekaran Rajaguru, Elasticity dynamics between VIX futures and ETPs: a quantile regression analysis of intraday and closing market behavior, Journal of Accounting Literature (2025) 47 (5): 694–701.


Saturday, December 13, 2025

Short-Term Stock Price Forecasting Using Geometric Brownian Motion

In these days of big data, machine learning, and AI, many researchers are showing growing interest in sophisticated models for stock price prediction, or in refining basic models of stock dynamics. Reference [1] takes the opposite approach. It uses a classical model for stock price dynamics, the Geometric Brownian Motion (GBM), and examines whether it can still be used to forecast stock prices. Specifically, the study estimates volatility for large-cap stocks in an emerging market using four different measures, then incorporates these estimates into the GBM to generate price forecasts.
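A minimal sketch of the forecasting step, reading the "simple" volatility measure as the sample standard deviation of daily log returns (our interpretation, for illustration):

    import numpy as np

    def gbm_forecast(prices, horizon=10, n_paths=10_000, seed=0):
        """Monte Carlo GBM forecast `horizon` trading days ahead, with drift and
        volatility estimated from daily log returns (simple volatility measure)."""
        rng = np.random.default_rng(seed)
        logret = np.diff(np.log(prices))
        # mean log return already equals (drift - sigma^2/2) under GBM
        mu, sigma = logret.mean(), logret.std(ddof=1)
        z = rng.standard_normal((n_paths, horizon))
        paths = prices[-1] * np.exp(np.cumsum(mu + sigma * z, axis=1))
        return paths[:, -1].mean()  # point forecast: mean terminal price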

The authors pointed out,

One effective method for forecasting short-term investment involves models like GBM. This study specifically applied GBM over a two-week period, focusing on the crucial aspect of volatility measurement. By examining four distinct volatility measurements, simple volatility (S), log volatility (L), high-low volatility (HL) and high-low-closed volatility (HLC), the findings indicate that simple volatility (S) yielded the closest forecast to actual stock prices, as evidenced in Table 3 and Figure 1.

Furthermore, the overall high accuracy of the forecasts generated by GBM, with most MSE, MAPE, and MAD values falling below 10% as shown in Table 4, confirms its potential as a valuable tool for short-term stock market forecasting. These results suggest that for investors and analysts focusing on short-term investment in the Malaysian stock market, utilizing GBM with a simple volatility measurement can provide a reasonably accurate basis for making timely trading decisions.

In short, and somewhat surprisingly, the simple GBM model combined with a basic volatility measure delivers the most accurate forecasts over short horizons of up to two weeks.

We note the following,

  1. The forecast accuracy is limited to the short term,
  2. Although four volatility measures are tested, the simplest performs best,
  3. The analysis is conducted in an emerging market, and
  4. The sample size is small.

Overall, this study runs counter to the current trend and suggests that simple models—both in volatility measurement and price dynamics—can still be effective. This is an interesting study and worth further examination.

Let us know what you think in the comments below or in the discussion forum.

References

[1] FS Fauzi, SM Sahrudin, NA Abdullah, SN Zainol Abidin, SM Md Zain, Forecasting stock market prices using Geometric Brownian Motion by applying the Optimal Volatility measurement, Mathematical Sciences and Informatics Journal (2025) Vol. 6, No. 2


Tuesday, December 9, 2025

Option Pricing with Quantum Mechanical Methods

It is well known that put options are often overpriced, especially in equities. The literature is filled with papers explaining this phenomenon. However, most research still relies on the Black-Scholes-Merton framework, where the underlying asset follows a Geometric Brownian Motion (GBM).

Reference [1] also addresses this question, but it departs from the usual framework by casting the problem into a model rooted in quantum mechanics. Essentially, the new approach proceeds as follows:

  1. Start with a general stochastic process and solve it by converting the Fokker–Planck (FP) equation into the Schrödinger equation.
  2. Introduce the delta potential and the Laplace distribution for the stock price.
  3. Derive a closed-form solution for European put options within the context of quantum mechanics.
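For concreteness, the Laplace density the model implies for returns, written here with location \mu and scale b; the heavier-than-Gaussian tails of this distribution are what bring model put prices close to the high prices observed in the market:

    f(x) = \frac{1}{2b} \exp\left( -\frac{|x - \mu|}{b} \right)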

The authors pointed out,

To resolve the well-known overpriced put puzzle, we propose an option pricing model inspired by quantum mechanics. Starting from an SDE of stock returns, we convert the FP equation into the Schrödinger equation. We then obtain the PDF of stock returns and a closed-form solution of European options. Our model indicates that S&P 500 index returns follow a Laplace distribution with power-law decay in the tail. We demonstrate that our QM outperforms GBM-based models in describing S&P 500 index returns and their corresponding put option prices. Our results indicate that high put option prices in the market are close to fairness and can be accurately modeled via quantum approaches.

In short, the paper proposes a quantum-mechanics–inspired option pricing model that converts the Fokker–Planck equation into the Schrödinger equation, yielding both the return distribution and a closed-form solution for European options. The model shows that S&P 500 returns follow a Laplace distribution with power-law tails and that quantum methods outperform GBM-based models in explaining return dynamics and put option prices.

This is an interesting formulation of option pricing theory. We note that the framework operates under the physical (real-world) measure rather than the risk-neutral measure. We believe it may have practical applications in option trading, particularly for traders who rehedge less frequently.

Let us know what you think in the comments below or in the discussion forum.

References

[1] Minhyuk Jeong, Biao Yang, Xingjia Zhang, Taeyoung Park & Kwangwon Ahn, A quantum model for the overpriced put puzzle, Financial Innovation (2025) 11:130


Saturday, December 6, 2025

Enhancing the Wheel Strategy with Bayesian Networks

The option wheel strategy is a systematic approach that combines selling cash-secured puts and covered calls. The process begins by selling puts on a stock the investor is willing to own; if assigned, the investor acquires the shares and then sells covered calls against the position to collect additional premium. The cycle repeats, though returns depend heavily on underlying volatility, assignment risk, and disciplined position management.
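The cycle can be expressed as a simple two-state loop; a minimal sketch in which strike selection and position sizing are left out, precisely the decisions the paper's Bayesian network is meant to inform:

    def run_wheel(events):
        """Walk the wheel through a sequence of (premium, exercised) expiries,
        alternating between cash-secured puts and covered calls."""
        state, premium_collected = "CASH", 0.0
        for premium, exercised in events:
            premium_collected += premium  # premium from the option just sold
            if exercised:                 # assignment or call-away flips the state
                state = "SHARES" if state == "CASH" else "CASH"
        return state, premium_collected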

This is another popular options strategy among investors and was widely promoted by trading educators. However, experienced investors recognize that it suffers from the same drawback as the covered call strategy.

Reference [1] revisits the wheel strategy, but with a twist: it applies an LLM-based Bayesian network on top of the wheel framework. Essentially, this Bayesian network is used to characterize market regimes and guide position sizing and strike selection. The authors pointed out,

This paper introduces a novel model-first hybrid AI architecture that overcomes key limitations of using LLMs directly for quantitative financial decision-making, specifically in options wheel strategy decisions. Instead of employing LLMs as decision-makers, we use them as intelligent model constructors. This approach yields strong and stable returns with enhanced downside protection, achieving a Sharpe ratio of 1.08 and a maximum drawdown of -8.2%. The strategy delivers 15.3% annualized returns over 18.75 years (2007–September 2025), including volatile periods such as 2020–2022. Additionally, the model provides full transparency through 27 decision factors per trade… Our comprehensive baseline comparisons demonstrate the effectiveness of the model-first architecture. Pure LLM approaches yield 8.7% returns with a 0.45 Sharpe ratio. Static Bayesian networks achieve 11.2% returns and a 0.67 Sharpe ratio. Rules-based systems produce 9.8% returns with a 0.52 Sharpe ratio. In contrast, our hybrid approach attains 15.3% returns and a 1.08 Sharpe ratio, while maintaining superior risk management.

In short, using the LLM-based Bayesian network, the performance of the wheel strategy improved significantly.

We find the results unusually impressive, and they warrant caution; nevertheless, the underlying design and architecture are worth examining.

Let us know what you think in the comments below or in the discussion forum.

References

[1] Xiaoting Kuang, Boken Lin, A Hybrid Architecture for Options Wheel Strategy Decisions: LLM-Generated Bayesian Networks for Transparent Trading, arXiv:2512.01123


Wednesday, December 3, 2025

Numerical Methods for Implied Volatility Surface Construction in Crypto Markets

The implied volatility surface is a fundamental building block in modern financial markets, as it underpins the pricing of both vanilla and exotic instruments and supports key risk-management functions such as hedging and scenario analysis. It has been modeled extensively in traditional finance; in crypto, however, few studies exist. Given the volatile nature of the crypto market, it is important to examine this area.

Reference [1] proposes a numerical method for reconstructing the implied volatility surface of major cryptocurrencies: Bitcoin, Ethereum, Solana, and Ripple. The main steps are as follows:

  1. Apply the Black-Scholes-Merton (BSM) equation,
  2. Convert it into a discretized framework using the Finite Difference Method, and
  3. Fit the resulting implied volatilities into bivariate polynomials.
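The final step can be sketched as a least-squares fit of a bivariate polynomial in strike and maturity; the degree-2 polynomial below is our illustrative choice, not necessarily the paper's exact specification.

    import numpy as np

    def fit_vol_surface(K, T, iv, deg=2):
        """Fit sigma(K, T) ~ sum_{i+j<=deg} a_ij * K**i * T**j by least squares;
        K, T, iv are flat arrays of strikes, maturities, and implied vols."""
        terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1) if i + j <= deg]
        A = np.column_stack([K**i * T**j for i, j in terms])
        coef, *_ = np.linalg.lstsq(A, iv, rcond=None)
        return lambda k, t: sum(c * k**i * t**j for c, (i, j) in zip(coef, terms))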

The authors pointed out,

In this study, we developed and implemented a method for reconstructing smooth local volatility surfaces for cryptocurrency options by extending the generalized BS model with a bivariate polynomial volatility function. The proposed approach combines a finite difference method with an optimization routine to calibrate volatility surfaces from observed option prices. The resulting local volatility functions are smooth, flexible, and differentiable with respect to both the underlying asset price and time, which enhances their analytical tractability and practical usability…

Through extensive computational tests using real option data from BTC, ETH, SOL, and XRP, we confirmed that the reconstructed local volatility surfaces successfully reproduce observed market prices across different maturities and strike ranges. In particular, the method captures the market phenomenon that volatility tends to increase as the underlying asset deviates from the spot level. Numerical comparisons showed that the model-generated prices closely matched the actual market prices, which highlights the effectiveness of the proposed calibration procedure. Our algorithm provides a tractable and robust methodology for approximating volatility surfaces in highly volatile crypto markets…

In short, the authors successfully develop a numerical procedure that accurately reconstructs the implied volatility surface for major cryptocurrencies.

This paper carries important practical relevance. However, we note the confusing terminology: the authors refer to their volatility surface as “local volatility,” which may be misleading, as it can be mistaken for the classical local volatility surface introduced by Derman, Kani, and Dupire. These two concepts are distinct.

Let us know what you think in the comments below or in the discussion forum.

References

[1] Yunjae Nam, Youngjin Hwang & Junseok Kim, Reconstructing Smooth Local Volatility Surfaces for Cryptocurrency Options, Int. J. Appl. Comput. Math (2025) 11:242


Friday, November 28, 2025

ChatGPT as a Personal Financial Advisor: Capabilities and Limitations

Artificial intelligence (AI) is advancing rapidly, and traders and investors are finding ways to leverage this progress to gain an additional edge. Reference [1] examines the effectiveness of AI—ChatGPT, in particular—in personal finance.

Unlike previous studies that focus on quantitative aspects, the paper evaluates AI performance in a qualitative way. Specifically, it prompts ChatGPT with 21 personal finance cases and assesses not only the accuracy of its suggestions but also their contextual appropriateness, emotional intelligence, and attention to detail. These dimensions are critical for real-world impact, especially when users make decisions based on AI-generated advice. The authors pointed out,

We see clear improvements in ChatGPT-4o compared to ChatGPT-3.5, such as more detailed suggestions and alternative solutions (out-of-box thinking). However, the newer model, in its current form, does not appear to be capable of replacing human financial advisors. This is because it tends to provide generalized advice, overlooks important aspects of the financial planning process, such as determining client goals and expectations, and makes mathematical errors in retirement problems. Moreover, ChatGPT sometimes lacks a moral or legal compass…

We find that the quality of financial advice improves (but not always) with prompt engineering. However, the issue is that through prompt engineering, ChatGPT appears to mirror the focus of the user’s attention. If a user is not thinking about taxes, ChatGPT may still provide useful financial advice, but it may omit any considerations of taxes…. However, we suggest that this tool be used with great caution as its omissions of important details, such as taxes and legal issues could create problems for users. Finally, we believe that the benefits of using ChatGPT outweigh its drawbacks in the personal finance domain.

In short, the paper finds that ChatGPT-4o shows meaningful improvements over earlier versions in handling personal finance cases, but it still cannot replace human advisors due to its generalized advice, omissions of key details, and occasional mathematical, legal, or moral oversights. This study concludes that ChatGPT is useful for initial guidance, yet must be used with caution, and that in personal finance, the benefits of using ChatGPT outweigh its drawbacks.

Let us know what you think in the comments below or in the discussion forum.

References

[1] Minh Tam Tammy Schlosky and Sterling Raskie, ChatGPT as a Financial Advisor: A Re-Examination, Journal of Risk and Financial Management, 18(12), 664.
