Publication Library

LSTM-ARIMA as a Hybrid Approach in Algorithmic Investment Strategies

Description: This study focuses on building an algorithmic investment strategy employing a hybrid approach that combines LSTM and ARIMA models, referred to as LSTM-ARIMA. This unique algorithm uses LSTM to produce the final predictions but boosts the results of this RNN by adding the residuals obtained from ARIMA predictions among its other inputs. The algorithm is tested across three equity indices (S&P 500, FTSE 100, and CAC 40) using daily frequency data spanning from January 2000 to August 2023. The testing architecture is based on a walk-forward procedure, applied both in the hyperparameter tuning phase, which uses Random Search, and in backtesting the algorithms. The selection of the optimal model is determined by adequately chosen performance metrics focused on risk-adjusted return measures. We considered two strategies for each algorithm, Long-Only and Long-Short, in order to represent two groups of investors with different investment policy restrictions. For each strategy and equity index, we compute the performance metrics and visualize the equity curve to identify the best strategy with the highest modified information ratio (IR**). The findings conclude that the LSTM-ARIMA algorithm outperforms all the other algorithms across all the equity indices, which confirms the strong potential of hybrid ML-TS (machine learning and time series) models in the search for optimal algorithmic investment strategies.
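
A minimal sketch of the hybrid idea described above, assuming daily closing prices in a NumPy array `prices`; the ARIMA order, window length, and network sizes are illustrative placeholders rather than the paper's tuned values, and the ARIMA fit here uses the full sample for brevity (in the paper this is done in a walk-forward fashion).

```python
import numpy as np
import torch
import torch.nn as nn
from statsmodels.tsa.arima.model import ARIMA


def build_features(prices, order=(1, 0, 1), window=20):
    """Sliding windows of [return, ARIMA residual] pairs and next-day return targets."""
    returns = np.diff(np.log(prices))
    arima_fit = ARIMA(returns, order=order).fit()
    resid = np.asarray(arima_fit.resid)            # ARIMA residuals fed to the LSTM as an extra input
    feats = np.column_stack([returns, resid])      # shape (T, 2)
    X, y = [], []
    for t in range(window, len(feats) - 1):
        X.append(feats[t - window:t])
        y.append(returns[t + 1])                   # next-day return as the target
    return (torch.tensor(np.array(X), dtype=torch.float32),
            torch.tensor(np.array(y), dtype=torch.float32))


class LSTMARIMA(nn.Module):
    def __init__(self, n_features=2, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                          # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)   # one-step-ahead return forecast
```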

Created At: 14 December 2024

Updated At: 14 December 2024

Statistical arbitrage in multi-pair trading strategy based on graph clustering algorithms in US equities market

Description: The study seeks to develop an effective strategy based on a novel framework for statistical arbitrage built on graph clustering algorithms. An amalgamation of quantitative and machine learning methods, including the Kelly criterion and an ensemble of machine learning classifiers, is used to improve risk-adjusted returns and increase immunity to transaction costs over existing approaches. The study seeks to provide an integrated approach to optimal signal detection and risk management. As part of this approach, innovative ways of optimizing take-profit and stop-loss functions for daily frequency trading strategies are proposed and tested. All of the tested approaches outperformed appropriate benchmarks, and the best combinations of techniques and parameters demonstrated significantly better performance metrics than the relevant benchmarks. The results were obtained under the assumption of realistic transaction costs but are sensitive to changes in some key parameters.
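
As an illustrative sketch only, not the paper's exact pipeline, the two building blocks named above could look as follows: assets are grouped via community detection on a correlation graph, and position size is set with the classic binary-outcome Kelly fraction. The `returns` DataFrame, the 0.5 correlation threshold, and the simplified Kelly form are assumptions made for the example.

```python
import networkx as nx
import pandas as pd


def correlation_clusters(returns: pd.DataFrame, threshold: float = 0.5):
    """Group tickers by community detection on a thresholded correlation graph."""
    corr = returns.corr()
    G = nx.Graph()
    G.add_nodes_from(corr.columns)
    for i, a in enumerate(corr.columns):
        for b in corr.columns[i + 1:]:
            if corr.loc[a, b] > threshold:          # edge = candidate co-moving pair
                G.add_edge(a, b, weight=corr.loc[a, b])
    return list(nx.algorithms.community.greedy_modularity_communities(G))


def kelly_fraction(win_prob: float, win_loss_ratio: float) -> float:
    """Classic Kelly bet size f* = p - (1 - p) / b for a binary trade outcome."""
    return win_prob - (1.0 - win_prob) / win_loss_ratio
```

Each returned community is a set of tickers from which pair candidates can be drawn; the Kelly output is the fraction of capital to allocate to a signal, given an estimated win probability and win/loss ratio.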

Created At: 14 December 2024

Updated At: 14 December 2024

Supervised Autoencoder MLP for Financial Time Series Forecasting

Description: This paper investigates the enhancement of financial time series forecasting with the use of neural networks through supervised autoencoders, aiming to improve investment strategy performance. It specifically examines the impact of noise augmentation and triple barrier labeling on risk-adjusted returns, using the Sharpe and Information Ratios. The study focuses on the S&P 500 index, EUR/USD, and BTC/USD as the traded assets from January 1, 2010, to April 30, 2022. Findings indicate that supervised autoencoders, with balanced noise augmentation and bottleneck size, significantly boost strategy effectiveness. However, excessive noise and large bottleneck sizes can impair performance, highlighting the importance of precise parameter tuning. This paper also presents a derivation of a novel optimization metric that can be used with triple barrier labeling. The results of this study have substantial policy implications, suggesting that financial institutions and regulators could leverage the techniques presented to enhance market stability and investor protection, while also encouraging more informed and strategic investment approaches in various financial sectors.
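
A minimal sketch of the supervised-autoencoder idea in the spirit of this abstract: one encoder whose bottleneck feeds both a reconstruction decoder and a classification head (e.g., for triple-barrier labels), with Gaussian noise augmentation on the inputs. Layer sizes, the noise level, and the loss weighting are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn


class SupervisedAutoencoder(nn.Module):
    def __init__(self, n_features: int, bottleneck: int = 8, n_classes: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, bottleneck), nn.ReLU(),
        )
        self.decoder = nn.Sequential(               # reconstruction branch
            nn.Linear(bottleneck, 64), nn.ReLU(),
            nn.Linear(64, n_features),
        )
        self.classifier = nn.Sequential(            # supervised branch (e.g., triple-barrier classes)
            nn.Linear(bottleneck, 32), nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x, noise_std: float = 0.05):
        if self.training and noise_std > 0:
            x = x + noise_std * torch.randn_like(x)  # noise augmentation during training only
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)


# Training would typically minimize a weighted sum of MSE(reconstruction, x)
# and cross-entropy(logits, labels); the weighting is a tuning choice.
```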

Created At: 14 December 2024

Updated At: 14 December 2024

Hedging Properties of Algorithmic Investment Strategies using Long Short-Term Memory and Time Series models for Equity Indices

Description: This paper proposes a novel approach to hedging portfolios of risky assets when financial markets are affected by financial turmoil. We introduce a completely novel approach to diversification, carried out not at the level of single assets but at the level of ensemble algorithmic investment strategies (AIS) built on the prices of these assets. We employ four types of diverse theoretical models (LSTM, i.e., Long Short-Term Memory; ARIMA-GARCH, i.e., Autoregressive Integrated Moving Average with Generalized Autoregressive Conditional Heteroskedasticity; momentum; and contrarian) to generate price forecasts, which are then used to produce investment signals in single and complex AIS. In this way, we are able to verify the diversification potential of different types of investment strategies built on various assets (energy commodities, precious metals, cryptocurrencies, or soft commodities) in hedging an ensemble AIS built for equity indices (the S&P 500 index). Empirical data used in this study cover the period between 2004 and 2022. Our main conclusion is that LSTM-based strategies outperform the other models and that the best diversifier for the AIS built for the S&P 500 index is the AIS built for Bitcoin. Finally, we test the LSTM model on higher-frequency (1-hour) data and conclude that it outperforms the results obtained using daily data.
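
A hedged sketch of the signal layer described above: each model's forecast is mapped to a long/short (or long-only) position, and an ensemble AIS is formed as an equal-weight combination of the single-strategy returns. The function names and the equal weighting are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np


def forecast_to_signal(forecast_returns, long_only=False):
    """Map forecasts to positions: +1 long, -1 short (0 instead of -1 if long-only)."""
    signal = np.sign(forecast_returns)
    return np.clip(signal, 0, 1) if long_only else signal


def ensemble_returns(strategy_returns):
    """Equal-weight ensemble of the daily returns of several single-model AIS."""
    return np.mean(np.vstack(strategy_returns), axis=0)
```

Here `strategy_returns` would hold, for example, the daily return series of the LSTM, ARIMA-GARCH, momentum, and contrarian strategies for a given asset, and the ensemble series can then be paired with the equity-index AIS to assess its hedging value.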

Created At: 14 December 2024

Updated At: 14 December 2024

Efficient Nested Estimation of CoVaR: A Decoupled Approach

Description: This paper addresses the estimation of the systemic risk measure known as CoVaR, which quantifies the risk of a financial portfolio conditional on another portfolio being at risk. We identify two principal challenges: conditioning on a zero-probability event and the repricing of portfolios. To tackle these issues, we propose a decoupled approach utilizing smoothing techniques and develop a model-independent theoretical framework grounded in a functional perspective. We demonstrate that the rate of convergence of the decoupled estimator can achieve approximately O_P(Γ^(-1/2)), where Γ represents the computational budget. Additionally, we establish the smoothness of the portfolio loss functions, highlighting its crucial role in enhancing sample efficiency. Our numerical results confirm the effectiveness of the decoupled estimators and provide practical insights for the selection of appropriate smoothing techniques.
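
For context, a standard (Adrian-Brunnermeier style) definition of CoVaR that the "zero-probability event" remark refers to is sketched below; the notation is illustrative and may differ from the paper's.

```latex
% Illustrative statement of CoVaR: the beta-quantile of portfolio loss Y given
% that portfolio loss X sits exactly at its own alpha-level VaR, an event of
% probability zero for continuous X, which is why smoothing is needed in the
% nested estimator.
\[
  \operatorname{CoVaR}_{\alpha,\beta}(Y \mid X)
  \;=\; \inf\bigl\{\, y \in \mathbb{R} \;:\;
  \Pr\bigl( Y \le y \,\bigm|\, X = \operatorname{VaR}_{\alpha}(X) \bigr) \ge \beta \,\bigr\}.
\]
```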

Created At: 14 December 2024

Updated At: 14 December 2024
