Abstract
Because of the non-linearity inherent in energy commodity prices, traditional mono-scale smoothing methodologies cannot accommodate their unique properties. From this viewpoint, we propose an extended mode decomposition method for time-frequency analysis that can adapt to various non-stationary signals and thereby enhance forecasting performance in the era of big data. To this end, we employ three variants of mode decomposition-based extreme learning machines, namely: (i) the Complete Ensemble Empirical Mode Decomposition with Adaptive Noise-based ELM model (CEEMDAN-ELM), (ii) the Ensemble Empirical Mode Decomposition-based ELM model (EEMD-ELM), and (iii) the Empirical Mode Decomposition-based ELM model (EMD-ELM). These hybrids, which cut across soft computing and artificial intelligence, analyze multi-commodity time series data by decomposing each series into seven independent intrinsic modes and one residual with varying frequencies, yielding an informative characterization of price volatility. Our findings show that the model-specific forecast accuracy measures exhibit different dynamics across the two scenarios, namely the pre-COVID and COVID periods. However, introducing a benchmark, the autoregressive integrated moving average (ARIMA) model, slightly changes these dynamics: ARIMA outperforms our proposed models in the Japan gas and US gas markets. To assess the superiority of our models, we apply the model confidence set (MCS) and the Kolmogorov-Smirnov Predictive Ability (KSPA) tests, with more weight given to the former in a multi-commodity framework. These tests reveal that in the pre-COVID era, CEEMDAN-ELM shows persistence and superiority in accurately forecasting crude oil, Japan gas, and US gas. This paradigm changed during the COVID era, where CEEMDAN-ELM favored the Japan gas, US gas, and coal markets, with different rankings under the MCS evaluation.
Overall, our numerical experiments indicate that all decomposition-based extreme learning machines are superior to the benchmark model.