The field of time series forecasting is going through a very exciting period. In the last three years alone, we have seen many important contributions, such as N-BEATS, N-HiTS, PatchTST, and TimesNet.

At the same time, large language models (LLMs) have gained tremendous popularity with applications like ChatGPT, as they can adapt to a wide variety of tasks without further training.

This leads to the question: can foundation models exist for time series like they exist for natural language processing? Is it possible for a large model pre-trained on massive amounts of time series data to produce accurate predictions on unseen data?

With TimeGPT-1, Azul Garza and Max Mergenthaler-Canseco adapt the techniques and architecture behind LLMs to the field of forecasting, successfully building the first time series foundation model capable of zero-shot inference.
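To make "zero-shot inference" concrete, here is a minimal sketch of what a TimeGPT forecast call can look like through Nixtla's Python client. The package name (`nixtla`), the `NixtlaClient` class, the placeholder API key, and the toy data are assumptions based on Nixtla's publicly documented SDK, not code taken from this article:

```python
import pandas as pd
from nixtla import NixtlaClient

# Authenticate against Nixtla's hosted TimeGPT endpoint.
# The API key below is a placeholder; substitute your own.
client = NixtlaClient(api_key="YOUR_API_KEY")

# Any univariate series with a timestamp column and a target column works;
# here we build a toy monthly series purely for illustration.
df = pd.DataFrame({
    "ds": pd.date_range("2020-01-01", periods=36, freq="MS"),
    "y": [float(i) for i in range(36)],
})

# Zero-shot inference: the pre-trained model forecasts the next 12 steps
# without any training or fine-tuning on this particular series.
forecast_df = client.forecast(df=df, h=12, time_col="ds", target_col="y")
print(forecast_df.head())
```

Notice that there is no `fit` step anywhere: the model's weights were learned during pre-training on massive amounts of time series data, which is precisely what makes the inference zero-shot.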

In this article, we first explore the architecture behind TimeGPT and how the model was trained. Then, we apply it in a forecasting project to evaluate its performance against other…