My Master's Thesis
Hardware Acceleration of Neural Network for Time Series Forecasting using FPGA
2025-10-24
Overview
My biggest project to date. The thesis topic was "Hardware Acceleration of Neural Network for Time Series Forecasting using FPGA and Vitis AI". I successfully defended it (5.0!) in October.
The goal was to train neural network models (LSTM and TCN) to forecast photovoltaic and wind power production in Poland, then to deploy those models on three different hardware platforms: CPU, GPU, and FPGA, and compare the trade-offs in accuracy, latency, and energy consumption. Lastly, the objective was to accelerate the models on the Kria KV260 FPGA platform using the Vitis AI development environment.
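For a sense of what such a model looks like in code, here is a minimal LSTM forecaster sketch in PyTorch. The layer sizes, number of input features, and forecast horizon are illustrative assumptions, not the exact thesis configuration.

```python
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    """Minimal LSTM model: maps a window of past measurements to a forecast horizon."""
    def __init__(self, n_features: int = 4, hidden_size: int = 64,
                 num_layers: int = 2, horizon: int = 96, dropout: float = 0.2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers,
                            batch_first=True, dropout=dropout)
        self.head = nn.Linear(hidden_size, horizon)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window_length, n_features)
        out, _ = self.lstm(x)
        # use the hidden state of the last time step to predict the next `horizon` values
        return self.head(out[:, -1, :])

# Example: forecast the next 24 h (96 x 15-min steps) from a 7-day input window
model = LSTMForecaster(n_features=4, horizon=96)
window = torch.randn(8, 7 * 96, 4)   # batch of 8 hypothetical input windows
prediction = model(window)           # shape: (8, 96)
```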
Two datasets were used. The first came from the Polish power system's daily operational reports on PV and wind production. The second was the CAMS Solar Radiation Time-Series dataset, whose key parameter was Global Horizontal Irradiance (GHI), i.e. the total solar radiation incident on a horizontal surface.
Both datasets cover the period from June 14, 2024 to February 14, 2025, with measurements recorded at roughly 15-minute intervals.
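To show how such 15-minute series become supervised training samples, here is a minimal sliding-window sketch with pandas and NumPy. The series name, window length, and horizon are assumptions for illustration, and the random values merely stand in for the real data.

```python
import numpy as np
import pandas as pd

def make_windows(series: pd.Series, window: int, horizon: int):
    """Slice a regularly sampled series into (input window, forecast target) pairs."""
    values = series.to_numpy(dtype=np.float32)
    X, y = [], []
    for start in range(len(values) - window - horizon + 1):
        X.append(values[start:start + window])
        y.append(values[start + window:start + window + horizon])
    return np.stack(X), np.stack(y)

# Hypothetical example: a 15-minute PV production series indexed by timestamp
idx = pd.date_range("2024-06-14", "2025-02-14", freq="15min")
pv = pd.Series(np.random.rand(len(idx)), index=idx, name="pv_production_mw")

# 7 days of history (7 * 96 steps) -> forecast the next 24 h (96 steps)
X, y = make_windows(pv, window=7 * 96, horizon=96)
print(X.shape, y.shape)
```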
Results
I successfully trained and implemented the neural networks for time series forecasting, and clearly demonstrated the differences in performance across the three platforms.
Some conclusions: FPGA-based acceleration is a particularly promising direction, and combining AI solutions with FPGAs can be a real step toward intelligent energy systems.
TCNs achieved competitive predictive accuracy while offering clear advantages from a hardware perspective, thanks to their parallelism and efficient utilization of FPGA resources.
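To illustrate why TCNs are hardware-friendly, here is a simplified dilated causal convolution block in PyTorch, in the spirit of a TCN residual block. The channel count and dilation schedule are assumed, so this is a sketch of the idea rather than the thesis architecture; the point is that, unlike an LSTM, every time step of the convolution can be computed in parallel, which maps naturally onto parallel hardware.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConvBlock(nn.Module):
    """One dilated causal convolution block: all time steps are computed in parallel,
    unlike an RNN, which must process the sequence step by step."""
    def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # left padding keeps the convolution causal
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        out = F.pad(x, (self.pad, 0))            # pad only on the left (the past)
        out = self.act(self.conv(out))
        return out + x                           # residual connection

# Stacking blocks with growing dilation widens the receptive field exponentially
tcn = nn.Sequential(*[CausalConvBlock(32, dilation=2 ** i) for i in range(4)])
x = torch.randn(8, 32, 7 * 96)
print(tcn(x).shape)   # (8, 32, 672): same length, receptive field of 31 time steps
```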
What I Learned
- Broadened my knowledge of recurrent and convolutional networks (RNNs and CNNs), as well as implementing them with Python ML libraries
- Gained a better understanding of FPGA boards and of working with Vitis AI
- Learned the differences in efficiency and latency between CPU, GPU, and FPGA
- Explored FPGA deployment tools and techniques: quantization, pruning, and model partitioning (see the sketch after this list)
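For context, this is roughly the shape of the post-training quantization flow in Vitis AI's PyTorch quantizer (vai_q_pytorch). The module and function names below are recalled from its API and may differ between Vitis AI releases, so treat this as an assumption-laden outline rather than a verified recipe; the model and calibration data are stand-ins.

```python
import torch
import torch.nn as nn
# vai_q_pytorch quantizer bundled with Vitis AI; names assumed from its Python API
from pytorch_nndct.apis import torch_quantizer

# Stand-in model and calibration data (the real flow uses the trained forecaster
# plus a held-out calibration set)
model = nn.Sequential(nn.Conv1d(4, 32, 3, padding=1), nn.ReLU(), nn.Conv1d(32, 1, 1)).eval()
dummy_input = torch.randn(1, 4, 7 * 96)
calibration_batches = [torch.randn(8, 4, 7 * 96) for _ in range(10)]

# 1) Calibration: run representative data through the quantized graph to collect statistics
quantizer = torch_quantizer("calib", model, (dummy_input,), output_dir="quantize_result")
quant_model = quantizer.quant_model
with torch.no_grad():
    for batch in calibration_batches:
        quant_model(batch)
quantizer.export_quant_config()

# 2) Test / export: re-create the quantizer in "test" mode and dump the xmodel,
#    which is then compiled for the KV260's DPU with the Vitis AI compiler
quantizer = torch_quantizer("test", model, (dummy_input,), output_dir="quantize_result")
with torch.no_grad():
    quantizer.quant_model(dummy_input)
quantizer.export_xmodel(output_dir="quantize_result", deploy_check=True)
```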
Technical Challenges
- Finding the most effective configuration, i.e. the choice of layers, number of epochs, regularization techniques (Dropout, Batch Normalization, Early Stopping), and the training data split (see the sketch after this list)
- Vitis AI itself, as its documentation is somewhat scattered and the toolchain has a steep learning curve
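Two concrete pieces of that search were keeping the data split chronological (no shuffling across time) and stopping training before overfitting sets in. The sketch below shows both ideas; the split ratios and patience value are illustrative assumptions, not the thesis settings.

```python
import numpy as np

def chronological_split(X: np.ndarray, y: np.ndarray, train=0.7, val=0.15):
    """Split windowed samples in time order (no shuffling) into train/val/test sets."""
    n = len(X)
    i_train, i_val = int(n * train), int(n * (train + val))
    return (X[:i_train], y[:i_train]), (X[i_train:i_val], y[i_train:i_val]), (X[i_val:], y[i_val:])

class EarlyStopping:
    """Stop training once the validation loss has not improved for `patience` epochs."""
    def __init__(self, patience: int = 10):
        self.patience, self.best, self.bad_epochs = patience, float("inf"), 0

    def step(self, val_loss: float) -> bool:
        if val_loss < self.best:
            self.best, self.bad_epochs = val_loss, 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience   # True -> stop training

# Usage inside a training loop:
# stopper = EarlyStopping(patience=10)
# for epoch in range(max_epochs):
#     ...train one epoch, compute val_loss...
#     if stopper.step(val_loss):
#         break
```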
Future Improvements
- Transformer-based forecasting
- Incorporating additional exogenous variables
- A custom FPGA accelerator architecture built with tools like Vitis High-Level Synthesis (HLS), which should offer better efficiency