This code is the official PyTorch implementation of our AAAI'26 paper: APN: Rethinking Irregular Time Series Forecasting: A Simple yet Effective Baseline.
If you find this project helpful, please don't forget to give it a ⭐ Star to show your support. Thank you!
🚩 News (2025.12) APN has been accepted by AAAI 2026.
APN (Adaptive Patching Network) introduces a novel Adaptive Patching paradigm to rethink irregular multivariate time series forecasting. Specifically, it designs a Time-Aware Patch Aggregation (TAPA) module that learns dynamically adjustable patch boundaries and employs a time-aware weighted averaging strategy, transforming irregular sequences into high-quality, regularized representations in a channel-independent manner. Equipped with a simple query module and a shallow MLP, APN effectively integrates historical information while maintaining high efficiency.
The comparisons between Fixed Patching and our Adaptive Patching: (a) Fixed Patching vs. (b) Adaptive Patching.
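The core idea of time-aware weighted aggregation can be illustrated with a small schematic sketch. Everything here is our illustrative assumption, not the actual APN implementation: fixed patch centers with a softmax-style time-distance weighting stand in for APN's learned, dynamically adjustable boundaries, and the function name and `tau` parameter are hypothetical:

```python
import numpy as np

def time_aware_patch_aggregate(t, x, centers, tau=1.0):
    """Schematic stand-in for time-aware patch aggregation.

    Soft-assigns irregularly sampled observations (times t, values x)
    to patches, weighting each observation by its temporal distance
    to a patch center, then normalizing per patch.
    """
    # distance of each observation to each patch center: shape (P, N)
    d = np.abs(centers[:, None] - t[None, :])
    w = np.exp(-d / tau)                 # closer-in-time observations weigh more
    w /= w.sum(axis=1, keepdims=True)    # normalize weights within each patch
    return w @ x                         # (P,) regularized patch representations

# Irregularly sampled univariate series (channel-independent view)
t = np.array([0.0, 0.3, 1.1, 1.9, 2.5])
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
centers = np.array([0.5, 1.5, 2.5])
print(time_aware_patch_aggregate(t, x, centers))
```

However the boundaries are obtained, the output is a fixed-length sequence of patch representations, which is what lets a simple query module and shallow MLP operate on originally irregular inputs.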
> [!IMPORTANT]
> This project is fully tested under Python 3.8. It is recommended that you set the Python version to 3.8.

Given a Python environment (note: this project is fully tested under Python 3.8 and PyTorch 2.6.0+cu124), install the dependencies with the following command:
```shell
pip install -r requirements.txt
```

Our model is evaluated on four widely used irregular time series datasets: PhysioNet, MIMIC, HumanActivity, and USHCN. The data preparation process differs slightly depending on each dataset's access restrictions.
For HumanActivity, PhysioNet ('12), and USHCN, you generally do not need to prepare the data manually. Our code allows for automatic downloading and preprocessing upon the first run.
- HumanActivity: The script will automatically download and process the data. The processed files will be stored in `./storage/datasets/HumanActivity`.
- PhysioNet & USHCN: These datasets are managed via the `tsdm` library. They will be automatically downloaded and cached in your home directory:

  ```shell
  ~/.tsdm/datasets/  # Processed data
  ~/.tsdm/rawdata/   # Raw data
  ```
Due to privacy regulations, the MIMIC dataset requires credentialed access. Please follow the steps below to prepare it manually:
- Request Access: Obtain the raw data from PhysioNet MIMIC. You do not need to extract the `.csv.gz` files.
- Preprocessing: We adopt the standard preprocessing pipeline from gru_ode_bayes.
  - Clone the gru_ode_bayes repository.
  - Follow their instructions to generate the `complete_tensor.csv` file.
- File Placement: Move the generated `complete_tensor.csv` to the specific path expected by our dataloader (create the folders if they don't exist):

  ```shell
  mkdir -p ~/.tsdm/rawdata/MIMIC_III_DeBrouwer2019/
  mv /path/to/your/complete_tensor.csv ~/.tsdm/rawdata/MIMIC_III_DeBrouwer2019/
  ```

Once the file is in place, our code will handle the final formatting (generating `.parquet` files) automatically during the first training session.
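As a quick sanity check before training, a small helper like the following can confirm the file landed where the dataloader looks. The function name is ours (hypothetical, not part of the APN codebase); the path mirrors the commands above:

```python
from pathlib import Path

# Hypothetical helper (not part of the APN codebase): builds the raw-data
# path that the placement commands above move complete_tensor.csv into.
def mimic_raw_path() -> Path:
    return (Path.home() / ".tsdm" / "rawdata"
            / "MIMIC_III_DeBrouwer2019" / "complete_tensor.csv")

if __name__ == "__main__":
    p = mimic_raw_path()
    print(f"{p}: {'found' if p.exists() else 'missing'}")
```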
- To see the model structure of APN, click here.
- We provide all the experiment scripts for APN and other baselines under the folder `./scripts`. For example, you can reproduce the experiment results on the PhysioNet dataset with the following script:

  ```shell
  sh ./scripts/APN/P12.sh
  ```

Extensive experiments on 4 real-world datasets (PhysioNet, MIMIC, HumanActivity, USHCN) demonstrate that APN outperforms existing state-of-the-art (SOTA) methods such as GraFITi and tPatchGNN in both MSE and MAE metrics.
Comparison of computational efficiency on the USHCN dataset. APN exhibits significant advantages in Peak GPU Memory, Parameters, Training Time, and Inference Time.
Results of the parameter sensitivity analysis on the number of patches.
If you find this repo useful, please cite our paper:

```bibtex
@inproceedings{liu2026apn,
  title     = {Rethinking Irregular Time Series Forecasting: A Simple yet Effective Baseline},
  author    = {Xvyuan Liu and Xiangfei Qiu and Xingjian Wu and Zhengyu Li and Chenjuan Guo and Jilin Hu and Bin Yang},
  booktitle = {AAAI},
  year      = {2026}
}
```

This work was partially supported by the National Natural Science Foundation of China (No. 62472174) and the Fundamental Research Funds for the Central Universities.
If you have any questions or suggestions, feel free to contact:
- Xvyuan Liu ([email protected])
- Xiangfei Qiu ([email protected])
- Xingjian Wu ([email protected])
Or open a GitHub Issue to describe your problem.