Quick Sense Temporal Graph Transformer with Effective Representation Augmentation
Published in IJCNN, 2025
Temporal Graph Transformers (TGTs), which incorporate the Transformer architecture into temporal graph learning models, are powerful tools for analyzing and predicting temporal graph data. However, most existing TGT models focus on one-hop interactions because of sequence correlation and the computational complexity caused by neighborhood explosion. This limited focus on local subgraph structures restricts the representational power of current TGTs. Moreover, introducing higher-order structures exacerbates the efficiency issues of TGTs, while the time-consuming feature processing stage is often neglected, leading to low training efficiency. To address these challenges, we propose QSFormer (Quick Sense Temporal Graph TransFormer), designed to enhance the local sensing ability and accelerate the training of TGTs. QSFormer includes a sense augmentation strategy that incorporates high-order neighborhoods with position-differentiated encoding and extends common-neighbor encoding. Furthermore, QSFormer implements a quick training framework for TGTs that accelerates feature processing and model convergence through padded parallel sampling and adaptive mini-batch generation. Extensive experiments demonstrate that QSFormer consistently outperforms existing baselines, including TGTs such as DyGFormer and HOT. Notably, QSFormer trains more than four and seven times faster than these two TGTs, respectively. Our code is publicly available at https://anonymous.4open.science/r/QSFormer-828E.
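As a rough illustration of the padded parallel sampling idea mentioned above, the minimal sketch below samples up to a fixed number k of temporal neighbors per query and pads shorter histories to a uniform width, so that a whole batch can be processed as dense, equally shaped tensors. The function name, data layout, and sentinel convention are assumptions made for illustration only, not QSFormer's actual interface.

```python
# Minimal sketch of fixed-size, padded temporal-neighbor sampling, so a
# batch of (node, time) queries can be processed as dense parallel tensors.
# NOTE: sample_padded_neighbors and the adjacency layout are hypothetical
# illustrations, not the actual QSFormer API.
import numpy as np

def sample_padded_neighbors(adj, query_nodes, query_times, k):
    """For each (node, time) query, return up to k neighbors that interacted
    with the node strictly before `time`, padded to a fixed width k.

    adj: dict mapping node -> list of (neighbor, timestamp), sorted by time.
    Returns (neighbors, timestamps, mask), each of shape (len(queries), k).
    """
    n = len(query_nodes)
    neighbors = np.zeros((n, k), dtype=np.int64)    # 0 = padding sentinel
    timestamps = np.zeros((n, k), dtype=np.float64)
    mask = np.zeros((n, k), dtype=bool)             # True where entry is real
    for i, (u, t) in enumerate(zip(query_nodes, query_times)):
        # Keep only interactions before the query time (causality), then
        # take the k most recent ones.
        hist = [(v, tv) for v, tv in adj.get(u, []) if tv < t][-k:]
        for j, (v, tv) in enumerate(hist):
            neighbors[i, j] = v
            timestamps[i, j] = tv
            mask[i, j] = True
    return neighbors, timestamps, mask

# Toy usage: two query nodes sampled at different times.
adj = {1: [(2, 0.5), (3, 1.0), (4, 2.0)], 5: [(1, 0.7)]}
nbrs, ts, msk = sample_padded_neighbors(adj, [1, 5], [1.5, 2.0], k=3)
print(nbrs)  # padded neighbor ids; msk marks which slots hold real neighbors
```

Because every query yields tensors of identical shape, the padded outputs can be stacked and fed to the Transformer in a single batched pass, with the mask excluding padding slots from attention.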
Recommended citation: Ziqi Huang, Tongya Zheng, Rui Wang, Longjiao Zhang, Wenjie Huang, Xinyu Wang, "Quick Sense Temporal Graph Transformer with Effective Representation Augmentation." International Joint Conference on Neural Networks (IJCNN 2025), Rome, Italy, June 30 - July 5, 2025.