Extending Transformer to Predict Both the Order and Occurrence Times of Elements in a Sequence
Citations: Web of Science 0 · Scopus 0

Abstract

Recently, sequence prediction techniques based on Transformers have become essential in various fields. However, Transformers have so far focused only on predicting the next elements in a sequence and do not predict their occurrence times. Therefore, in this paper, we propose an extension of the Transformer that predicts not only the next elements but also their occurrence times. For this purpose, we extend the Transformer in three ways: (1) we propose a new positional encoding method that reflects both the order and the occurrence time of each element in a sequence; (2) we extend the output layer of the Transformer to simultaneously predict the next element and its occurrence time; and (3) we refine the loss function to measure the difference between sequences considering both the order and the occurrence times of their elements. Through experiments on real datasets, we confirmed that the proposed model predicts the order and occurrence time of each element more accurately than the existing Transformer. © 2024 IEEE.
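As a rough illustration of the three extensions described in the abstract, the following is a minimal PyTorch-style sketch. The class and function names (TimeAwarePositionalEncoding, DualHead, combined_loss), the use of a learned linear projection of raw timestamps, and the alpha trade-off weight are all assumptions made for illustration; the paper's actual formulation may differ.

```python
import math
import torch
import torch.nn as nn

class TimeAwarePositionalEncoding(nn.Module):
    """Sinusoidal encoding of the position index plus a learned
    projection of each element's timestamp (hypothetical design)."""
    def __init__(self, d_model: int, max_len: int = 512):
        super().__init__()
        pe = torch.zeros(max_len, d_model)
        pos = torch.arange(max_len, dtype=torch.float).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2).float()
                        * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)
        self.time_proj = nn.Linear(1, d_model)  # embeds raw timestamps

    def forward(self, x, times):
        # x: (batch, seq_len, d_model); times: (batch, seq_len)
        return x + self.pe[: x.size(1)] + self.time_proj(times.unsqueeze(-1))

class DualHead(nn.Module):
    """Output layer predicting the next element (classification)
    and its occurrence time (regression) from one hidden state."""
    def __init__(self, d_model: int, vocab_size: int):
        super().__init__()
        self.elem_head = nn.Linear(d_model, vocab_size)
        self.time_head = nn.Linear(d_model, 1)

    def forward(self, h):
        return self.elem_head(h), self.time_head(h).squeeze(-1)

def combined_loss(elem_logits, time_pred, elem_true, time_true, alpha=0.5):
    """Weighted sum of cross-entropy over elements and MSE over
    occurrence times; alpha is a hypothetical trade-off weight."""
    ce = nn.functional.cross_entropy(elem_logits, elem_true)
    mse = nn.functional.mse_loss(time_pred, time_true)
    return alpha * ce + (1.0 - alpha) * mse
```

In this sketch the two prediction heads share the final hidden state, so the element and time losses are jointly backpropagated through the same encoder; how the paper actually balances the two objectives is not specified in the abstract.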

Keywords

Positional encoding; Sequence prediction; Timestamped sequences; Transformer
Title
Extending Transformer to Predict Both the Order and Occurrence Times of Elements in a Sequence
Authors
Ryu, Hyewon; Yu, Sara; Lee, Ki Yong
DOI
10.1109/BigComp60711.2024.00074
Publication Date
2024-02
Type
Proceedings Paper
Journal
Proceedings - 2024 IEEE International Conference on Big Data and Smart Computing, BigComp 2024
Pages
371–372