Detail view
- Cho, Yu-jin;
- Lee, Ah Hyeon;
- Kim, Byung Gyu;
- Platoš, Jan
Abstract
The Vision Transformer (ViT) has demonstrated remarkable performance in a wide range of computer vision tasks, such as image classification, object detection, and image generation. Unlike convolutional neural networks (CNNs), ViT benefits from a global receptive field, which enables more effective modeling of relationships between image patches. However, the lack of inductive biases makes ViT models difficult to train stably, especially on limited datasets. Without access to large-scale pre-trained weights, performance often degrades significantly. To address this issue, we propose a novel architecture called RMSF-ViT. It employs a progressive fusion strategy that incorporates fine-grained patch information beyond the fixed single patch size used in conventional ViT architectures. In addition, RMSF-ViT reduces the number of attention heads by half compared to vanilla ViT models. This design improves both performance and computational efficiency, as demonstrated on the CIFAR-10, CIFAR-100, Flowers, and Pets datasets.
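The abstract describes tokenizing an image at multiple patch granularities rather than the single fixed patch size of a vanilla ViT. The sketch below illustrates that general idea only; the specific patch sizes, the flatten-based "embedding", and the function names are illustrative assumptions, not the paper's actual fusion design.

```python
# Illustrative sketch of multi-scale patch tokenization (assumed design,
# not the exact RMSF-ViT fusion strategy from the paper).

def patch_tokens(image, patch_size):
    """Split a square image (list of lists) into flattened patch tokens."""
    h = len(image)
    tokens = []
    for r in range(0, h, patch_size):
        for c in range(0, h, patch_size):
            # Flatten each patch_size x patch_size block into one token.
            patch = [image[r + i][c + j]
                     for i in range(patch_size)
                     for j in range(patch_size)]
            tokens.append(patch)
    return tokens

def multiscale_tokens(image, patch_sizes=(4, 8)):
    """Collect token sequences at several patch sizes (assumed scales)."""
    return {p: patch_tokens(image, p) for p in patch_sizes}

# Usage: a 32x32 "image" yields 64 fine tokens (size 4) and 16 coarse
# tokens (size 8), which a fusion module could then combine.
img = [[r * 32 + c for c in range(32)] for r in range(32)]
toks = multiscale_tokens(img)
```

A real implementation would project each patch into a shared embedding dimension before fusing the two sequences; here the shapes alone show how the finer scale contributes more, smaller tokens.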
Keywords
- Title
- RMSF-ViT: Randomized Multi-scale Fusion Vision Transformer
- Authors
- Cho, Yu-jin; Lee, Ah Hyeon; Kim, Byung Gyu; Platoš, Jan
- Publication date
- 2025-10
- Type
- Conference paper
- Volume
- 2675
- Pages
- 125-137