Detailed Information


Contrastive learning for unsupervised image-to-image translation

Authors
Lee, Hanbit; Seol, Jinseok; Lee, Sang-goo; Park, Jaehui; Shim, Junho
Issue Date
Jan-2024
Publisher
Elsevier Ltd
Keywords
Contrastive learning; Generative adversarial networks; Image-to-image translation; Self-supervised learning; Style transfer
Citation
Applied Soft Computing, v.151
Journal Title
Applied Soft Computing
Volume
151
URI
https://scholarworks.sookmyung.ac.kr/handle/2020.sw.sookmyung/159740
DOI
10.1016/j.asoc.2023.111170
ISSN
1568-4946
1872-9681
Abstract
Image-to-image translation (I2I) aims to learn a mapping function that transforms images into different styles or domains while preserving their key structures. Typically, I2I models require manually defined image domains as a training set to learn the visual differences among the domains and achieve the ability to translate images across them. However, constructing such multi-domain datasets at scale requires expensive data collection and annotation. Moreover, if the target domain changes or is expanded, a new dataset must be collected and the model retrained. To address these challenges, this article presents a novel unsupervised I2I method that does not require manually defined image domains. The proposed method automatically learns the visual similarity between individual samples and leverages the learned similarity function to transfer a specific style or appearance across images. Therefore, the developed method does not rely on cost-intensive manual domain labels or unstable clustering results, leading to improved translation accuracy at minimal cost. For quantitative evaluation, we implemented state-of-the-art I2I models and performed image transformation on the same input images using the baselines and our method. Image quality was then assessed using two quantitative metrics: Fréchet inception distance (FID) and translation accuracy. The proposed method exhibited significant improvements in image quality and translation accuracy compared with the latest unsupervised I2I methods. Specifically, the developed technique achieved a 25% and 19% improvement over the best-performing unsupervised baseline in terms of FID and translation accuracy, respectively. Furthermore, this approach demonstrated performance nearly comparable to that of supervised learning-based methods trained on manually collected and constructed domains. © 2023 Elsevier B.V.
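The core idea described in the abstract — learning a similarity function over individual samples via contrastive learning — is commonly realized with an InfoNCE-style loss. The sketch below is a generic, minimal NumPy illustration of that loss, not the paper's exact formulation; the embedding dimensions, temperature value, and function name are assumptions for demonstration only.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.07):
    """Generic InfoNCE contrastive loss (illustrative sketch).

    Pulls each anchor embedding toward its matching positive while
    pushing it away from every other sample in the batch, which is how
    a per-sample visual similarity function can be learned without
    manually defined domain labels.
    """
    # L2-normalize so dot products are cosine similarities.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    # (N, N) similarity matrix; row i pairs anchor i with all positives.
    logits = a @ p.T / temperature
    # Log-softmax over each row; the diagonal holds the true pairs.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

In practice the anchors and positives would be encoder outputs for two augmented views of the same image; a low loss indicates the encoder maps matching views close together, yielding the similarity function that guides style transfer.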
Appears in
Collections
College of Engineering > Division of Software > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher


Shim, Junho
College of Engineering (Division of Software)
