ACORT: A compact object relation transformer for parameter efficient image captioning

Tan, Jia Huei and Tan, Ying Hua and Chan, Chee Seng and Chuah, Joon Huang (2022) ACORT: A compact object relation transformer for parameter efficient image captioning. Neurocomputing, 482, pp. 60-72. ISSN 0925-2312. DOI: https://doi.org/10.1016/j.neucom.2022.01.081.

Full text not available from this repository.

Abstract

Recent research that applies Transformer-based architectures to image captioning has resulted in state-of-the-art image captioning performance, capitalising on the success of Transformers on natural language tasks. Unfortunately, though these models work well, one major flaw is their large model sizes. To this end, we present three parameter reduction methods for image captioning Transformers: Radix Encoding, cross-layer parameter sharing, and attention parameter sharing. By combining these methods, our proposed ACORT models have 3.7x to 21.6x fewer parameters than the baseline model without compromising test performance. Results on the MS-COCO dataset demonstrate that our ACORT models are competitive against baselines and SOTA approaches, with CIDEr scores ⩾ 126. Finally, we present qualitative results and ablation studies to further demonstrate the efficacy of the proposed changes. Code and pre-trained models are publicly available at https://github.com/jiahuei/sparse-image-captioning.
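For readers unfamiliar with the parameter sharing techniques named in the abstract, below is a minimal, hypothetical PyTorch sketch of cross-layer parameter sharing; it is not the authors' ACORT implementation (see the linked repository for that). The idea is that a single Transformer layer's weights are reused at every depth, so the layer stack contributes roughly one layer's worth of parameters regardless of how many times it is applied. The class name SharedLayerEncoder and all hyperparameters here are illustrative assumptions.

import torch
import torch.nn as nn

class SharedLayerEncoder(nn.Module):
    """Encoder that applies ONE set of layer weights at every depth."""

    def __init__(self, d_model=512, n_heads=8, n_layers=6):
        super().__init__()
        # A single Transformer layer; its parameters are shared across depth.
        self.layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.n_layers = n_layers

    def forward(self, x):
        for _ in range(self.n_layers):
            x = self.layer(x)  # same weights reused at every depth position
        return x

# Parameter count versus an unshared baseline of the same depth.
# nn.TransformerEncoder deep-copies the layer, so its weights are NOT shared.
shared = SharedLayerEncoder()
unshared = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
    num_layers=6,
)
n_shared = sum(p.numel() for p in shared.parameters())
n_unshared = sum(p.numel() for p in unshared.parameters())
print(f"shared: {n_shared:,}  unshared: {n_unshared:,}")  # roughly 6x fewer

The abstract's attention parameter sharing applies the same weight-tying idea at a finer granularity, within the attention sub-module rather than across whole layers; the exact tying scheme used by ACORT is described in the paper and repository.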

Item Type: Article
Funders: UNSPECIFIED
Uncontrolled Keywords: Image captioning; Deep network compression; Deep learning
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Divisions: Faculty of Computer Science & Information Technology > Department of Artificial Intelligence
Faculty of Engineering > Department of Electrical Engineering
Depositing User: Ms. Juhaida Abd Rahim
Date Deposited: 11 Aug 2022 00:45
Last Modified: 11 Aug 2022 00:45
URI: http://eprints.um.edu.my/id/eprint/32731
