
An Image Description Method Based on Generative Adversarial Networks
XUE Zi-Yu1, GUO Pei-Yu1, ZHU Xiao-Bin2, ZHANG Nai-Guang1
1(Information Technology Institute, Academy of Broadcasting Science, National Radio and Television Administration, Beijing 100866, China)
2(School of Computer and Information Engineering, Beijing Technology and Business University, Beijing 100048, China)
Corresponding authors: GUO Pei-Yu, E-mail: guopeiyu@abs.ac.cn; ZHU Xiao-Bin, E-mail: zhuxiaobin@btbu.edu.cn
Abstract: In recent years, deep learning has attracted increasing attention in the field of image description. Existing deep-model approaches generally extract features with a convolutional neural network and splice the features into a sentence with a recurrent neural network. However, when the image is complex, feature extraction becomes inaccurate, and the fixed pattern of the sentence generation model leaves some sentences incoherent. To address this, an image description method that combines a multi-channel feature extraction model with a generative adversarial network framework, CACNN-GAN, is proposed. The method adds a channel-wise attention mechanism to the convolutional layers to extract features on each channel, compares them against approximate features from the COCO image dataset, and selects the top-ranked image features as the input of the generative adversarial network. Through the game between the generator and the discriminator, a sentence generator with varied syntax, fluent sentences, and a rich vocabulary is trained. Experimental results on real datasets show that CACNN-GAN can effectively produce semantic descriptions of images and achieves higher accuracy than other mainstream algorithms.
Key words: image description; generative adversarial network; channel-wise attention model; convolutional neural network
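To make the channel-wise feature extraction step concrete, the following is a minimal, illustrative sketch (PyTorch assumed) of an SE-style channel gating block, one common way to realize channel-wise attention on a convolutional feature map. The module name, reduction ratio, and tensor sizes are assumptions for illustration; the paper's exact formulation may differ.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Re-weights each channel of a conv feature map (SE-style gating, illustrative only)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze each channel to one statistic
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)   # per-channel weights in (0, 1)
        return x * w                                            # channel-wise re-weighted features

# Example: attach the block to the output of one convolutional layer.
feats = torch.randn(2, 512, 14, 14)        # hypothetical CNN feature maps (B, C, H, W)
attended = ChannelAttention(512)(feats)    # same shape, channels emphasized or suppressed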
Citation format (in Chinese): 薛子育,郭沛宇,祝晓斌,张乃光.一种基于生成式对抗网络的图像描述方法.软件学报,2018,29(Suppl.(2)):3043. http://www.jos.org.cn/1000-9825/18015.htm
Citation format (in English): Xue ZY, Guo PY, Zhu XB, Zhang NG. Image description method based on generative adversarial networks. Ruan Jian Xue Bao/Journal of Software, 2018,29(Suppl.(2)):3043 (in Chinese). http://www.jos.org.cn/1000-9825/18015.htm
Image Description Method Based on Generative Adversarial Networks
XUE Zi-Yu1, GUO Pei-Yu1, ZHU Xiao-Bin2, ZHANG Nai-Guang1
1(Information Technology Institute, Academy of Broadcasting Science, National Radio and Television Administration, Beijing 100866, China)
2(School of Computer and Information Engineering, Beijing Technology and Business University, Beijing 100048, China)
Abstract: In recent years, deep learning has gained more and more attention in image description. Existing deep learning methods use CNNs to extract features and RNNs to combine the features into a sentence. Nevertheless, when dealing with complex images, the feature extraction is inaccurate, and the fixed mode of the sentence generation model leads to incoherent sentences. To solve this problem, this study proposes a method combining a channel-wise attention model and GANs, named CACNN-GAN. The channel-wise attention mechanism is added to each conv-layer to extract features, which are compared with the COCO dataset, and the top-ranked features are selected for sentence generation. The sentences are produced by GANs through the game process between the generator and the discriminator. In this way, a sentence generator with varied syntax, smooth sentences, and a rich vocabulary is obtained. Experiments on real datasets illustrate that CACNN-GAN can effectively describe images and achieves higher accuracy than the state of the art.
Key words: image description; generative adversarial networks; channel-wise attention model; convolutional neural network
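As a rough illustration of the generator/discriminator game described above, the sketch below shows a simplified, SeqGAN-style adversarial training step for a caption generator, with the discriminator's score used as a REINFORCE reward. All class names, token conventions, and hyper-parameters here are assumptions for the example and do not reproduce the CACNN-GAN implementation.

import torch
import torch.nn as nn

vocab_size, hidden, max_len = 1000, 256, 16   # toy sizes (assumed)

class SentenceGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def sample(self, img_feat):                       # img_feat: (B, hidden) attended CNN feature
        h = img_feat.unsqueeze(0)                     # image feature as the initial hidden state
        tok = torch.zeros(img_feat.size(0), 1, dtype=torch.long)   # assumed <start> id 0
        words, logps = [], []
        for _ in range(max_len):
            o, h = self.rnn(self.embed(tok), h)
            dist = torch.distributions.Categorical(logits=self.out(o[:, -1]))
            nxt = dist.sample()                       # (B,)
            logps.append(dist.log_prob(nxt))
            tok = nxt.unsqueeze(1)
            words.append(tok)
        return torch.cat(words, 1), torch.stack(logps, 1)   # tokens (B, T), log-probs (B, T)

class SentenceDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, tokens):                        # (B, T) -> probability the caption is "real"
        _, h = self.rnn(self.embed(tokens))
        return torch.sigmoid(self.score(h[-1])).squeeze(1)

G, D = SentenceGenerator(), SentenceDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCELoss()

img_feat = torch.randn(4, hidden)                     # stand-in for attended image features
real = torch.randint(1, vocab_size, (4, max_len))     # stand-in for ground-truth captions

# Discriminator step: separate human captions from generated ones.
fake, _ = G.sample(img_feat)
d_loss = bce(D(real), torch.ones(4)) + bce(D(fake), torch.zeros(4))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: REINFORCE, using the discriminator's score as the reward.
fake, logps = G.sample(img_feat)
reward = D(fake).detach().unsqueeze(1)                # (B, 1), broadcast over time steps
g_loss = -(logps * reward).mean()
opt_g.zero_grad(); g_loss.backward(); opt_g.step()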
With the rapid development of technology, media data such as images and videos have appeared on the Internet in large quantities, becoming the main medium of information dissemination and showing an explosive growth trend. Unlabeled and mislabeled images are scattered across the network and cannot be retrieved or utilized, which wastes a large amount of resources. Using machine
Foundation item: Basal Research Fund of Academy of Broadcasting Science, National Radio and Television Administration (130016018000123)
Received 2018-04-16; Accepted 2018-10-24