TensorFlow Attention Mechanism: Spatial and Channel-Wise Attention
A TensorFlow implementation of the SCA-CNN paper, saved here for reference. Paper: SCA-CNN: Spatial and Channel-Wise Attention in Convolutional Networks for Image Captioning.

```python
"""
Attention Model:
    WARNING: Use a BatchNorm layer, otherwise there is no accuracy gain.
    Apply SpatialAttention to lower layers and ChannelWiseAttention to
    higher layers.
    On Visual155, Accuracy at 1 improved from 75.39% to 75.72% (↑0.33%).
"""
import tensorflow as tf


def spatial_attention(feature_map, K=1024, weight_decay=0.00004,
                      scope="", reuse=None):
    """Add spatial attention to the model.

    Parameters
    ---------------
    @feature_map: Which visual feature map to use as the branch.
    @K: Map the `H*W` units to K units. Now unused.
```
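Since the TensorFlow function above is truncated, here is a minimal NumPy sketch of the two attention operations the snippet names. It assumes the standard SCA-CNN formulation: spatial attention softmax-normalizes a score per `H*W` position, channel-wise attention a score per channel, and each reweights the feature map accordingly. The simple channel/spatial means used as scores below are placeholders for the learned projections in the real model.

```python
import numpy as np

def _softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def spatial_attention_np(feature_map):
    """Reweight an (H, W, C) feature map over its H*W spatial positions.

    Each position gets a scalar score (here: mean over channels, a
    stand-in for the model's learned projection), softmax-normalized
    over all positions, then broadcast back onto the features.
    """
    H, W, C = feature_map.shape
    flat = feature_map.reshape(H * W, C)        # (H*W, C)
    alpha = _softmax(flat.mean(axis=1))         # (H*W,), sums to 1
    attended = flat * alpha[:, None]            # weight every position
    return attended.reshape(H, W, C), alpha.reshape(H, W)

def channel_wise_attention_np(feature_map):
    """Reweight an (H, W, C) feature map over its C channels.

    Per-channel scores come from global average pooling (again a
    placeholder for the learned projection), softmax-normalized
    over channels.
    """
    H, W, C = feature_map.shape
    flat = feature_map.reshape(H * W, C)        # (H*W, C)
    beta = _softmax(flat.mean(axis=0))          # (C,), sums to 1
    return (flat * beta[None, :]).reshape(H, W, C), beta
```

In the layered scheme the snippet describes, `spatial_attention_np` would sit on lower-layer feature maps (where spatial detail is rich) and `channel_wise_attention_np` on higher-layer maps (where channels act more like semantic detectors).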