Fully Convolutional Network

Question


TensorFlow.js version

tfjs-node-gpu 0.2.1

Describe the problem or feature request

I'm trying to build a supervised fully convolutional network and am not able to generate appropriate outputs. The net structure is based on several existing FCN examples, specifically this one: http://deeplearning.net/tutorial/fcn_2D_segm.html

I've encoded the mask as a one-hot 4D boolean tensor ordered [batch, height, width, class], with only a single class. The input data is converted to a float32 tensor of shape [batch, height, width, 1] (no RGB channels) with values ranging from 0 to 1.

The data is here, and comes from the same tutorial linked above: https://drive.google.com/file/d/0B_60jvsCt1hhZWNfcW4wbHE5N3M/view
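For reference, this is roughly how the tensors get built (a simplified sketch; rawPixels, rawLabels, batchSize, height, and width are placeholders for my actual pipeline, and this._classes is the class count used below):

        // Grayscale input scaled to [0, 1]: shape [batch, height, width, 1].
        const xs = tf.tensor4d(rawPixels, [batchSize, height, width, 1], 'float32');
        // One-hot mask of shape [batch, height, width, numClasses], built from
        // flat per-pixel class IDs (tf.oneHot takes 1D int32 indices).
        const ids = tf.tensor1d(rawLabels, 'int32');
        const ys = tf.oneHot(ids, this._classes).reshape([batchSize, height, width, this._classes]);

The model itself is: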

        const input = tf.input({ shape: [this._dims[1], this._dims[2], this._dims[3]], name: 'Input', });

        const batchNorm_0 = tf.layers.batchNormalization().apply(input);
        // Begin A-Scan Net
        const fcn_1_0 = tf.layers.conv2d( { name: '', kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64,  } ).apply(input);
        const fcn_2 = tf.layers.maxPool2d( { kernelSize: [2, 2], strides: [2, 2] } ).apply(fcn_1_0);

        const fcn_3_0 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_2);
        const fcn_3_1 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_3_0);
        const fcn_3_2 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_3_1);
        const fcn_4 = tf.layers.maxPool2d( { kernelSize: [2, 2], strides: [2, 2] } ).apply(fcn_3_2);

        const fcn_5_0 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_4);
        const fcn_5_1 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_5_0);
        const fcn_5_2 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_5_1);
        const fcn_6 = tf.layers.maxPool2d( { kernelSize: [2, 2], strides: [2, 2] } ).apply(fcn_5_2);

        const fcn_7_0 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_6);
        const fcn_7_1 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_7_0);
        const fcn_7_2 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_7_1);
        const fcn_8 = tf.layers.maxPool2d( { kernelSize: [2, 2], strides: [2, 2] } ).apply(fcn_7_2);

        const fcn_9_0 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_8);
        const fcn_9_1 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_9_0);
        const fcn_9_2 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_9_1);
        const fcn_10 = tf.layers.maxPool2d( { kernelSize: [2, 2], strides: [2, 2] } ).apply(fcn_9_2);

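        // After five 2x max poolings, the feature map is 1/32 of the input resolution.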
        const fcn_11 = tf.layers.conv2d({ kernelSize: [1, 1], strides: [1, 1], activation: 'relu', padding: 'same', filters: 2048 }).apply(fcn_10);

        const fcn_12 = tf.layers.conv2d({ kernelSize: [1, 1], strides: [1, 1], activation: 'relu', padding: 'same', filters: this._classes }).apply(fcn_11);

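        // A single 32x transposed convolution upsamples back to the input resolution in one step.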
        const upsample_5 = tf.layers.conv2dTranspose( { kernelSize: [32, 32], strides: [32, 32], filters: this._classes, activation: 'relu', padding: 'same' } ).apply(fcn_12);
        const upsample_6 = tf.layers.conv2d( { kernelSize: [1, 1], strides: [1, 1], filters: this._classes, activation: 'softmax', padding: 'same' } ).apply(upsample_5);

        var model = tf.model( { name: 'AdvancedCNN', inputs: [input], outputs: [upsample_6] } );

The loss / metric / optimizer setup is:

        const LEARNING_RATE = .00001;
        const optimizer = tf.train.adam(LEARNING_RATE)
        model.compile({
            optimizer,
            loss: tf.losses.logLoss,
            metrics: tf.metrics.categoricalCrossentropy,
        });

The issue is that the network isn't learning and the output class is either all 0 or all 1, even after multiple epochs. I've tried with and without batch norm, and I've tried altering the learning rate. The data seems sound, so either I'm formatting the data wrong or there is an issue with the loss function, the label structure, etc.

Has anyone else built an FCN using TensorFlow.js?


Answer 1:


In convolutional neural networks used for classification, the last layer is a dense (fully connected) layer, and it is from that dense layer that the softmax activation is computed. Your network architecture is currently missing such a layer, which is why you can't get your classification right.

It is this last dense layer that performs the classification, using the different features learned by the convolutional layers.

The only thing to point out is that you might need a flatten layer before the dense layer, just so the dimensions match.
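For example (a rough sketch on top of your encoder code, not tested against your exact shapes):

        // Flatten the conv features so their rank matches what the dense layer expects.
        const flat = tf.layers.flatten().apply(fcn_10);
        // One unit per class; softmax turns the outputs into class probabilities.
        const prediction = tf.layers.dense({ units: this._classes, activation: 'softmax' }).apply(flat);
        const model = tf.model({ inputs: [input], outputs: [prediction] });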

Update: Using an upsampling layer as the last layer will likely cause your loss to decrease. I think the issue has to do with the transpose layer. This article explains what upsampling is.




Answer 2:


I was able to solve the issue. I swapped out the conv2dTranspose layer for pairs of upSampling2d and conv2d layers. The one-hot encoding of the mask is sufficient, as is tf.losses.softmaxCrossEntropy for the loss function (a compile sketch follows the network code below).

Finally, resizing my images down to 256x512 helped speed up training. The final net structure that worked (a super primitive network, so use it as you will) is:

        const fcn_1_0 = tf.layers.conv2d( { name: '', kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64, } ).apply(input);
        const fcn_1_1 = tf.layers.conv2d( { name: '', kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64,  } ).apply(fcn_1_0);
        const fcn_1_2 = tf.layers.conv2d( { name: '', kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64,  } ).apply(fcn_1_1);
        const fcn_2 = tf.layers.maxPool2d( { kernelSize: [2, 2], strides: [2, 2] } ).apply(fcn_1_2);

        const fcn_3_0 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_2);
        const fcn_3_1 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_3_0);
        const fcn_3_2 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_3_1);
        const fcn_4 = tf.layers.maxPool2d( { kernelSize: [2, 2], strides: [2, 2] } ).apply(fcn_3_2);

        const fcn_5_0 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_4);
        const fcn_5_1 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_5_0);
        const fcn_5_2 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_5_1);
        const fcn_6 = tf.layers.maxPool2d( { kernelSize: [2, 2], strides: [2, 2] } ).apply(fcn_5_2);

        const fcn_7_0 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_6);
        const fcn_7_1 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_7_0);
        const fcn_7_2 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_7_1);
        const fcn_8 = tf.layers.maxPool2d( { kernelSize: [2, 2], strides: [2, 2] } ).apply(fcn_7_2);

        const fcn_9_0 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_8);
        const fcn_9_1 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_9_0);
        const fcn_9_2 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_9_1);
        const fcn_10 = tf.layers.maxPool2d( { kernelSize: [2, 2], strides: [2, 2] } ).apply(fcn_9_2);

        // const fcn_11_0 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_10);
        // const fcn_11_1 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_11_0);
        // const fcn_11_2 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_11_1);
        // const fcn_12 = tf.layers.maxPool2d( { kernelSize: [2, 2], strides: [2, 2] } ).apply(fcn_11_2);

        //const fcn_13 = tf.layers.conv2d({ kernelSize: [7, 7], strides: [1, 1], activation: 'relu', padding: 'same', filters: 4096 }).apply(fcn_12);
        const fcn_13_0 = tf.layers.conv2d({ kernelSize: [1, 1], strides: [1, 1], activation: 'relu', padding: 'same', filters: 4096 }).apply(fcn_10);
        //const drop_0 = tf.layers.dropout( { rate: .5 } ).apply(fcn_13);

        //const fcn_15 = tf.layers.conv2d({ kernelSize: [1, 1], strides: [1, 1], activation: 'relu', padding: 'same', filters: this._classes }).apply(fcn_13_0);

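        // Decoder: five upSampling2d + conv2d pairs (2x each, 32x total) replace the single conv2dTranspose.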
        const upsample_1 = tf.layers.upSampling2d( { size: [2, 2], padding: 'same' } ).apply(fcn_13_0);
        const conv_upsample1 = tf.layers.conv2d( { kernelSize: 3, strides: 1, activation: 'relu', padding: 'same', filters: 64 }).apply(upsample_1);

        const upsample_2 = tf.layers.upSampling2d( { size: [2, 2], padding: 'same' } ).apply(conv_upsample1);
        const conv_upsample2 = tf.layers.conv2d( { kernelSize: 3, strides: 1, activation: 'relu', padding: 'same', filters: 64 }).apply(upsample_2);

        const upsample_3 = tf.layers.upSampling2d( { size: [2, 2], padding: 'same' } ).apply(conv_upsample2);
        const conv_upsample3 = tf.layers.conv2d( { kernelSize: 3, strides: 1, activation: 'relu', padding: 'same', filters: 64 }).apply(upsample_3);

        const upsample_4 = tf.layers.upSampling2d( { size: [2, 2], padding: 'same' } ).apply(conv_upsample3);
        const conv_upsample4 = tf.layers.conv2d( { kernelSize: 3, strides: 1, activation: 'relu', padding: 'same', filters: 64 }).apply(upsample_4);

        const upsample_5 = tf.layers.upSampling2d( { size: [2, 2], padding: 'same' } ).apply(conv_upsample4);
        const conv_upsample5 = tf.layers.conv2d( { kernelSize: 3, strides: 1, activation: 'relu', padding: 'same', filters: 64 }).apply(upsample_5);

        //const upsample_6 = tf.layers.upSampling2d( { size: [2, 2], padding: 'same' } ).apply(fcn_15);
        const conv_upsample = tf.layers.conv2dTranspose( { kernelSize: 1, strides: 1, activation: 'softmax', padding: 'same', filters: this._classes }).apply(conv_upsample5);
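The compile step with the swapped loss looks roughly like this (a sketch; the optimizer and learning rate are carried over from the question, so adjust as needed):

        const model = tf.model({ name: 'AdvancedCNN', inputs: [input], outputs: [conv_upsample] });
        model.compile({
            optimizer: tf.train.adam(.00001),
            loss: tf.losses.softmaxCrossEntropy,
            metrics: [tf.metrics.categoricalCrossentropy],
        });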


Source: https://stackoverflow.com/questions/54600956/fully-convolutional-network
