Normalizing to [0,1] vs [-1,1]

走了就别回头了 2021-02-06 02:54

I've been going through a few tutorials on using neural networks for keypoint detection. I've noticed that for the inputs (images) it's very common to divide by 255 (normalizing them to [0,1]), but some examples normalize to [-1,1] instead. Is one preferable to the other?
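
For concreteness, a minimal NumPy sketch of the two scalings being asked about (the array name and size are illustrative, not from the original question):

    import numpy as np

    # Dummy 8-bit image standing in for one training input.
    img = np.random.randint(0, 256, size=(96, 96), dtype=np.uint8)

    # Scale to [0, 1]: divide by the maximum 8-bit value.
    x01 = img.astype(np.float32) / 255.0

    # Scale to [-1, 1]: shift the [0, 1] range so it is centred on zero.
    x11 = x01 * 2.0 - 1.0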

2 Answers
  •  野的像风
    2021-02-06 03:51

    I think the most common image normalization for neural networks in general is to subtract the dataset mean and divide by the dataset standard deviation:

    X = (X - mean_dataset) / std_dataset
    
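    A rough NumPy sketch of that dataset-level standardization (the array shape and names are assumptions for illustration):

        import numpy as np

        # Dummy stand-in for the training images, shape (n_samples, height, width).
        X = np.random.randint(0, 256, size=(100, 96, 96)).astype(np.float32)

        # Statistics are computed once over the whole training set
        # and reused when normalizing validation/test images.
        mean_dataset = X.mean()
        std_dataset = X.std()

        X_norm = (X - mean_dataset) / std_dataset  # roughly zero mean, unit variance
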

    I think keypoint detection problems should not be too different.

    It might be interesting to compare the two and see the difference in performance. My guess is that subtracting the mean and dividing by the standard deviation (a roughly [-1,1], zero-centred input) will converge more quickly than a [0,1] normalization, because the biases the model has to learn will be smaller and therefore take less time to reach if they are initialised at 0.
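
    If you want to test that guess empirically, a rough comparison harness might look like the sketch below. Keras is only an assumed framework here, and the data are random placeholders, so the printed numbers are meaningless; on a real keypoint dataset you would compare the loss curves of the two runs.

        import numpy as np
        from tensorflow import keras

        def make_model():
            # Tiny regression model: flattened image -> 15 (x, y) keypoints.
            return keras.Sequential([
                keras.layers.Input(shape=(96, 96)),
                keras.layers.Flatten(),
                keras.layers.Dense(128, activation="relu"),
                keras.layers.Dense(30),
            ])

        # Placeholder images and keypoint targets.
        X = np.random.randint(0, 256, size=(256, 96, 96)).astype(np.float32)
        y = np.random.uniform(-1, 1, size=(256, 30)).astype(np.float32)

        # Train the same architecture once per input scaling and compare losses.
        for name, X_in in [("[0,1]", X / 255.0), ("[-1,1]", X / 127.5 - 1.0)]:
            model = make_model()
            model.compile(optimizer="adam", loss="mse")
            hist = model.fit(X_in, y, epochs=5, verbose=0)
            print(name, "final training loss:", hist.history["loss"][-1])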
