Convolutional neural network outputting equal probabilities for all labels
I am currently training a CNN on MNIST, and as training goes on the output probabilities (softmax) converge to [0.1, 0.1, ..., 0.1]. The initial values aren't uniform, so I can't figure out whether I'm doing something stupid here. I'm only training for 15 steps, just to see how training progresses; even though that's a low number, I don't think it should result in uniform predictions.

```python
import numpy as np
import tensorflow as tf
import imageio
from sklearn.datasets import fetch_mldata
from sklearn.model_selection import train_test_split

# Getting data
mnist = fetch_mldata('MNIST original')

def one_hot
```
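(The snippet is cut off at `def one_hot`. Presumably it converts integer digit labels into one-hot vectors for the softmax output; a minimal NumPy sketch of such a helper, assuming that purpose and a default of 10 MNIST classes rather than the asker's actual implementation:)

```python
import numpy as np

def one_hot(labels, num_classes=10):
    # Turn an array of integer labels into a (n, num_classes) matrix
    # with a single 1 in each row at the label's index.
    encoded = np.zeros((labels.shape[0], num_classes))
    encoded[np.arange(labels.shape[0]), labels.astype(int)] = 1.0
    return encoded

# e.g. one_hot(np.array([0, 2]), num_classes=3)
# → [[1., 0., 0.],
#    [0., 0., 1.]]
```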