Question
What I want to do is a simple pixel-wise classification or regression task, so I have an input image and a ground truth. I want to do an easy segmentation task with a circle and a rectangle, and train the net to predict where the circle and where the rectangle is. That means the ground-truth image has value 1 at all locations where the circle is and value 2 at all locations where the rectangle is. My images and ground-truth images are provided as .png files.
I think I can then do either a regression or a classification task depending on my loss layer. I have been using the fully convolutional AlexNet from fcn alexnet.
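Such an image/ground-truth pair can be sketched with plain NumPy (a toy example; the shape positions and sizes are made up for illustration):

```python
import numpy as np

h, w = 256, 256
yy, xx = np.mgrid[0:h, 0:w]

# Label image: 0 = background, 1 = circle, 2 = rectangle.
label = np.zeros((h, w), dtype=np.uint8)
label[(yy - 80) ** 2 + (xx - 80) ** 2 <= 30 ** 2] = 1  # filled circle, class 1
label[150:210, 150:220] = 2                            # filled rectangle, class 2

# Input image: the same shapes drawn in white on black.
image = (label > 0).astype(np.uint8) * 255
```

Both arrays could then be written out as .png files (e.g. with `cv2.imwrite`) to serve as the network input and ground truth.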
classification:
layer {
  name: "upscore"
  type: "Deconvolution"
  bottom: "score_fr"
  top: "upscore"
  param {
    lr_mult: 0
  }
  convolution_param {
    num_output: 3 ## <<---- 0 = background, 1 = circle, 2 = rectangle
    bias_term: false
    kernel_size: 63
    stride: 32
  }
}
layer {
  name: "score"
  type: "Crop"
  bottom: "upscore"
  bottom: "data"
  top: "score"
  crop_param {
    axis: 2
    offset: 18
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss" ## <<----
  bottom: "score"
  bottom: "ground_truth"
  top: "loss"
  loss_param {
    ignore_label: 0
  }
}
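Note that SoftmaxWithLoss expects the ground-truth blob to hold integer class indices in a single channel (N x 1 x H x W), not a one-hot encoding. A small sanity check for a loaded label array might look like this (a sketch; the helper name and the toy 4x4 label map are made up):

```python
import numpy as np

def check_label_image(label, num_classes=3):
    """Sanity-check a ground-truth array for SoftmaxWithLoss:
    one channel, integer class indices in [0, num_classes)."""
    label = np.asarray(label)
    assert label.ndim == 2, "expected H x W, one channel per pixel"
    values = np.unique(label)
    assert values.min() >= 0 and values.max() < num_classes, \
        "labels must be class indices in [0, num_classes)"
    return values

# Toy 4x4 label map: background (0), circle (1), rectangle (2).
toy = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 0, 0],
                [2, 2, 0, 0]])
print(check_label_image(toy))
```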
regression:
layer {
  name: "upscore"
  type: "Deconvolution"
  bottom: "score_fr"
  top: "upscore"
  param {
    lr_mult: 0
  }
  convolution_param {
    num_output: 1 ## <<---- 1 x height x width
    bias_term: false
    kernel_size: 63
    stride: 32
  }
}
layer {
  name: "score"
  type: "Crop"
  bottom: "upscore"
  bottom: "data"
  top: "score"
  crop_param {
    axis: 2
    offset: 18
  }
}
layer {
  name: "loss"
  type: "EuclideanLoss" ## <<----
  bottom: "score"
  bottom: "ground_truth"
  top: "loss"
}
However, this does not produce anything close to the results I want. I think there is something wrong with my understanding of pixel-wise classification / regression. Could you tell me where my mistake is?
EDIT 1
For regression the retrieval of the output would look like this:
import numpy as np
import cv2

output_blob = pred['result'].data
predicted_image_array = np.array(output_blob)
predicted_image_array = predicted_image_array.squeeze()
print(predicted_image_array.shape)

# Shift and scale the raw network output into the 0..255 range
# so it can be saved as an image.
range_value = np.ptp(predicted_image_array)
min_value = predicted_image_array.min()
predicted_image_array -= min_value
if range_value != 0:
    predicted_image_array /= range_value
predicted_image_array *= 255
predicted_image_array = predicted_image_array.astype(np.uint8)
print(predicted_image_array.shape)
cv2.imwrite('predicted_output.jpg', predicted_image_array)
This is easy since the output is 1 x height x width and the values are the actual output values. But how would one retrieve the output for classification with the Softmax layer, since the output is 3 (number of labels) x height x width? I do not know the meaning of the contents of this shape.
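For the classification case, the score blob holds one channel of (pre-softmax) scores per class, so the predicted label at each pixel is simply the index of the highest-scoring channel, i.e. an argmax over the class axis. A sketch with a toy 3 x 2 x 2 score blob standing in for the squeezed network output:

```python
import numpy as np

# Toy score blob: 3 classes, 2x2 image (stand-in for the squeezed
# 3 x height x width output of the classification net).
scores = np.array([[[5.0, 0.1], [0.2, 0.3]],   # class 0 (background)
                   [[0.1, 4.0], [0.1, 0.2]],   # class 1 (circle)
                   [[0.2, 0.3], [6.0, 3.0]]])  # class 2 (rectangle)

# Per-pixel label = index of the highest-scoring channel.
predicted_labels = scores.argmax(axis=0)
print(predicted_labels)
# [[0 1]
#  [2 2]]
```

The softmax only rescales the scores into per-pixel probabilities; it does not change which channel is largest, so the argmax over the raw scores already gives the predicted class map.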
Answer 1:
First of all, your problem is not regression, but classification!
If you want to teach the net to recognise circles and rectangles you have to make a different data set: images and labels, for example circle = 0 and rectangle = 1. You do this by making a text file that contains the image paths and the image labels, for example:
/path/circle1.png 0
/path/circle2.png 0
/path/rectangle1.png 1
/path/rectangle2.png 1
Here is a nice tutorial for a problem like yours. Good luck.
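Such a listing file can be generated with a few lines of Python (a sketch; the paths and the output filename `train.txt` are illustrative):

```python
# Write an ImageData-style listing: one "path label" pair per line.
samples = [('/path/circle1.png', 0),
           ('/path/circle2.png', 0),
           ('/path/rectangle1.png', 1),
           ('/path/rectangle2.png', 1)]

with open('train.txt', 'w') as f:
    for path, label in samples:
        f.write('%s %d\n' % (path, label))
```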
Source: https://stackoverflow.com/questions/40549334/caffe-pixel-wise-classification-regression