gradient descent seems to fail

忘掉有多难 2020-12-12 15:54

I implemented a gradient descent algorithm to minimize a cost function in order to obtain a hypothesis for determining whether an image is of good quality. I did that in Octave.

9 Answers
  •  感情败类
    2020-12-12 16:28

    I think that your computeCost function is wrong. I attended Prof. Ng's class last year, and I have the following (vectorized) implementation:

    m = length(y);                      % number of training examples
    predictions = X * theta;            % predicted values for all examples
    sqrErrors = (predictions - y).^2;   % squared errors per example

    J = 1/(2*m) * sum(sqrErrors);       % halved mean of the squared errors
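
    A quick way to validate the cost function is to call it on a tiny data set where the expected value is easy to work out by hand. A minimal sketch, assuming the usual computeCost(X, y, theta) signature from the exercise (the example data is my own, not from the question):

    X = [1 1; 1 2; 1 3];          % first column of ones for the intercept term
    y = [1; 2; 3];
    computeCost(X, y, [0; 1])     % perfect fit, expected cost: 0
    computeCost(X, y, [0; 0])     % expected cost: 14 / (2*3) = 2.3333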
    

    The rest of the implementation seems fine to me, although you could vectorize it as well.

    theta_1 = theta(1) - alpha * (1/m) * sum((X*theta-y).*X(:,1));   % update for the intercept term
    theta_2 = theta(2) - alpha * (1/m) * sum((X*theta-y).*X(:,2));   % update for the feature term
    

    Afterwards, you correctly assign the temporary thetas (here called theta_1 and theta_2) back to the actual theta, as shown below.
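
    For reference, that copy-back step is just two assignments made after both temporaries have been computed:

    theta(1) = theta_1;   % overwrite theta only now, so both updates used the old values
    theta(2) = theta_2;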

    Generally, it is better to vectorize than to use loops; vectorized code is easier to read and to debug.
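
    As an illustration, the whole update can be written in one vectorized line (a sketch, assuming X already contains the column of ones and theta is a column vector):

    theta = theta - alpha * (1/m) * X' * (X*theta - y);   % simultaneous update of all parameters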
