I have a depth image that I've generated using 3D CAD data. This depth image could also come from a depth imaging sensor such as a Microsoft Kinect or any other stereo camera.
So there are a couple of things that are undefined in your question, but I'll do my best to outline an answer.
The basic idea is to take the gradient of the depth image and then apply a transformation to that gradient to get the normal vectors. Taking the gradient in MATLAB is easy:
[m, g] = imgradient(d);   % m: gradient magnitude, g: gradient direction in degrees
gives us the magnitude (m) and the direction (g) of the gradient (relative to the horizontal and measured in degrees) of the image at every point. For instance, if we display the magnitude of the gradient for your image it looks like this:

[figure: gradient magnitude of the depth image]
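A quick way to produce that view yourself (a minimal sketch, assuming m from the call above):

imshow(m, []);   % [] scales the magnitude range into the display range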
Now, the harder part is to take this information we have about the gradient and turn it into a normal vector. In order to do this properly we need to know how to transform from image coordinates to world coordinates. For a CAD-generated image like yours, this information is contained in the projection transformation used to make the image. For a real-world image like one you'd get from a Kinect, you would have to look up the spec for the image-capture device.
The key piece of information we need is this: just how wide is each pixel in real-world coordinates? For perspective (non-orthographic) projections, like those used by real-world image capture devices, we can approximate this by assuming each pixel represents light within a fixed angle of the real world. If we know this angle (call it p and measure it in radians), then the real-world distance covered by a pixel is just sin(p) .* d, or approximately p .* d, where d is the depth of the image at each pixel.
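For instance, a rough way to estimate p is to divide the camera's horizontal field of view by the image width in pixels (a sketch only; the 57-degree field of view and 640-pixel width below are approximate Kinect v1 numbers, so check your device's spec):

fov_h = 57 * pi / 180;    % horizontal field of view in radians (assumed)
npixels = 640;            % depth image width in pixels (assumed)
p = fov_h / npixels;      % angle covered by one pixel, roughly 1.55e-3 rad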
Now if we have this info, we can construct the 3 components of the normal vectors:
width = p .* d;                  % real-world width of each pixel
% convert the per-pixel gradient into world-space slopes; cosd/sind because
% g is in degrees, and dividing by width turns "per pixel" into "per world unit"
gradx = m .* cosd(g) ./ width;
grady = m .* sind(g) ./ width;
% the (unnormalized) normal of a surface z = f(x, y) is (-df/dx, -df/dy, 1)
normx = -gradx;
normy = -grady;
normz = ones(size(d));
% normalize to unit length
len = sqrt(normx .^ 2 + normy .^ 2 + normz .^ 2);
x = normx ./ len;
y = normy ./ len;
z = normz ./ len;
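To sanity-check the result, you can pack the three components into an image and view it as a normal map (a sketch; nmap is just an illustrative name):

nmap = cat(3, x, y, z);    % H-by-W-by-3 array of unit normals
imshow((nmap + 1) / 2);    % map components from [-1, 1] into [0, 1] for display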