I am working on text recognition on tires. In order to use an OCR engine, I must first get a clean binary map.
I have processed the images, but the characters come out broken, with gaps in their strokes.
You could first apply a max-filter (assign to each pixel in a new image the maximum value from a neighborhood around the same pixel in the original image), then a min-filter (assign the minimum from the same neighborhood in the max-image). Especially if you shape the neighborhood a bit wider than it is high (say, 2 or 3 pixels to the right/left, 1 pixel top/bottom), you should be able to close many of the gaps in your characters (your image appears to show gaps mainly in the horizontal direction).
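Here is a minimal sketch of that max/min sequence (grayscale dilation followed by erosion, i.e. a morphological closing) using OpenCV; the file name and the assumption that the characters are brighter than the background are mine, not something from your setup:

```python
import cv2

# Load the preprocessed tire image as grayscale (file name is just a placeholder).
img = cv2.imread("tire_text.png", cv2.IMREAD_GRAYSCALE)

# Neighborhood wider than it is high: 2 px left/right, 1 px top/bottom -> 5x3 kernel.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 3))

# Max-filter = grayscale dilation, min-filter = grayscale erosion.
# This assumes the characters are brighter than the background; if they are
# darker, swap the two steps (erode first, then dilate).
max_img = cv2.dilate(img, kernel)
closed = cv2.erode(max_img, kernel)

cv2.imwrite("tire_text_closed.png", closed)
```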
Optimal neighborhood size and shape depend on your specific problem, so you'll have to experiment some. This operation may also glue characters together - you'll possibly have to detect the blobs and split any that are too wide compared to the other blobs.
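One hedged way to spot over-wide blobs (on an already binarized image) is via connected components; the 1.5x-median-width threshold below is just an illustrative starting point, not a rule:

```python
import cv2
import numpy as np

# 'binary' is assumed to be a 0/255 image with white characters on black.
binary = cv2.imread("tire_text_binary.png", cv2.IMREAD_GRAYSCALE)

num, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
widths = stats[1:, cv2.CC_STAT_WIDTH]   # skip label 0 (the background)
median_w = np.median(widths)

# Flag blobs that are suspiciously wide compared to the rest; these are
# candidates for splitting (e.g. at a minimum of the vertical projection).
suspects = [i + 1 for i, w in enumerate(widths) if w > 1.5 * median_w]
print("blobs that may be merged characters:", suspects)
```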
edit: Also, binarization settings are absolutely key. Try several different binarization algorithms (Otsu, Sauvola, ...) to see which one (and which parameters) works best for you.
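A quick way to compare the two mentioned algorithms side by side (Otsu via OpenCV, Sauvola via scikit-image); the Sauvola window size and k are just common starting values to experiment from:

```python
import cv2
from skimage.filters import threshold_sauvola

img = cv2.imread("tire_text_closed.png", cv2.IMREAD_GRAYSCALE)

# Global Otsu threshold.
_, otsu = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Local Sauvola threshold; window_size and k are typical defaults.
sauvola_t = threshold_sauvola(img, window_size=25, k=0.2)
sauvola = (img > sauvola_t).astype("uint8") * 255

cv2.imwrite("binary_otsu.png", otsu)
cv2.imwrite("binary_sauvola.png", sauvola)
```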