Ok, I am reading in .dat files into a byte array. For some reason, the people who generate these files put about half a meg's worth of useless null bytes at the end of the file.
If null bytes can be valid values within the data, do you know that the last byte of the real data cannot be null? If so, iterating backwards and looking for the first non-null byte is probably the best approach; if not, there is no way to tell where the actual data ends.
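For what it's worth, a minimal sketch of that backward scan in Java (findDataEnd is just a name for this example; adapt to whatever language you are using):

static int findDataEnd(byte[] data)
{
    // Assumes the real data never ends in a null byte; otherwise a
    // trailing real null is indistinguishable from the padding.
    int i = data.length - 1;
    while (i >= 0 && data[i] == 0)
        i--;
    return i + 1; // length of the real data (0 if the whole file is null)
}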
If you know more about the data format, such as that the real data never contains more than a couple of consecutive null bytes (or some similar constraint), then you may be able to do a binary search for the 'transition point'. This should be much faster than the linear search (assuming you can read the whole file into memory).
The basic idea (using the stronger assumption that the real data never contains two consecutive null bytes), sketched here as a Java method, would be:
static int findDataLength(byte[] data)
{
    if (data.length < 2) return data.length; // nothing to search

    int index = data.length / 2;
    int jmpsize = data.length / 2;

    while (true)
    {
        jmpsize /= 2; // integer division
        if (jmpsize == 0) break;

        byte b1 = data[index];
        byte b2 = data[index + 1];
        if (b1 == 0 && b2 == 0)  // two nulls: we are in the padding, go left
            index -= jmpsize;
        else                     // still in real data, go right
            index += jmpsize;
    }

    if (index == data.length - 1) return data.length; // drifted to the end: no padding found

    byte b1 = data[index];
    byte b2 = data[index + 1];
    if (b2 == 0)
    {
        if (b1 == 0) return index;  // both null: data ends at index
        else return index + 1;      // data[index] is the last real byte
    }
    else return index + 2;          // data[index + 1] is still real data
}
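To put it together, roughly how you would use it after reading the file (Java sketch; "input.dat" is a placeholder path, findDataLength is the method above, and the caller has to handle the IOException thrown by Files.readAllBytes):

byte[] raw = java.nio.file.Files.readAllBytes(java.nio.file.Paths.get("input.dat"));
int realLength = findDataLength(raw); // or findDataEnd(raw) for the backward scan
byte[] trimmed = java.util.Arrays.copyOf(raw, realLength); // trailing null padding dropped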