I have a file with many lines. In each line there are many columns (fields) separated by a blank " ", and the number of columns differs from line to line. I want to remove the first two columns from each line. How can I do that?
You can use sed:
sed 's/^[^ ][^ ]* [^ ][^ ]* //'
This looks for lines starting with one or more non-blanks, a blank, another run of one or more non-blanks, and another blank, and deletes the matched material, i.e. the first two fields. The [^ ][^ ]* is marginally shorter than the equivalent but more explicit [^ ]\{1,\} notation, and the second might run into issues with GNU sed (though with the --posix option, even GNU sed can't screw it up). OTOH, if the character class to be repeated were more complex, the numbered notation wins for brevity. It is easy to extend this to handle 'blank or tab' as the separator, or 'multiple blanks', or 'multiple blanks or tabs'. It could also be modified to handle optional leading blanks (or tabs) before the first field, etc.
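For example, a variant that allows optional leading blanks or tabs and treats any run of blanks or tabs as the separator could look like this (a sketch; [[:blank:]] is the POSIX character class for space or tab):

sed 's/^[[:blank:]]*[^[:blank:]][^[:blank:]]*[[:blank:]][[:blank:]]*[^[:blank:]][^[:blank:]]*[[:blank:]][[:blank:]]*//'

The structure is the same as before: skip any leading blanks, match the first field, one or more separators, the second field, one or more separators, and delete the lot.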
For awk and cut, see Sampson-Chen's answer. There are other ways to write the awk script, but they're not materially better than the answer given. Note that you might need to set the field separator explicitly (-F" ") in awk if you do not want tabs treated as separators, or you might have multiple blanks between fields. The POSIX standard cut does not support multiple separators between fields; GNU cut has the useful but non-standard -i option to allow for multiple separators between fields.
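For reference, one such awk formulation (a sketch, not necessarily the one in that answer) prints everything from the third field onwards; with the default FS, leading blanks and runs of blanks or tabs are handled for you:

awk '{ for (i = 3; i <= NF; i++) printf("%s%s", $i, (i < NF ? OFS : ORS)) }' in-file

Note that lines with fewer than three fields produce no output line with this loop. With data separated by exactly one blank, cut -d' ' -f3- in-file removes the first two fields in much the same way.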
You can also do it in pure shell:
while read junk1 junk2 residue
do echo "$residue"
done < in-file > out-file
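If the input can contain backslashes, use read -r so they are kept intact (plain read treats backslash as an escape character); the rest is unchanged:

while read -r junk1 junk2 residue
do echo "$residue"
done < in-file > out-file

Either way, the spacing between the remaining fields is preserved in $residue, though trailing blanks on each line are trimmed.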