I am currently writing a compiler and I'm in the lexer phase.
I know that the lexer tokenizes the input stream.
However, consider the following stream:
There is no simple answer for the general case.
Usually it is easier to have the lexer identify "higher level" elements such as identifiers, or even types or variables, if the grammar of the language allows it. The more dynamic the grammar is, and the more the interpretation of a token depends on the internal state of the parser, the easier it may be to leave that interpretation to the parser; otherwise the communication between lexer and parser can get overly complex. (For example, consider a language where int is a type in one location, a valid variable name in another, and a language keyword in a third case.)
As a rule of thumb: let the lexer do all the work that keeps the grammar easy without causing extra complexity between lexer and parser.
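To make the separation concrete, here is a minimal sketch (not from the question; the token kinds, the classification rule, and the example input are my own assumptions) in which the lexer only emits generic IDENTIFIER tokens and a later stage decides whether a word like int acts as a type or a variable name based on its neighbours:

    from collections import namedtuple
    import re

    Token = namedtuple("Token", ["kind", "text"])

    def lex(source):
        """Tokenize into generic kinds only; no keyword/type decisions here."""
        token_spec = [
            ("NUMBER",     r"\d+"),
            ("IDENTIFIER", r"[A-Za-z_]\w*"),
            ("SYMBOL",     r"[=;()+\-*/]"),
            ("SKIP",       r"\s+"),
        ]
        regex = "|".join(f"(?P<{name}>{pattern})" for name, pattern in token_spec)
        for match in re.finditer(regex, source):
            if match.lastgroup != "SKIP":
                yield Token(match.lastgroup, match.group())

    def classify(tokens):
        """Toy 'parser' step: the same IDENTIFIER text becomes a type name or
        a variable name depending on where it appears."""
        tokens = list(tokens)
        roles = []
        for i, tok in enumerate(tokens):
            if tok.kind == "IDENTIFIER":
                nxt = tokens[i + 1] if i + 1 < len(tokens) else None
                # Hypothetical rule: an identifier directly followed by another
                # identifier is treated as a type in a declaration, otherwise
                # as a variable reference.
                if nxt is not None and nxt.kind == "IDENTIFIER":
                    roles.append((tok.text, "type"))
                else:
                    roles.append((tok.text, "variable"))
        return roles

    print(classify(lex("int int = 3;")))
    # [('int', 'type'), ('int', 'variable')]

Here the lexer stays simple and context-free, and all context-dependent meaning lives in one place; if your language is less dynamic, you can instead hard-code keywords in the lexer and keep the parser smaller.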