Currently, literal parsing happens in two stages: the lexer performs an initial discrimination, while full validation and parsing (computing the underlying value) is deferred to AST construction. This is wasteful, and buggy, because the lexer does not reject every invalid literal token.
A better solution is to integrate the integer and float literal state machines into the lexer, and have it append each computed value to an ArrayList(u64) literal "tape". The parser can then pop values off this tape whenever it encounters a corresponding .int_lit or .float_lit token.
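A minimal sketch of the tape idea, written in Rust for illustration. Every name here (`Lexer`, `lex_int`, the `tape` field, the token kind) is hypothetical, not the project's actual API; the point is that the lexer computes the value in a single pass, rejects invalid digits and overflow at lex time, and leaves only an index-free value on the tape for the parser to pop.

```rust
// Hypothetical lexer-side literal "tape": the lexer computes each
// integer literal's value as it scans and appends it to a Vec<u64>;
// the parser later pops values in token order.

#[derive(Debug, PartialEq)]
enum Token {
    IntLit, // the value lives on the tape, not in the token
}

struct Lexer {
    tape: Vec<u64>, // literal tape, consumed by the parser
}

impl Lexer {
    fn new() -> Self {
        Lexer { tape: Vec::new() }
    }

    // Scan a decimal integer literal, rejecting invalid digits and
    // overflow here instead of deferring the work to AST construction.
    fn lex_int(&mut self, text: &str) -> Result<Token, String> {
        let mut value: u64 = 0;
        for c in text.chars() {
            if c == '_' {
                continue; // digit separator, skipped
            }
            let d = c
                .to_digit(10)
                .ok_or_else(|| format!("invalid digit {c:?}"))? as u64;
            value = value
                .checked_mul(10)
                .and_then(|v| v.checked_add(d))
                .ok_or_else(|| "integer literal overflows u64".to_string())?;
        }
        self.tape.push(value);
        Ok(Token::IntLit)
    }
}

fn main() {
    let mut lx = Lexer::new();
    assert_eq!(lx.lex_int("1_000"), Ok(Token::IntLit));
    assert!(lx.lex_int("12x3").is_err()); // rejected at lex time, nothing pushed
    assert_eq!(lx.tape, vec![1000]);
    println!("tape = {:?}", lx.tape);
}
```

Float literals would follow the same shape, with the lexer storing the value's bit pattern in the u64 slot; that detail is omitted here to keep the sketch small.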