Longest match lexer option to mimic the Unix tool Lex #1490
AiStudent wants to merge 2 commits into lark-parser:master from
Conversation
IMO, it is better to just write your grammar correctly, which I am pretty sure is always possible. If your grammar is not ambiguous, you will get nice performance guarantees. Note that if you really don't want to adjust your grammar, you can just create a custom lexer and pass it to
I share @MegaIng's concerns regarding the performance of this lexing method. If we were to include this behavior as an official lexer, I think it makes more sense to specify a subset of "competing" terminals rather than the entire set of terminals; i.e., only (AC, AB) would be evaluated for length, not every terminal in the grammar. Another thing that is maybe worth pointing out: regexes are technically capable of solving this particular example using a single match. If there was a way to manually (or even automatically?) merge these terminals, and later discern which one was matched, I believe that would address the performance issues while still supporting this behavior.
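The merge-and-discern idea can be sketched with Python's `re` named groups. The merged pattern below is my guess at what such a combination could look like for the AB/AC example; after a match, `lastgroup` reports which terminal's alternative actually matched:

```python
import re

# Hypothetical merged pattern for the two "competing" terminals AB and AC.
# Each terminal becomes a named alternative in one regex; a single match
# attempt covers both, and `lastgroup` discerns which one matched.
MERGED = re.compile(r"(?P<AB>ab)|(?P<AC>ac)")

m = MERGED.match("ac")
print(m.lastgroup, m.group())  # AC ac
```

This relies on the regex engine trying alternatives left to right, so terminal precedence maps onto alternative order inside the merged pattern.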
Hi. I added a lexer for the LALR parser as a complement, which mimics the behavior of the Unix tool Lex and libraries such as flex. The behavior in question: match the longest match found; if there are multiple longest matches, precedence follows the order in which the terminals are defined.
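That behavior can be sketched in plain Python (this is not the PR's actual code; the terminal names and patterns are illustrative):

```python
import re

def tokenize(terminals, text):
    """Lex-style scan: at each position, try every terminal and emit the
    longest match; on a tie, the terminal defined first wins."""
    compiled = [(name, re.compile(pat)) for name, pat in terminals]
    pos, tokens = 0, []
    while pos < len(text):
        best_name, best_end = None, pos
        for name, regex in compiled:
            m = regex.match(text, pos)
            # strict '>' keeps the first-defined terminal on equal lengths,
            # and also skips zero-length matches (no infinite loop)
            if m and m.end() > best_end:
                best_name, best_end = name, m.end()
        if best_name is None:
            raise ValueError(f"no terminal matches at position {pos}")
        tokens.append((best_name, text[pos:best_end]))
        pos = best_end
    return tokens

# Definition order doubles as the tie-break order, as in Lex:
print(tokenize([("AB", "ab"), ("AC", "ac")], "abac"))
```

Note the cost this implies: every terminal is tried at every position, which is the performance concern raised above.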
This also means that the terminals are not sorted according to the priority rules defined in grammar.md.
By not relying on the above rules for precedence, it is possible to use grammars defined for Lex and its derivatives in Lark.
Below follows an example using `longest_match`. This is a grammar that neither the basic nor the contextual lexer can deal with: both will use AB, and the contextual lexer will not try AC, as AB is a possible token to parse from the start. It's not possible for the programmer to set the precedence in this scenario to tokenize "ab" or "ac" correctly.

Using basic yields:
Using contextual yields:
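The original grammar and lexer outputs are not reproduced above, but the failure mode can be illustrated in plain Python under one assumption: that AB's pattern can also match a bare "a" (here AB: /ab?/ and AC: /ac/, both hypothetical). A priority-first lexer commits to AB at the start of "ac" and is then stuck on "c"; a longest-match lexer picks AC:

```python
import re

# Hypothetical patterns reconstructing the AB/AC conflict:
AB = re.compile(r"ab?")   # assumed: AB can also match just "a"
AC = re.compile(r"ac")

text = "ac"
by_priority = AB.match(text) or AC.match(text)   # AB tried first, wins with "a"
candidates = [m for m in (AB.match(text), AC.match(text)) if m]
by_length = max(candidates, key=lambda m: len(m.group()))

print(by_priority.group())  # "a"  -- leaves "c" with no matching terminal
print(by_length.group())    # "ac"
```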
Regarding the implementation, it consists of a new lexer and a new scanner: LongestMatchLexer, which inherits from BasicLexer, and LongestMatchScanner, which attempts to match against every terminal and yields the longest match. (Not optimal, but it's an option.)
It seems some other users have attempted to use longest matches (as I did when I first used Lark):
#370
#1463
Edit:
An issue with using earley instead is that it may not yield the desired result for ambiguous grammars, such as the example below (a simplification of a grammar that worked with a Lex derivative):
Which yields: