parsec

Python implementation of Parsec?

Submitted by 冷眼眸甩不掉的悲伤 on 2019-12-04 00:35:38
I recently wrote a parser in Python using Ply (a Python reimplementation of yacc). When I was almost done with the parser, I discovered that the grammar I need to parse requires me to do some lookup during parsing to inform the lexer; without that lookup I cannot correctly parse the strings in the language. Given that I can control the state of the lexer from the grammar rules, I think I'll solve my use case with a lookup table in the parser module, but it may become too difficult to maintain and test. So I want to know about some of the other options. In Haskell …
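The question breaks off at "In Haskell", where the asker presumably meant to contrast Ply with Parsec. In Parsec, this kind of lexer feedback is usually done through the user state: earlier parse results are recorded and later parsing consults them. A hypothetical sketch (the `declare`/`use` mini-language is invented for illustration):

```haskell
import qualified Data.Set as Set
import Text.Parsec

-- User state carries the names seen so far, so earlier parse results
-- can steer how later input is read -- the feedback for which Ply
-- needs a lookup table.
type P a = Parsec String (Set.Set String) a

ident :: P String
ident = many1 letter <* spaces

declaration :: P ()
declaration = do
  _ <- string "declare" <* spaces
  n <- ident
  modifyState (Set.insert n)        -- remember the name for later

use :: P String
use = do
  _ <- string "use" <* spaces
  n <- ident
  known <- getState
  if n `Set.member` known
    then return n
    else unexpected ("undeclared name " ++ show n)
```

`use` only succeeds for names a prior `declaration` put into the state, so the grammar itself performs the lookup the Ply version would keep in a table.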

In Parsec, is there a way to prevent lexeme from consuming newlines?

Submitted by 时光怂恿深爱的人放手 on 2019-12-03 12:00:38
All of the parsers in Text.Parsec.Token politely use lexeme to eat whitespace after a token. Unfortunately for me, that whitespace includes newlines, which I want to use as expression terminators. Is there a way to convince lexeme to leave newlines alone? No, there is not. Here is the relevant code from Text.Parsec.Token:

    lexeme p = do { x <- p; whiteSpace; return x }

    whiteSpace
      | noLine && noMulti = skipMany (simpleSpace <?> "")
      | noLine            = skipMany (simpleSpace <|> multiLineComment <?> "")
      | noMulti           = skipMany (simpleSpace <|> oneLineComment <?> "")
      | otherwise         = skipMany (simpleSpace <|> oneLineComment <|> multiLineComment <?> "")
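Since the built-in whiteSpace cannot be changed, the usual workaround is to skip Text.Parsec.Token and define your own lexeme that eats only horizontal whitespace. A minimal sketch (all names here are ours, not part of the library):

```haskell
import Text.Parsec
import Text.Parsec.String (Parser)

-- Skip spaces and tabs, but *not* newlines.
hspace :: Parser ()
hspace = skipMany (oneOf " \t")

-- A lexeme that leaves newlines for the expression grammar to see.
lexeme' :: Parser a -> Parser a
lexeme' p = p <* hspace

symbol :: String -> Parser String
symbol = lexeme' . string

-- Newline is now an ordinary token usable as a terminator.
terminator :: Parser ()
terminator = lexeme' (() <$ newline)
```

With `lexeme'` in place of the library's `lexeme`, a statement parser can end each expression with `terminator` instead of having the newline silently consumed.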

User state in Parsec

Submitted by ↘锁芯ラ on 2019-12-03 10:21:45
I'm parsing an expression using Parsec and I want to keep track of the variables in these expressions using Parsec's user state. Unfortunately I don't really get how to do it. Given the following code:

    import Data.Set as Set

    inp = "$x = $y + $z"

    data Var = Var String

    var = do
      char '$'
      n <- many1 letter
      let v = Var n   -- I want to modify the set of variables here
      return v

    parseAssignment = ...  -- parses the above assignment

    run = case runIdentity $ runParserT parseAssignment Set.empty "" inp of
      Left err -> ...
      Right r  -> ...

So the u in ParsecT s u m a would be Set.Set. But how would I integrate …
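The missing piece is `modifyState`, which updates the user state from inside a parser. A sketch along the lines of the question's code (`collectVars` is our illustrative driver, not from the question):

```haskell
import qualified Data.Set as Set
import Text.Parsec

type VarParser a = Parsec String (Set.Set String) a

-- Record each $name in the user state as it is parsed.
var :: VarParser String
var = do
  _ <- char '$'
  n <- many1 letter
  modifyState (Set.insert n)     -- this is the step the question asks about
  return n

-- Collect every variable mentioned anywhere in the input.
collectVars :: String -> Either ParseError (Set.Set String)
collectVars =
  runParser (skipMany ((() <$ try var) <|> (() <$ anyChar)) >> getState)
            Set.empty ""
```

For the question's input, `collectVars "$x = $y + $z"` yields `Right (fromList ["x","y","z"])`; `getState` at the end reads back everything `modifyState` accumulated.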

Using Parsec to parse regular expressions

Submitted by 浪子不回头ぞ on 2019-12-03 07:45:19
I'm trying to learn Parsec by implementing a small regular expression parser. In BNF, my grammar looks something like:

    EXP : EXP * | LIT EXP | LIT

I've tried to implement this in Haskell as:

    expr = try star <|> try litE <|> lit

    litE = do
      c <- noneOf "*"
      rest <- expr
      return (c : rest)

    lit = do
      c <- noneOf "*"
      return [c]

    star = do
      content <- expr
      char '*'
      return (content ++ "*")

There are some infinite loops here though (e.g. expr -> star -> expr without consuming any tokens), which makes the parser loop forever. I'm not really sure how to fix it, because the very nature of star is that it …
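The loop comes from the left-recursive production EXP : EXP *. The standard fix is to eliminate the left recursion by treating '*' as a postfix operator on the unit it follows. A sketch (note this makes '*' bind to the preceding literal, as in real regexes, rather than to the whole expression):

```haskell
import Text.Parsec
import Text.Parsec.String (Parser)

-- Left recursion removed: a term is a literal followed by any number
-- of postfix '*'s, and an expression is one or more terms.
term :: Parser String
term = do
  c     <- noneOf "*"
  stars <- many (char '*')
  return (c : stars)

expr :: Parser String
expr = concat <$> many1 term
```

Every alternative now consumes at least one character before recursing, so the parser always makes progress.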

How do Scala parser combinators compare to Haskell's Parsec? [closed]

Submitted by 杀马特。学长 韩版系。学妹 on 2019-12-03 04:54:11

Question (closed as needing more focus): I have read that Haskell parser combinators (in Parsec) can parse context-sensitive grammars. Is this also true for Scala parser combinators? If so, is this what the "into" (aka ">>") function is for? What are some strengths/weaknesses of Scala's implementation of parser combinators vs. Haskell's? Do they accept the same class of grammars? Is it easier to generate error messages or do other miscellaneous useful things with one or the other? How does packrat parsing (introduced in Scala 2.8) fit into this picture? Is there a webpage or some other resource that shows how different operators …

What is the advantage of using a parser generator like happy as opposed to using parser combinators?

Submitted by 馋奶兔 on 2019-12-03 04:14:27

Question: To learn how to write and parse a context-free grammar I want to choose a tool. For Haskell, there are two big options: Happy, which generates a parser from a grammar description, and *Parsec, which lets you code a parser directly in Haskell. What are the (dis)advantages of either approach?

Answer 1: External vs. internal DSL. The parser specification format for Happy is an external DSL, whereas with Parsec you have the full power of Haskell available when defining your parsers. This means that you can, for example, write functions to generate parsers, use Template Haskell, and so on. Precedence rules …
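The internal-DSL point is concrete: because Parsec parsers are ordinary Haskell values, plain functions can build new parsers from old ones, which a Happy grammar file cannot express. Two small illustrative examples (names are ours):

```haskell
import Text.Parsec
import Text.Parsec.String (Parser)

-- A parser transformer: take any parser and produce a parser for a
-- comma-separated list of it.
commaSep1 :: Parser a -> Parser [a]
commaSep1 p = p `sepBy1` (char ',' <* spaces)

-- A parser generated from plain data: one alternative per keyword.
keyword :: [String] -> Parser String
keyword = choice . map (try . string)
```

In Happy, each such pattern would need its own grammar rules; here `commaSep1 digit` or `keyword ["let", "in", "where"]` are just expressions.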

Should I use a lexer when using a parser combinator library like Parsec?

Submitted by 前提是你 on 2019-12-03 01:52:15

Question: When writing a parser with a parser combinator library like Haskell's Parsec, you usually have two choices: write a lexer to split your String input into tokens and then parse the [Token] stream, or write your parser combinators directly on String. The first method often seems to make sense, given that many parsing inputs can be understood as tokens separated by whitespace. Elsewhere, I have seen people recommend against tokenizing (also called scanning or lexing), with simplicity being …
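For context on the first choice: Parsec can parse any token stream, not just String, via `tokenPrim`. A minimal sketch of parsing over a [Tok] produced by a separate lexer (the token type and helpers are invented for illustration):

```haskell
import Text.Parsec

-- A tiny token type, as a lexer might produce it.
data Tok = TNum Int | TPlus deriving (Eq, Show)

type TokParser a = Parsec [Tok] () a

-- Lift a token predicate into a parser; the position function is a
-- stub because these example tokens carry no source positions.
satisfyTok :: (Tok -> Maybe a) -> TokParser a
satisfyTok = tokenPrim show (\pos _ _ -> pos)

number :: TokParser Int
number = satisfyTok (\t -> case t of TNum n -> Just n; _ -> Nothing)

plus :: TokParser ()
plus = satisfyTok (\t -> case t of TPlus -> Just (); _ -> Nothing)

-- "1 + 2 + 3" as a token stream: [TNum 1, TPlus, TNum 2, TPlus, TNum 3]
sumExpr :: TokParser Int
sumExpr = foldl (+) <$> number <*> many (plus *> number)
```

The trade-off the question is about is then visible: the token-stream parser is simpler and faster per decision, but you give up character-level error positions unless your tokens carry them (see the next question).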

Haskell Parsec - error messages are less helpful while using custom tokens

Submitted by [亡魂溺海] on 2019-12-02 18:37:47

I'm working on separating the lexing and parsing stages of a parser. After some tests, I realized that error messages are less helpful when I'm using tokens other than Parsec's Char tokens. Here are some examples of Parsec's error messages while using Char tokens:

    ghci> P.parseTest (string "asdf" >> spaces >> string "ok") "asdf wrong"
    parse error at (line 1, column 7):
    unexpected "w"
    expecting space or "ok"

    ghci> P.parseTest (choice [string "ok", string "nop"]) "wrong"
    parse error at (line 1, column 1):
    unexpected "w"
    expecting "ok" or "nop"

So the string parser shows what string is expected when …
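Parsec's good Char-level messages come from the show and position functions its built-in parsers pass to `token`; custom token parsers have to supply equivalents themselves. A sketch of a token parser that keeps positions from the lexer and labels itself with `<?>` (the token type is invented for illustration):

```haskell
import Text.Parsec
import Text.Parsec.Pos (SourcePos)

data Tok = TStr String | TSpace deriving (Eq, Show)

-- Pair each token with the position the lexer saw it at, so error
-- messages can point into the original source text.
type PosTok = (SourcePos, Tok)

tok :: Tok -> Parsec [PosTok] () Tok
tok t = token showTok posOf testTok <?> show t
  where
    showTok (_, x) = show x      -- printed after "unexpected ..."
    posOf   (p, _) = p           -- used for the error location
    testTok (_, x) = if x == t then Just x else Nothing
```

With positions attached and `<?>` labels on each token parser, errors over a custom token stream report real line/column locations and "expecting ..." alternatives, much like the Char-based examples above.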

