antlr

How to implement the visitor pattern for nested function

ε祈祈猫儿з submitted on 2019-12-05 03:38:26
I am a newbie to ANTLR and I want the implementation below to be done using ANTLR4. I have the following functions:

1. FUNCTION.add(Integer a, Integer b)
2. FUNCTION.concat(String a, String b)
3. FUNCTION.mul(Integer a, Integer b)

And I am storing the function metadata like this:

    Map<String, String> map = new HashMap<>();
    map.put("FUNCTION.add", "Integer:Integer,Integer");
    map.put("FUNCTION.concat", "String:String,String");
    map.put("FUNCTION.mul", "Integer:Integer,Integer");

where "Integer:Integer,Integer" means that Integer is the return type, and the input params the function accepts are…
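Independent of ANTLR, the metadata lookup itself can be sketched in plain Java. The class and method names below are hypothetical (not part of any ANTLR API); the sketch only shows how a visitor could type-check a nested call by feeding an inner call's declared return type into the outer call's argument list.

```java
import java.util.*;

// Minimal sketch (no ANTLR dependency): resolving nested FUNCTION.* calls
// against the metadata map from the question. All names here are
// hypothetical illustrations, not ANTLR classes.
public class FunctionResolver {
    static final Map<String, String> META = new HashMap<>();
    static {
        META.put("FUNCTION.add", "Integer:Integer,Integer");
        META.put("FUNCTION.concat", "String:String,String");
        META.put("FUNCTION.mul", "Integer:Integer,Integer");
    }

    // Returns the declared return type of a call after checking arity
    // and argument types against the "Return:Arg1,Arg2" signature string.
    static String returnType(String name, List<String> argTypes) {
        String sig = META.get(name);
        if (sig == null) throw new IllegalArgumentException("unknown function: " + name);
        String[] parts = sig.split(":");
        List<String> expected = Arrays.asList(parts[1].split(","));
        if (!expected.equals(argTypes))
            throw new IllegalArgumentException(name + " expects " + expected + ", got " + argTypes);
        return parts[0];
    }

    public static void main(String[] args) {
        // Nested call FUNCTION.add(FUNCTION.mul(2, 3), 4): the inner call's
        // return type becomes one of the outer call's argument types.
        String inner = returnType("FUNCTION.mul", Arrays.asList("Integer", "Integer"));
        String outer = returnType("FUNCTION.add", Arrays.asList(inner, "Integer"));
        System.out.println(outer);
    }
}
```

In an ANTLR4 visitor, the same check would run in the visit method for a function-call rule, recursing into argument subtrees first.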

ANTLR JavaScript Target

走远了吗. submitted on 2019-12-05 02:51:56
I have been using ANTLR to generate a parser + tree grammar for a markup language with the Java target, which works fine. Now I am trying to target JavaScript so I can use it in my web browser. However, I have not been able to locate any good documentation on how to go about doing this. I am using Eclipse with ANTLR IDE, and when I specify the language as JavaScript, I get the following errors:

    Multiple markers at this line (10): internal error: group JavaScript does not
    satisfy interface ANTLRCore: mismatched arguments on these templates
    [treeParser(grammar, name, scopes, tokens, tokenNames,…

What is minimal sample Gradle project for ANTLR4 (with antlr plugin)?

天大地大妈咪最大 submitted on 2019-12-05 02:05:39
I have created a new Gradle project, added apply plugin: 'antlr' and dependencies { antlr "org.antlr:antlr4:4.5.3" } to build.gradle, and created a src/main/antlr/test.g4 file with the following content:

    grammar test;
    r : 'hello' ID;
    ID : [a-z]+ ;
    WS : [ \t\r\n]+ -> skip ;

But it doesn't work: no Java source files are generated (and no errors occur). What did I miss? The project is here: https://github.com/dims12/AntlrGradlePluginTest2

UPDATE: I found my sample actually works, but it puts the code into \build\generated-src, which I was not expecting :shame: I will add onto other answers here. Issue 1: Generated…
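For reference, a minimal build.gradle along these lines is enough for the 'antlr' plugin to run (version numbers are examples; the -visitor flag is optional). As the UPDATE notes, generated sources land under build/generated-src/antlr/main by default, not under src/, which is why nothing appears to happen.

```groovy
// Minimal sketch of a build.gradle for the Gradle 'antlr' plugin.
apply plugin: 'java'
apply plugin: 'antlr'

repositories {
    mavenCentral()
}

dependencies {
    // putting the ANTLR 4 tool on the 'antlr' configuration selects ANTLR 4
    antlr "org.antlr:antlr4:4.5.3"
}

generateGrammarSource {
    // optional: pass extra flags to the ANTLR tool
    arguments += ["-visitor"]
}
```

Running `gradle build` (or `gradle generateGrammarSource`) should then emit the lexer/parser sources under build/generated-src/antlr/main.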

Is ANTLR an appropriate tool to serialize/deserialize a binary data format?

雨燕双飞 submitted on 2019-12-05 01:37:41
I need to read and write octet streams to send over various networks to communicate with smart electric meters. There is an ANSI standard, ANSI C12.19, that describes the binary data format. While the data format is not overly complex, the standard is very large (500+ pages), in that it describes many distinct types. The standard is fully described by an EBNF grammar. I am considering using ANTLR to read the EBNF grammar, or a modified version of it, and create C# classes that can read and write the octet stream. Is this a good use of ANTLR? If so, what do I need to do to be able to utilize…
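For contrast, the usual alternative for fixed binary layouts is a small hand-written reader. The sketch below (in Java rather than C#, and using a made-up record layout of 1-byte type, 2-byte big-endian length, payload — NOT the actual C12.19 tables) illustrates the kind of decoder that a grammar-driven tool would otherwise have to generate.

```java
import java.io.*;

// Hand-rolled binary decoding with java.io.DataInputStream. The record
// layout here is hypothetical and only illustrates the approach.
public class RecordReader {
    public static String describe(byte[] octets) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(octets));
        int type = in.readUnsignedByte();     // 1-byte record type
        int length = in.readUnsignedShort();  // 2-byte big-endian payload length
        byte[] payload = new byte[length];
        in.readFully(payload);                // throws EOFException if truncated
        return "type=" + type + " len=" + length;
    }

    public static void main(String[] args) throws IOException {
        byte[] frame = {0x07, 0x00, 0x03, 0x41, 0x42, 0x43}; // type 7, 3-byte payload
        System.out.println(describe(frame));
    }
}
```

ANTLR lexers are built around character streams, so applying it to octet streams typically means treating bytes as characters or generating code from the EBNF offline rather than parsing wire data directly.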

How can i see the live parse tree using Antlr4 Ide in Eclipse?

て烟熏妆下的殇ゞ submitted on 2019-12-05 00:45:38
I'm new to using ANTLR4, but I know a plugin exists for Eclipse. I have a simple question: after I have created the g4 file, how can I visualize the live parse tree in order to see the tree of an input expression? Thanks.

Currently there is no provision for viewing a live parse tree in ANTLR 4 IDE for Eclipse. Meanwhile, you can see the parse tree using the -gui switch on the command line. It also provides a feature for saving the parse tree as a PNG.

After installing the Antlr4Ide plugin in Eclipse:
Window > Show View > Other, Antlr4 > Parse tree
Activate the g4 file
Click a parsing rule in the g4 file; the parse tree now shows…

ANTLR vs. Happy vs. other parser generators

。_饼干妹妹 submitted on 2019-12-04 23:13:28
I want to write a translator between two languages, and after some reading on the Internet I've decided to go with ANTLR. I had to learn it from scratch, but besides some trouble with eliminating left recursion, everything went fine until now. However, today someone told me to check out Happy, a Haskell-based parser generator. I have no Haskell knowledge, so I could use some advice on whether Happy is indeed better than ANTLR and whether it's worth learning. Specifically, what concerns me is that my…

How to make CMake target executed whether specified file was changed?

倾然丶 夕夏残阳落幕 submitted on 2019-12-04 23:12:01
I'm trying to use ANTLR in my C++ project. I made a target that runs the ANTLR generator for the specified grammar and made the main project depend on it.

    ADD_CUSTOM_TARGET(GenerateParser
        COMMAND ${ANTLR_COMMAND} ${PROJECT_SOURCE_DIR}/src/MyGrammar.g -o ${PROJECT_SOURCE_DIR}/src/MyGrammar
    )
    ADD_LIBRARY(MainProject ${LIBRARY_TYPE} ${TARGET_SOURCES} ${TARGET_OPTIONS})
    ADD_DEPENDENCIES(MainProject GenerateParser)

The problem is that the ANTLR generator runs every time I build the project and consumes quite some time. How can I make it run only when my grammar has changed? Or maybe it is possible to make…
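A custom target is always considered out of date, which is why the generator reruns on every build. The usual fix is add_custom_command with OUTPUT and DEPENDS, so CMake compares timestamps of the grammar file against the generated files. A sketch using the question's variables (the generated file names are examples; adjust to what your ANTLR run actually emits):

```cmake
# add_custom_command with OUTPUT/DEPENDS lets CMake skip generation
# when MyGrammar.g is unchanged; ADD_CUSTOM_TARGET never skips.
set(GENERATED_SRC
    ${PROJECT_SOURCE_DIR}/src/MyGrammar/MyGrammarLexer.cpp
    ${PROJECT_SOURCE_DIR}/src/MyGrammar/MyGrammarParser.cpp
)

add_custom_command(
    OUTPUT ${GENERATED_SRC}
    COMMAND ${ANTLR_COMMAND} ${PROJECT_SOURCE_DIR}/src/MyGrammar.g
            -o ${PROJECT_SOURCE_DIR}/src/MyGrammar
    DEPENDS ${PROJECT_SOURCE_DIR}/src/MyGrammar.g
    COMMENT "Generating parser from MyGrammar.g"
)

# listing the generated sources in the library makes it depend on the command
add_library(MainProject ${LIBRARY_TYPE} ${TARGET_SOURCES} ${GENERATED_SRC})
```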

How Lexer lookahead works with greedy and non-greedy matching in ANTLR3 and ANTLR4?

北城以北 submitted on 2019-12-04 21:35:01
If someone would clear my mind of the confusion about how lookahead relates to tokenizing with greedy/non-greedy matching, I'd be more than glad. Beware, this is a slightly long post because it follows my thought process. I'm trying to write an ANTLR 3 grammar that allows me to match input such as: "identifierkeyword". I came up with a grammar like so in ANTLR 3.4:

    KEYWORD: 'keyword' ;
    IDENTIFIER : (options {greedy=false;}: (LOWCHAR|HIGHCHAR))+ ;
    /** lowercase letters */
    fragment LOWCHAR : 'a'..'z';
    /** uppercase letters */
    fragment HIGHCHAR : 'A'..'Z';
    parse: IDENTIFIER KEYWORD EOF…
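The core of the confusion can be reproduced without ANTLR. ANTLR lexers follow maximal munch: at each position the rule with the longest match wins, and a length tie goes to the rule declared first. This hand-rolled sketch (an illustration, not ANTLR's implementation) applies that policy to the two rules above:

```java
import java.util.*;

// Hand-rolled illustration of maximal-munch tokenizing (not ANTLR internals).
// Rules, in declaration order: KEYWORD = 'keyword', IDENTIFIER = [a-zA-Z]+.
// Longest match wins; on a length tie, the earlier rule wins.
public class MaximalMunch {
    public static List<String> tokenize(String input) {
        List<String> tokens = new ArrayList<>();
        int pos = 0;
        while (pos < input.length()) {
            int kw = matchKeyword(input, pos);    // length matched by KEYWORD
            int id = matchIdentifier(input, pos); // length matched by IDENTIFIER
            if (kw == 0 && id == 0) throw new IllegalStateException("no match at " + pos);
            if (kw >= id && kw > 0) { tokens.add("KEYWORD"); pos += kw; }
            else { tokens.add("IDENTIFIER"); pos += id; }
        }
        return tokens;
    }

    static int matchKeyword(String s, int pos) {
        return s.startsWith("keyword", pos) ? "keyword".length() : 0;
    }

    static int matchIdentifier(String s, int pos) {
        int end = pos;
        while (end < s.length() && Character.isLetter(s.charAt(end))) end++;
        return end - pos;
    }

    public static void main(String[] args) {
        // At position 0, IDENTIFIER can match all 17 characters of
        // "identifierkeyword", which beats KEYWORD's 0 - so the whole
        // input becomes a single IDENTIFIER token, never IDENTIFIER KEYWORD.
        System.out.println(tokenize("identifierkeyword"));
    }
}
```

This is why the parser rule `IDENTIFIER KEYWORD EOF` cannot see the split the author wants: the decision happens in the lexer, per token, before the parser runs.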

Replace token in ANTLR

偶尔善良 submitted on 2019-12-04 17:27:30
I want to replace a token using ANTLR. I tried TokenRewriteStream and replace, but it didn't work. Any suggestions?

    ANTLRStringStream in = new ANTLRStringStream(source);
    MyLexer lexer = new MyLexer(in);
    TokenRewriteStream tokens = new TokenRewriteStream(lexer);
    for (Object obj : tokens.getTokens()) {
        CommonToken token = (CommonToken) obj;
        tokens.replace(token, "replacement");
    }

The lexer finds all occurrences of single-line comments, and I want to replace them in the original source too.

EDIT: This is the grammar:

    grammar ANTLRTest;
    options { language = Java; }
    @header { package main; }…
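A likely source of the "didn't work" impression: ANTLR 3's TokenRewriteStream records replace operations lazily and never mutates the original tokens; the edited text only materializes when the stream is rendered with toString(). The self-contained mimic below (my own illustration, not the ANTLR class) shows that record-then-render behavior:

```java
import java.util.*;

// Self-contained mimic of how a rewrite stream applies edits: replace()
// only records an instruction, the original tokens stay untouched, and
// the edited text exists only in the rendered output of toString().
public class MiniRewriteStream {
    private final List<String> tokens;
    private final Map<Integer, String> replacements = new HashMap<>();

    public MiniRewriteStream(List<String> tokens) { this.tokens = tokens; }

    // record a replacement for the token at the given index
    public void replace(int index, String text) { replacements.put(index, text); }

    @Override public String toString() {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < tokens.size(); i++)
            out.append(replacements.getOrDefault(i, tokens.get(i)));
        return out.toString();
    }

    public static void main(String[] args) {
        MiniRewriteStream s = new MiniRewriteStream(
            Arrays.asList("int x;", " // old comment", "\nint y;"));
        s.replace(1, " /* replaced */");
        System.out.println(s);
    }
}
```

With the real TokenRewriteStream, the equivalent step is calling `tokens.toString()` after the replace loop to obtain the rewritten source; inspecting the token objects themselves will still show the original text.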

A Detailed Look at the Hive SQL Compilation Process

﹥>﹥吖頭↗ submitted on 2019-12-04 17:22:46
Hive is a data warehouse system built on Hadoop and is widely used at large companies. Meituan's data warehouse is also built on Hive: it runs nearly ten thousand Hive ETL flows every day and handles hundreds of GB of data storage and analysis daily, so Hive's stability and performance are critical to our data analysis. During several Hive upgrades we ran into problems large and small. By consulting the community and through our own efforts, we solved these problems and, along the way, gained a fairly deep understanding of how Hive compiles SQL into MapReduce. Understanding this process not only helped us fix some Hive bugs, it also helps us optimize Hive SQL, gives us firmer command of Hive, and enables us to customize features we need.

1. How MapReduce implements basic SQL operations

Before explaining in detail how SQL is compiled into MapReduce, let's first look at how the MapReduce framework implements basic SQL operations.

1.1 How Join is implemented

    select u.name, o.orderid from order o join user u on o.uid = u.uid;

In the map output value, data from different tables is marked with a tag; in the reduce phase, the tag identifies which table each record came from. The MapReduce flow is as follows (this describes only the most basic join implementation; there are other implementations).

1.2 How Group By is implemented

    select rank, isonline, count(*) from city group…
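The tag-based reduce-side join can be sketched without Hadoop. In this simplified Java illustration (my own sketch, not Hive code, with one order per uid for brevity), the "map" phase tags each record with its source table and the "reduce" phase pairs records sharing a join key:

```java
import java.util.*;

// Sketch of a reduce-side join: the map phase tags each record with its
// source table ("U" for user, "O" for order); the reduce phase crosses
// U-tagged and O-tagged records within each join key (uid).
public class ReduceSideJoin {
    public static List<String> join(Map<String, String> userNameByUid,
                                    Map<String, String> orderIdByUid) {
        // "map" + shuffle: group tagged values by join key
        Map<String, List<String[]>> shuffled = new TreeMap<>();
        userNameByUid.forEach((uid, name) ->
            shuffled.computeIfAbsent(uid, k -> new ArrayList<>()).add(new String[]{"U", name}));
        orderIdByUid.forEach((uid, oid) ->
            shuffled.computeIfAbsent(uid, k -> new ArrayList<>()).add(new String[]{"O", oid}));

        // "reduce": within each key, emit every U x O combination
        List<String> out = new ArrayList<>();
        for (List<String[]> group : shuffled.values())
            for (String[] u : group)
                for (String[] o : group)
                    if (u[0].equals("U") && o[0].equals("O"))
                        out.add(u[1] + "," + o[1]);
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> users = Map.of("1", "alice", "2", "bob");
        Map<String, String> orders = Map.of("1", "order-9");
        System.out.println(join(users, orders));
    }
}
```

Keys present in only one table (here uid 2) produce no output, matching inner-join semantics of the `select u.name, o.orderid` query above.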