pentaho-data-integration

Pentaho BI - MongoDB input Aggregation error due to recent MongoDB upgrade to 3.6

白昼怎懂夜的黑 submitted on 2021-02-08 07:46:30

Question: After a recent MongoDB upgrade to 3.6, the Pentaho Kettle MongoDB Input step's aggregation is unable to fetch data from MongoDB. The error message:

```
com.mongodb.MongoCommandException: Command failed with error 9:
'The 'cursor' option is required, except for aggregate with the explain argument'
on server localhost:2915. The full response is
{ "ok" : 0.0, "errmsg" : "The 'cursor' option is required, except for aggregate with the explain argument", "code" : 9, "codeName" : "FailedToParse" }
```

It seems
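The failure can be sketched in plain Python dicts (no driver involved): MongoDB 3.6 rejects an `aggregate` command that lacks a `cursor` field, and older client jars bundled with PDI do not send one. The collection name and pipeline below are hypothetical:

```python
# Command document an old client sends for an aggregation -- note: no "cursor".
# MongoDB 3.6 rejects this shape with error code 9 ("FailedToParse").
old_command = {
    "aggregate": "myCollection",                     # hypothetical collection
    "pipeline": [{"$match": {"status": "active"}}],  # hypothetical pipeline
}

# MongoDB 3.6 requires a cursor document, even an empty one:
fixed_command = {**old_command, "cursor": {}}

print("cursor" in fixed_command)
```

In PDI this is commonly resolved by moving to a PDI release (or MongoDB plugin/driver jar) that speaks the 3.6 wire protocol and adds the cursor document itself.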

Lookup - Pentaho Data Integration

ⅰ亾dé卋堺 submitted on 2021-01-29 06:10:30

Question: I have two files (App.csv and Access.csv). App.csv has one column called Application:

```
Application
App-A
App-B
```

Access.csv contains 3 columns (Application, entitlement, userid):

```
Application,entitlement,userid
App-A,ent-A,user1
App-A,ent-B,user1
App-B,ent-c,user2
App-B,ent-d,user1
App-C,ent-c,user2
App-C,ent-d,user1
```

I need to extract all the App-A and App-B details if they match the Application file column, and the output should look like below:

```
App-A,ent-A,user1
App-A,ent-B,user1
App-B,ent-c,user2
App-B,ent-d
```
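The lookup described above can be sketched in plain Python (outside PDI) to make the expected result concrete; the file contents are inlined here instead of being read from App.csv / Access.csv:

```python
import csv, io

app_csv = """Application
App-A
App-B
"""

access_csv = """Application,entitlement,userid
App-A,ent-A,user1
App-A,ent-B,user1
App-B,ent-c,user2
App-B,ent-d,user1
App-C,ent-c,user2
App-C,ent-d,user1
"""

# Build the set of applications to keep from App.csv.
wanted = {row["Application"] for row in csv.DictReader(io.StringIO(app_csv))}

# Keep only the Access.csv rows whose Application appears in App.csv.
matches = [row for row in csv.DictReader(io.StringIO(access_csv))
           if row["Application"] in wanted]

for row in matches:
    print(",".join(row.values()))
```

In PDI itself the same effect is typically achieved with a Stream Lookup (or a Merge Join plus a filter) between the two file inputs.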

Java Pentaho Exception MongoDB

痴心易碎 submitted on 2020-01-26 00:17:47

Question: I have designed a transformation in the Pentaho Data Integration UI tool and wrote Java code to execute the transformation. I followed the resource link below as-is:

```java
try {
    /**
     * Initialize the Kettle environment.
     */
    KettleEnvironment.init();

    /**
     * Create a Trans object to properly assign the ktr metadata.
     *
     * @filedb: the ktr file path to be executed.
     */
    TransMeta metadata = new TransMeta("Districts.ktr");
    Trans trans = new Trans(metadata);

    // Execute the transformation
    trans.execute(null);
```

Pentaho Data Integration (newest version) - Not detecting MySQL driver

假装没事ソ submitted on 2020-01-05 03:57:09

Question: I'm new to this tool and I'm trying to create a MySQL connection to a database, but when I press the 'Test' button this message appears:

```
Error connecting to database [MySQL (_configuracionesEF)] :
org.pentaho.di.core.exception.KettleDatabaseException:
Error occurred while trying to connect to the database
Driver class 'org.gjt.mm.mysql.Driver' could not be found, make sure the 'MySQL' driver (jar file) is installed.
org.gjt.mm.mysql.Driver
org.pentaho.di.core.exception.KettleDatabaseException:
```

(stderr) =256m""=="" was unexpected at this time in Pentaho Shell script

陌路散爱 submitted on 2019-12-25 00:44:54

Question: This question is almost identical to "pentaho: error (stderr) =256m""=="" was unexpected at this time. while calling kitchen command (dos command using shell script job entry) from job", but it differs in clarity, as I am providing exact details; that question is also 3 years old, which brings in version issues. Moreover, that question isn't answered yet, and there are no other solutions available on the internet except a few pages that only repeat the unanswered question. Hence I am posting this again with more

How to validate data in one CSV file against another CSV file using Pentaho?

六月ゝ 毕业季﹏ submitted on 2019-12-24 20:11:13

Question: I have two CSV files. One file has 10 rows and the other holds a list of data. What I want to do is check the data of one field of the first CSV and compare it against the other CSV file. How can I achieve this? Any help would be great.

Answer 1: The step you are looking for is named the Stream Lookup step. Read your CSV and the reference file, drop the two flows into a Stream Lookup, and set it up as follows: a) Lookup step = the step that reads the reference; b) Keys / field = the name of the field
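Outside Pentaho, the same field-against-reference check can be sketched in Python; the field name and sample values below are hypothetical:

```python
import csv, io

# Reference file: the values considered valid (hypothetical sample data).
reference_csv = """code
A1
B2
C3
"""

# Main file: rows whose "code" field must be checked against the reference.
main_csv = """id,code
1,A1
2,XX
3,B2
"""

valid_codes = {row["code"] for row in csv.DictReader(io.StringIO(reference_csv))}

# Flag each main row, mirroring what a Stream Lookup merge would produce:
# rows whose lookup key is not found get a "not found" marker.
checked = [{**row, "found": row["code"] in valid_codes}
           for row in csv.DictReader(io.StringIO(main_csv))]

for row in checked:
    print(row)
```

A Filter Rows step after the Stream Lookup then splits found from not-found rows.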

Pentaho date format issue

守給你的承諾、 submitted on 2019-12-24 10:34:30

Question: My input Excel sheet has a field with two different types of values in a column in the format YYYY/MM/DD. When I added the Excel sheet into Pentaho, the date-format column came in with a string datatype, as you can see below. After this I tried to integrate with Postgres, but I am unable to get the result; the error I got is attached below. Updated: I tried with the given timestamp format yyyy/MM/dd HH:mm:ss; this works fine for me, but this format
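For reference, converting a YYYY/MM/DD string (as the Excel Input / Select Values step must do with a format mask) looks like this in plain Python; the sample cell value is hypothetical:

```python
from datetime import datetime

raw = "2019/12/24"  # hypothetical cell value that arrived as a string

# The Pentaho mask yyyy/MM/dd corresponds to %Y/%m/%d here.
parsed = datetime.strptime(raw, "%Y/%m/%d")

# A date-only value has no time component, which is why a mask that
# demands HH:mm:ss fails when the cell holds only a date.
print(parsed.date().isoformat())  # → 2019-12-24
```

This is why the mask must match what the cell actually contains: yyyy/MM/dd for date-only values, yyyy/MM/dd HH:mm:ss only when a time part is present.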

Postgres to JSON - Pentaho 7.0 (Data Integration)

孤街浪徒 submitted on 2019-12-23 04:52:26

Question: I run a query against a Postgres database and retrieve two fields, "USER" and "CREATED" (DATE). I extract the year from the creation date, then traverse the records and, according to the year and the user, create a new JSON object. I would like to generate JSON with the following structure:

```
[
  { "year": 2015, "users": [ { "user": "Ana", "created": 4 }, { "user": "Pedro", "created": 7 } ] },
  { "year": 2016, "users": [ { "user": "Ana", "created": 4 }, { "user": "Pedro", "created": 7 } ] }
]
```

I create a modification with
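The grouping itself (flat user/year rows into the nested structure) can be sketched in Python; the rows below are hypothetical stand-ins for the Postgres result set:

```python
import json
from collections import defaultdict

# Hypothetical query result: (user, year, created-count) tuples.
rows = [("Ana", 2015, 4), ("Pedro", 2015, 7),
        ("Ana", 2016, 4), ("Pedro", 2016, 7)]

# Group the rows by year.
by_year = defaultdict(list)
for user, year, created in rows:
    by_year[year].append({"user": user, "created": created})

# Assemble the nested structure described in the question.
result = [{"year": year, "users": users}
          for year, users in sorted(by_year.items())]

print(json.dumps(result, indent=2))
```

In PDI the equivalent is usually a sort on year plus a grouping step (or a small User Defined Java/JavaScript step) ahead of the JSON Output step.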

Break string into columns using Regular Expression

陌路散爱 submitted on 2019-12-13 23:53:12

Question: I am new to regex; I want to break the given string into 6 parts using a regular expression. I am using the Pentaho Data Integration tool (an ETL tool). Given string:

```
1x 3.5 mL SST. 1x 4.0 mL gray cap cryovial.
```

Note: there are many more strings with the same format. I want the output as shown (screenshot not reproduced). Thanks in advance!

Answer 1: The single string datum you've given looks like it should match the regex pattern:

```
(\d*)x\s(\d*\.\d*)\smL\s(.*)\.\s(\d*)x\s(\d*\.\d*)\smL\s(.*)\.
```

You can use it with the Regex Evaluation step.

Answer 2: Use
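The pattern suggested above can be checked quickly in Python (Pentaho's Regex Evaluation step uses Java regex, but this particular pattern behaves the same in both engines):

```python
import re

# Six capture groups: count, volume, label -- twice.
pattern = r"(\d*)x\s(\d*\.\d*)\smL\s(.*)\.\s(\d*)x\s(\d*\.\d*)\smL\s(.*)\."
text = "1x 3.5 mL SST. 1x 4.0 mL gray cap cryovial."

match = re.fullmatch(pattern, text)
print(match.groups())
# → ('1', '3.5', 'SST', '1', '4.0', 'gray cap cryovial')
```

Each group then maps to one of the six output fields configured in the Regex Evaluation step.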

Merge Rows (diff) is comparing row by row, not one row against all rows of the other table

感情迁移 submitted on 2019-12-12 20:07:02

Question: I am comparing two sheets using Merge Rows (diff). 1st Excel sheet: (screenshot) 2nd Excel sheet: (screenshot) and my Pentaho transformation: (screenshot) The data preview shows that id 2.0 in the 2nd row is flagged as a new row, and in the 4th row the same data is flagged as deleted; they are supposed to be identical. How can this be achieved?

Answer 1: Merge Rows (diff) requires both input streams to be sorted by the merge keys (there's a warning about it when you edit the step's properties). Put a Sort Rows step in each stream ahead of the Merge Rows (diff) step.
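The sorting requirement can be illustrated in Python: a positional, row-by-row comparison only lines up matching keys after both streams are sorted. The keys and values here are hypothetical:

```python
# Two streams holding the same rows but in different order (hypothetical data).
reference = [(2, "b"), (1, "a"), (3, "c")]
compare   = [(1, "a"), (3, "c"), (2, "b")]

# Compared positionally without sorting, identical data looks changed:
unsorted_flags = [r == c for r, c in zip(reference, compare)]

# Sorting both streams by the merge key first, as Merge Rows (diff) requires,
# makes identical rows line up:
sorted_flags = [r == c for r, c in zip(sorted(reference), sorted(compare))]

print(unsorted_flags, sorted_flags)
```

This mirrors why unsorted input to Merge Rows (diff) produces spurious "new"/"deleted" flags for rows that are in fact identical.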