pentaho

Unable to connect to HDFS using PDI step

I have successfully configured Hadoop 2.4 in an Ubuntu 14.04 VM from a Windows 8 system. The Hadoop installation is working absolutely fine, and I am also able to view the NameNode from my Windows browser. So my host name is ubuntu and the HDFS port is 9000 (correct me if I am wrong). core-site.xml:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://ubuntu:9000</value>
</property>

The issue is connecting to HDFS from my Pentaho Data Integration tool. PDI version: 4.4.0. Step used: Hadoop Copy Files. Please help me connect to HDFS using PDI.
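One frequent culprit (an assumption on my part, not confirmed by the post): PDI's Big Data plugin must point at a Hadoop configuration (shim) compatible with the cluster, and the VM's hostname must resolve from the Windows host. A minimal sketch of those two settings, with the shim name and IP address being illustrative:

  # plugins/pentaho-big-data-plugin/plugin.properties (default PDI layout assumed)
  # Pick the shim matching your cluster. PDI 4.4 ships only older shims,
  # so Hadoop 2.4 may need a newer PDI release or an updated shim package.
  active.hadoop.configuration=hadoop-20

  # C:\Windows\System32\drivers\etc\hosts on the Windows machine:
  # make the name in hdfs://ubuntu:9000 resolvable (IP is hypothetical)
  192.168.56.101  ubuntu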

Using variable names for a database connection in Pentaho Kettle

I am working with PDI Kettle. Can we define a variable and use it in a database connection, so that if I ever need to change the connection in multiple transformations, I only change the variable value in the kettle.properties file?

Yes. Just use variables in the Database Connection dialog, for instance ${DB_HostName} and ${DB_Name}, then put the values in your kettle.properties: DB_HostName=localhost. You can see which fields support variables by the S in the blue diamond.

Source: https://stackoverflow.com/questions/36052615/using-variable-names-for-a-database-connection-in-pentaho-kettle
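A minimal sketch of the pattern; the property names and values below are illustrative, not from the original post:

  # ~/.kettle/kettle.properties
  DB_HostName=localhost
  DB_Port=3306
  DB_Name=sales_dw

  # In the Database Connection dialog, enter:
  #   Host Name:     ${DB_HostName}
  #   Port Number:   ${DB_Port}
  #   Database Name: ${DB_Name}

Changing DB_HostName in kettle.properties then repoints every transformation that uses the shared connection, with no edits to the transformations themselves.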

How to retrieve OUT parameter from MYSQL stored procedure to stream in Pentaho Data Integration (Kettle)?

I am unable to get the OUT parameter of a MySQL procedure call into the output stream with the procedure call step of Pentaho Kettle. I'm having big trouble retrieving an OUT parameter from a MySQL stored procedure into the stream. I think it may be a bug, because it only occurs with an Integer OUT parameter; it works with a String OUT parameter. The exception I get is: Invalid value for getLong() - ' I think the parameters are correctly set, as you can see in the ktr. You can replicate the bug this way:

Schema:

create schema if not exists test;
use test;
DROP PROCEDURE IF EXISTS procedure_test;
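The post's script is cut off after the DROP PROCEDURE line. A minimal procedure of the kind described (an integer OUT parameter), reconstructed as an assumption to make the repro concrete:

  -- Hypothetical completion of the truncated repro script.
  DELIMITER //
  CREATE PROCEDURE procedure_test(IN p_in VARCHAR(32), OUT p_out INT)
  BEGIN
    -- Any integer result will do; the reported failure is in reading it back.
    SET p_out = CHAR_LENGTH(p_in);
  END //
  DELIMITER ;

  -- From a plain MySQL client the OUT value reads back fine:
  CALL procedure_test('hello', @result);
  SELECT @result;  -- 5

In Kettle, @result corresponds to the Integer OUT field that reportedly triggers "Invalid value for getLong()".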

Data loading is slow while using “Insert/Update” step in pentaho

Question: Data loading is slow while using the "Insert/Update" step in Pentaho 4.4.0.

I am using Pentaho 4.4.0. While using the "Insert/Update" step in Kettle, the data load is far slower than loading directly into MySQL. This step scans the table for each incoming record before inserting; if the record exists, it does an update. So what can be done to optimize the performance of "Insert/Update"? The process speed is 4 r/s, and in total my records number above 1 lakh (100,000)... The process takes 2
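A common first fix (general MySQL tuning advice, not from the truncated post) is to index the key columns the step looks up on, since Insert/Update fires one lookup query per incoming row:

  -- Hypothetical target table and key; use the lookup keys
  -- configured in the Insert/Update step's key grid.
  CREATE INDEX idx_customer_key ON customer (customer_id);

If the target permits it, splitting the stream into plain inserts (Table Output) and updates (Update step) also tends to be much faster than a single Insert/Update step.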

Pass parameter to pentaho CDE report

I created a CDE parameter report, and I want to pass a parameter to it through the URL. My CDE report link is:

http://localhost:8080/pentaho/content/pentaho-cdf-dd/Render?solution=demo&path=&file=pass_parameter.wcdf&userid=joe&password=password

and my CDA query URL is:

http://localhost:8080/pentaho/content/cda/doQuery?path=demo/pass_parameter.cda&dataAccessId=jdbc&paramdeviceType=deviceType

In the above CDA query URL, if I pass deviceType as below:

http://localhost:8080/pentaho/content/cda/doQuery?path=demo/pass_parameter.cda&dataAccessId=jdbc&paramdeviceType=Linux

it shows me a JSON formatted
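For the dashboard URL itself, the usual CDF convention (stated as an assumption, since the post is cut off) is the same param prefix: append paramdeviceType to the Render URL and map it to a simple parameter named deviceType in CDE:

  http://localhost:8080/pentaho/content/pentaho-cdf-dd/Render?solution=demo&path=&file=pass_parameter.wcdf&paramdeviceType=Linux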

Pentaho server install

Server [localhost]:
Database [postgres]:
Port [5432]:
Username [postgres]:
Password for user postgres:
psql (12.0)
Type "help" for help.

postgres=# \i E:/pentaho/server/pentaho-server/data/postgresql/create_quartz_postgresql.sql
....
....
quartz=> \c postgres
postgres-> \c - postgres
...
postgres-# \i E:/pentaho/server/pentaho-server/data/postgresql/create_jcr_postgresql.sql
...
postgres-# \i E:/pentaho/server/pentaho-server/data/postgresql/create_repository_postgresql.sql

Source: https://my.oschina.net/yunjie/blog/3129773
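The same three scripts can be run non-interactively with psql's -f option; a sketch assuming the paths shown in the transcript:

  psql -U postgres -f E:/pentaho/server/pentaho-server/data/postgresql/create_quartz_postgresql.sql
  psql -U postgres -f E:/pentaho/server/pentaho-server/data/postgresql/create_jcr_postgresql.sql
  psql -U postgres -f E:/pentaho/server/pentaho-server/data/postgresql/create_repository_postgresql.sql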

Can we compare saiku with Pentaho Analyzer?

Question: I'm currently in an internship and I have to create a whole BI application. I think I'll use Pentaho, and I have to use only open-source components. I know that Pentaho Analyzer is not free. My question is: is Saiku an equivalent of Analyzer? If yes, can I use it with Pentaho instead of Analyzer? Thanks.

Answer 1: Yes, of course. Both tools use the same underlying OLAP engine, Mondrian. Saiku is essentially the same as Analyzer, providing many of the same features; however it has a different

JDBC converting Timestamp to NULL (zeroDateTimeBehavior issue)

I'm using Pentaho Data Integration (Table Input step) to pull data from a MySQL server. A couple of fields are of type Timestamp, and Pentaho keeps spewing out errors because of zero timestamps (0000-00-00 00:00:00.000000). I added zeroDateTimeBehavior=convertToNull to the connection parameters, which should take care of the bad timestamps, but it is converting all of my Timestamp data to NULL. One reason I think this may be happening is that some of my 'good' data is represented as, for example, 2013-03-14 04:55:09.000000. While most of the date is 'good data', the fractional
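For reference, the parameter goes on the JDBC URL, or in PDI as an option on the database connection (host and database names here are illustrative):

  jdbc:mysql://localhost:3306/mydb?zeroDateTimeBehavior=convertToNull

In the Database Connection dialog this is equivalent to adding an option with parameter zeroDateTimeBehavior and value convertToNull. Per the MySQL Connector/J documentation this converts only all-zero dates, so well-formed values such as 2013-03-14 04:55:09.000000 should not be affected by it.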

Migrating the Pentaho 6.1 repository to MySQL

1. Software environment

Operating system: Windows 10, 64-bit
Pentaho version: biserver-ce-6.1.0.1-196
MySQL version: 5.5.15 MySQL Community Server (GPL)
JDK version: Java 1.7.0_80

2. Run the official SQL script files

1) Locate the SQL script directory
2) Run the scripts
3) Check the results

3. Configuration changes

1) Modify the quartz repository configuration
File location: \biserver-ce\pentaho-solutions\system\quartz
Changes:

org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
#org.quartz.jobStore.misfireThreshold = 60000
#org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
#org.quartz.jobStore.useProperties = false
#org.quartz.jobStore.dataSource = myDS
#org.quartz.jobStore.tablePrefix = QRTZ5_
#org
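For step 2, the MySQL versions of the official scripts ship with the CE server (the data/mysql5 path below is assumed from the biserver-ce layout) and can be run from the command line:

  mysql -u root -p < biserver-ce/data/mysql5/create_quartz_mysql.sql
  mysql -u root -p < biserver-ce/data/mysql5/create_jcr_mysql.sql
  mysql -u root -p < biserver-ce/data/mysql5/create_repository_mysql.sql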

Pentaho 6.1 internationalization, part 2: internationalizing Pentaho CDE with resource files

This is the second part of the Pentaho internationalization series, covering CDE. The underlying principles were covered briefly in part one; if they are unfamiliar, please read my earlier article: https://my.oschina.net/TaoPengFeiBlog/blog/797072

1. Overview

Suppose we want to internationalize a dashboard in Chinese and English. With i18n we write three special properties files, placed in the same directory as the dashboard.

1) Every resource file should follow one of these three naming rules:

messages.properties: a base resource file with no language-specific definitions;
messages_<language>.properties: a resource file named with a lowercase language code, e.g. 'messages_en.properties', 'messages_zh.properties';
messages_<language>-<COUNTRY>.properties: lowercase language code plus uppercase country code, e.g. 'messages_zh-CN.properties'.

2) Override rules for keys shared between resource files:

Every key in a messages_<language>.properties file overrides the same key in messages.properties;
Every key in messages
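A minimal sketch of the three files; the key and its translations are illustrative, not from the article:

  # messages.properties (base file, no language-specific text)
  title=Sales Dashboard

  # messages_zh.properties (applies to any Chinese locale)
  title=销售仪表盘

  # messages_zh-CN.properties (overrides messages_zh.properties for zh-CN)
  title=销售仪表盘（简体中文）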