hue

Convert String to Timestamp in Hive HQL

Submitted by 十年热恋 on 2019-12-12 17:25:32
Question: I have a string like "08/03/2018 02:00:00" which I'm trying to convert into a timestamp value. I'm using the code below:

unix_timestamp("08/03/2018 02:00:00", "yyyy-MM-dd'T'HH:mm:ss.SSSXXX")

When I use the above code it returns NULL. How can I convert this string to a timestamp in the Hive/Hue editor?

Answer 1: The format you specified does not match the actual timestamp string. If 08/03 in your example is dd/MM, then:

select unix_timestamp("08/03/2018 02:00:00", "dd/MM/yyyy HH:mm:ss")
OK
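The same failure mode can be reproduced outside Hive. The sketch below uses Python's `datetime.strptime` (whose `%`-directives differ from the Java SimpleDateFormat patterns Hive uses) purely to illustrate that parsing only succeeds when the pattern actually describes the input; the directive strings here are illustrative assumptions, not Hive syntax.

```python
from datetime import datetime

s = "08/03/2018 02:00:00"

# Pattern that matches the layout: day/month/year hour:minute:second.
dt = datetime.strptime(s, "%d/%m/%Y %H:%M:%S")
print(dt.isoformat())  # 2018-03-08T02:00:00

# A pattern that does not describe the input raises ValueError,
# analogous to Hive's unix_timestamp() returning NULL on a mismatch.
try:
    datetime.strptime(s, "%Y-%m-%dT%H:%M:%S")
    mismatch_failed = False
except ValueError:
    mismatch_failed = True
print("mismatched pattern failed:", mismatch_failed)
```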

Sqoop Export Oozie Workflow Fails with File Not Found, Works when Run from the Console

Submitted by 怎甘沉沦 on 2019-12-12 12:32:07
Question: I have a Hadoop cluster with 6 nodes. I'm pulling data out of MSSQL and back into MSSQL via Sqoop. Sqoop import commands work fine, and I can run a sqoop export command from the console (on one of the Hadoop nodes). Here's the shell script I run:

SQLHOST=sqlservermaster.local
SQLDBNAME=db1
HIVEDBNAME=db1
BATCHID=
USERNAME="sqlusername"
PASSWORD="password"
sqoop export --connect 'jdbc:sqlserver://'$SQLHOST';username='$USERNAME';password='$PASSWORD';database='$SQLDBNAME'' --table ExportFromHive
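A frequent source of console-versus-Oozie differences is how the connect string is quoted and interpolated by the shell. Assembling it in one place makes it easy to inspect before it ever reaches Sqoop. A minimal sketch, where the host, database, and credentials are simply the placeholder values from the question:

```python
def sqlserver_jdbc_url(host, database, username, password):
    """Assemble the JDBC connect string that the shell script above
    builds via variable interpolation (all values are placeholders)."""
    return (
        f"jdbc:sqlserver://{host};"
        f"username={username};password={password};database={database}"
    )

url = sqlserver_jdbc_url("sqlservermaster.local", "db1", "sqlusername", "password")
print(url)
```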

Hue 500 server error

Submitted by 限于喜欢 on 2019-12-12 04:11:51
Question: I am creating a simple MapReduce job. After submitting it, it gives the error below. Please suggest how to fix this issue.

Answer 1: I know I am late to answer, but I have noticed that this usually gets resolved if you clear your cookies.

Source: https://stackoverflow.com/questions/37207387/hue-500-server-error

Not able to export HBase table into CSV file using Hue Pig script

Submitted by 若如初见. on 2019-12-12 02:43:28
Question: I have installed Apache Ambari and configured Hue. I want to export HBase table data into a CSV file using a Pig script, but I am getting the following error:

2017-06-03 10:27:45,518 [ATS Logger 0] INFO org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Exception caught by TimelineClientConnectionRetry, will try 30 more time(s). Message: java.net.ConnectException: Connection refused
2017-06-03 10:27:45,703 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is

How can I pass a dynamic date as a parameter in a Hive Server action

Submitted by 只谈情不闲聊 on 2019-12-12 01:28:54
Question: In Oozie, I have used a Hive action in Hue, and in the same action I used the parameter options to supply date parameters. Here I want to provide dynamic date parameters, such as yesterday's date and the day before yesterday. How can I generate those dates, and how can I pass them as parameters? My HQL is:

CREATE TABLE IF NOT EXISTS tmp_table as select * from emptable where day>=${fromdate} and day<=${todate}

My Hive Server action contains: a. the HQL script b. two parameter options, one for each date, such as fromdate = ,
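No answer appears above, but the two dates themselves are simple to derive. A minimal sketch in Python (the ISO yyyy-MM-dd output format is an assumption; in practice Oozie can also compute these values itself with its EL date functions in the workflow definition):

```python
from datetime import date, timedelta

def window(today):
    """Return (fromdate, todate) = (day before yesterday, yesterday)."""
    yesterday = today - timedelta(days=1)
    day_before = today - timedelta(days=2)
    return day_before.isoformat(), yesterday.isoformat()

# Fixed reference date for a reproducible example.
fromdate, todate = window(date(2019, 12, 12))
print(fromdate, todate)  # 2019-12-10 2019-12-11
```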

Impala time in Hue UI

Submitted by 丶灬走出姿态 on 2019-12-11 14:27:52
Question: I am trying to estimate the time required by queries, from simple to complex, in Impala using the Hue UI. Is it possible to know the time needed to complete a query through the UI?

Answer 1: Impala and Hive only provide a general estimate of progress. Hue could try to display an end time by extrapolating from the start time and the current progress. Feel free to follow https://issues.cloudera.org/browse/HUE-1219.

Answer 2: Although it seems not to be possible in the Hue UI, in the command shell it's the

Configuring HBase for Hue

Submitted by China☆狼群 on 2019-12-11 11:47:09
1. Modify the HBase configuration

cd /export/servers/hbase-1.2.0-cdh5.14.0/conf/
vim hbase-site.xml

<property>
  <name>hbase.thrift.support.proxyuser</name>
  <value>true</value>
</property>
<property>
  <name>hbase.regionserver.thrift.http</name>
  <value>true</value>
</property>

Copy the file to the other nodes:

scp hbase-site.xml node02:/$PWD
scp hbase-site.xml node03:/$PWD

2. Modify the Hadoop configuration

cd /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop
vim core-site.xml

<property>
  <name>hadoop.proxyuser.hbase.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hbase.groups</name>
  <value>*</value>
</property>

Distribute the file to the other nodes:

scp core-site.xml
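Before restarting the services, it can help to sanity-check that the edited files actually carry the intended properties. A small sketch using Python's standard `xml.etree` (the inline XML string stands in for reading the real `hbase-site.xml` from disk and is an assumption for illustration):

```python
import xml.etree.ElementTree as ET

# Inline copy of the snippet added to hbase-site.xml above.
HBASE_SITE = """
<configuration>
  <property>
    <name>hbase.thrift.support.proxyuser</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.regionserver.thrift.http</name>
    <value>true</value>
  </property>
</configuration>
"""

def properties(xml_text):
    """Return {name: value} for every <property> in a Hadoop-style config file."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value") for p in root.iter("property")}

props = properties(HBASE_SITE)
print(props)
```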

Error running Livy Spark server in Hue

Submitted by 佐手、 on 2019-12-11 10:46:07
Question: When I run the following command:

hue livy_server

the following error is shown:

Failed to run spark-submit executable: java.io.IOException: Cannot run program "spark-submit": error=2, No such file or directory

I have set SPARK_HOME=/home/amandeep/spark

Answer 1: If you run Livy in local mode, it will expect to find the spark-submit script in its environment. Check your shell PATH variable.

Source: https://stackoverflow.com/questions/31014656/error-in-running-livy-spark-server-in-hue
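The PATH check the answer suggests can be scripted: `shutil.which` performs the same executable lookup the JVM does when it spawns `spark-submit`. A small diagnostic sketch (the `$SPARK_HOME` value is just the path mentioned in the question):

```python
import os
import shutil

# Livy resolves spark-submit via PATH; shutil.which does the same lookup.
found = shutil.which("spark-submit")
if found is None:
    # Typical fix: put $SPARK_HOME/bin on PATH before starting Livy.
    spark_home = os.environ.get("SPARK_HOME", "/home/amandeep/spark")
    print(f"spark-submit not on PATH; try adding {spark_home}/bin to PATH")
else:
    print(f"spark-submit resolved to {found}")
```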

Apache Hue: Installation Steps

Submitted by 青春壹個敷衍的年華 on 2019-12-11 08:44:46
Installing Hue

Step 1: Upload and extract the installation package

Hue can be installed in several ways, including from an rpm package, from a tar.gz package, or via Cloudera Manager. Here we use the tar.gz package.

The Hue tarball can be downloaded from: http://archive.cloudera.com/cdh5/cdh/5/

We use the CDH 5.14.0 release; the exact download URL is http://archive.cloudera.com/cdh5/cdh/5/hue-3.9.0-cdh5.14.0.tar.gz

cd /export/servers/
tar -zxvf hue-3.9.0-cdh5.14.0.tar.gz

Step 2: Prepare for compilation

Install the required dependency packages (network access needed):

yum install -y asciidoc cyrus-sasl-devel cyrus-sasl-gssapi cyrus-sasl-plain gcc gcc-c++ krb5-devel libffi-devel libxml2-devel libxslt-devel make openldap-devel python-devel sqlite-devel gmp-devel

Store documents (.pdf, .doc and .txt files) in MaprDB

Submitted by 会有一股神秘感。 on 2019-12-11 08:25:11
Question: I need to store documents such as .pdf, .doc and .txt files in MapR-DB. I saw one example in HBase where files are stored in binary and retrieved as files in Hue, but I'm not sure how it could be implemented. Any idea how a document can be stored in MapR-DB?

Answer 1: First of all, I'm not familiar with MapR-DB, as I'm using Cloudera, but I do have experience storing many types of objects in HBase as byte arrays, as mentioned below. The most primitive way of storing in HBase or any other db is byte
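The answer is cut off above, but the byte-array approach it describes can be sketched without any HBase/MapR-DB client: read the file's raw bytes, store them under a row key, and write them back out on retrieval. The in-memory dict below stands in for the table and is purely an assumption for illustration; a real implementation would issue put/get calls against the database instead.

```python
import os
import tempfile

store = {}  # stands in for an HBase/MapR-DB table: row key -> cell bytes

def put_document(row_key, path):
    """Store a file's raw bytes under a row key (column families omitted)."""
    with open(path, "rb") as f:
        store[row_key] = f.read()

def get_document(row_key, out_path):
    """Materialize the stored bytes back into a file."""
    with open(out_path, "wb") as f:
        f.write(store[row_key])

# Round-trip a small "document".
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "note.txt")
    dst = os.path.join(d, "note_copy.txt")
    with open(src, "wb") as f:
        f.write(b"hello MapR-DB")
    put_document("doc:note", src)
    get_document("doc:note", dst)
    with open(dst, "rb") as f:
        restored = f.read()
print(restored)  # b'hello MapR-DB'
```

The same round-trip works for .pdf or .doc files, since nothing here interprets the bytes.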