apache-zeppelin

Apache Zeppelin - Highcharts

╄→尐↘猪︶ㄣ Submitted on 2020-01-13 06:00:54
Question: I am trying out Apache Zeppelin and wanted to use Highcharts, so I thought of using %html. I have done this: print("%html <h3> Hello World!! </h3>") and it works perfectly. Now I have the code for Highcharts: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Untitled Document</title> <script …
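Building on the print("%html …") trick above, one way to embed a chart is to print the whole Highcharts fragment as a single %html string. This is a minimal sketch: the container id, the chart data, and the CDN script URL are illustrative placeholders, and whether externally loaded <script> tags execute can depend on the Zeppelin version.

```python
# Minimal sketch: emit a Highcharts fragment through Zeppelin's %html display.
# The div id, data, and CDN URL are illustrative placeholders.
html = """%html
<div id="container" style="width: 600px; height: 400px;"></div>
<script src="https://code.highcharts.com/highcharts.js"></script>
<script>
Highcharts.chart('container', {
    title: { text: 'Hello Highcharts' },
    series: [{ data: [1, 2, 3] }]
});
</script>"""
print(html)
```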

Timeout error: Error with 400 StatusCode: “requirement failed: Session isn't active.”

纵饮孤独 Submitted on 2020-01-06 04:26:08
Question: I'm using a Zeppelin v0.7.3 notebook to run PySpark scripts. In one paragraph, I am running a script to write data from a dataframe to a Parquet file in a Blob folder. The file is partitioned by country, and the dataframe has 99,452,829 rows. After the script has been running for 1 hour, an error is encountered: Error with 400 StatusCode: "requirement failed: Session isn't active." My default interpreter for the notebook is jdbc. I have read about TimeoutLifecycleManager and added it in the interpreter setting …
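For reference, the lifecycle-manager route mentioned above is configured through interpreter properties, but note that TimeoutLifecycleManager only shipped with Zeppelin 0.8+, so on 0.7.3 it has no effect. The property names below are as commonly documented and should be verified against your version; if the paragraph actually runs through Livy, the "Session isn't active" timeout is a Livy server setting rather than a Zeppelin one.

```
zeppelin.interpreter.lifecyclemanager.class = org.apache.zeppelin.interpreter.lifecycle.TimeoutLifecycleManager
zeppelin.interpreter.lifecyclemanager.timeout.threshold = 3h

# If Livy is in the path, raise the session timeout in livy.conf instead:
livy.server.session.timeout = 3h
```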

How to rename S3 files (not HDFS) in Spark Scala

99封情书 Submitted on 2020-01-05 05:32:06
Question: I have approximately 1 million text files stored in S3 and want to rename all files based on their folder names. How can I do that in Spark Scala? I am looking for some sample code; I am using Zeppelin to run my Spark script. I have tried the code below, as suggested in an answer: import org.apache.hadoop.fs._ val src = new Path("s3://trfsmallfffile/FinancialLineItem/MAIN") val dest = new Path("s3://trfsmallfffile/FinancialLineItem/MAIN/dest") val conf = sc.hadoopConfiguration // assuming sc = spark …
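As a sketch of just the naming scheme (the S3-specific parts are left out so it runs anywhere), the hypothetical helper below derives the new name from the parent folder. In the actual Spark Scala job you would list the files with FileSystem.listStatus and apply fs.rename(oldPath, newPath); keep in mind that on S3 a rename is a copy plus delete, so it is neither atomic nor cheap for a million files.

```python
# Hypothetical helper: build the new object key by prefixing the file name
# with its parent folder's name. Pure string logic only; the actual move
# would be done with Hadoop's FileSystem.rename from the Spark job.
def renamed_key(path: str) -> str:
    parts = path.rstrip("/").split("/")
    folder, filename = parts[-2], parts[-1]
    return "/".join(parts[:-1] + [f"{folder}-{filename}"])

print(renamed_key("s3://trfsmallfffile/FinancialLineItem/MAIN/part-00000"))
# -> s3://trfsmallfffile/FinancialLineItem/MAIN/MAIN-part-00000
```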

Apache Zeppelin 0.6.1: Run Spark 2.0 Twitter Stream App

送分小仙女□ Submitted on 2020-01-04 06:34:32
Question: I have a cluster with Spark 2.0 and Zeppelin 0.6.1 installed. Since the class TwitterUtils.scala was moved from the Spark project to Apache Bahir, I can't use TwitterUtils in my Zeppelin notebook anymore. Here are the snippets from my notebook. Dependency loading: %dep z.reset z.load("org.apache.bahir:spark-streaming-twitter_2.11:2.0.0") DepInterpreter(%dep) deprecated. Remove dependencies and repositories through GUI interpreter menu instead. DepInterpreter(%dep) deprecated. Load dependency through …
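Since the deprecation notice points away from %dep, the usual replacement is to declare the artifact in the Spark interpreter settings instead. The coordinates below are the ones from the question; both routes are sketches and should be checked against your Zeppelin version:

```
# Interpreter menu -> spark -> Dependencies (artifact field):
org.apache.bahir:spark-streaming-twitter_2.11:2.0.0

# Or as a Spark interpreter property:
spark.jars.packages = org.apache.bahir:spark-streaming-twitter_2.11:2.0.0
```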

Apache Zeppelin throwing NullPointerException error

ぐ巨炮叔叔 Submitted on 2020-01-01 19:17:10
Question: I am new to Zeppelin and trying to set it up on my system. So far I have done the following steps: downloaded Zeppelin from here; set JAVA_HOME in my system environment variables; went to zeppelin-0.7.3-bin-all\bin and ran zeppelin.cmd; and I am able to see the Zeppelin UI at http://localhost:8090. When I try to run the "load data into table" program mentioned in the Zeppelin tutorial -> Basic Features (Spark), it throws the following error: java.lang.NullPointerException at org.apache.zeppelin.spark …
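On Windows, one frequent (though by no means certain) cause of this NullPointerException is the Spark interpreter failing to start because environment variables are missing. The values below are hypothetical examples for a zeppelin-env.cmd; the real diagnosis should come from the interpreter log under the logs\ directory.

```
:: zeppelin-env.cmd -- hypothetical paths; adjust to your installation.
set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_161
set SPARK_HOME=C:\spark-2.1.0-bin-hadoop2.7
:: HADOOP_HOME should contain bin\winutils.exe
set HADOOP_HOME=C:\hadoop
```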

When registering a table using the %pyspark interpreter in Zeppelin, I can't access the table in %sql

拈花ヽ惹草 Submitted on 2020-01-01 10:16:13
Question: I am using Zeppelin 0.5.5. I found this code/sample for Python at http://www.makedatauseful.com/python-spark-sql-zeppelin-tutorial/ since I couldn't get my own to work with %pyspark. I have a feeling his %pyspark example worked because, if you use the original %spark Zeppelin tutorial, the "bank" table is already created. This code is in a notebook: %pyspark from os import getcwd # sqlContext = SQLContext(sc) # Removed with latest version I tested zeppelinHome = getcwd() bankText = sc.textFile …
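The usual fix is to make sure the %pyspark paragraph registers the DataFrame with registerTempTable("bank") against the same SQLContext that %sql uses (newer Zeppelin versions share one automatically, and creating your own SQLContext breaks that sharing). The pyspark-specific calls can't run outside a cluster, so the sketch below shows only the line-parsing step in plain Python; the field layout is guessed from the bank CSV in the tutorial.

```python
# Plain-Python sketch of the row-parsing step from the tutorial. In Zeppelin
# you would map this over sc.textFile(...), build a DataFrame from the rows,
# and call df.registerTempTable("bank") so a later %sql paragraph can see it.
def parse_bank_line(line: str) -> dict:
    fields = [f.strip('"') for f in line.split(";")]
    return {"age": int(fields[0]), "job": fields[1], "marital": fields[2]}

print(parse_bank_line('58;"management";"married"'))
# -> {'age': 58, 'job': 'management', 'marital': 'married'}
```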

com.fasterxml.jackson.databind.JsonMappingException: Jackson version is too old 2.5.3

六月ゝ 毕业季﹏ Submitted on 2020-01-01 09:22:38
Question: My OS is OS X 10.11.6, and I'm running Spark 2.0, Zeppelin 0.6, and Scala 2.11. When I run this code in Zeppelin I get an exception from Jackson; when I run the same code in spark-shell, there is no exception. val filestream = ssc.textFileStream("/Users/davidlaxer/first-edition/ch06") com.fasterxml.jackson.databind.JsonMappingException: Jackson version is too old 2.5.3 at com.fasterxml.jackson.module.scala.JacksonModule$class.setupModule(JacksonModule.scala:56) at com.fasterxml.jackson.module.scala …
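This error means jackson-module-scala found an older jackson-databind on the classpath than it requires, so the usual workaround is to make the classpath agree on one Jackson version. As an illustrative (unverified) example, pinning the pair below through the Spark interpreter's dependency settings aligns them with the 2.6.x line that Spark 2.0 ships:

```
com.fasterxml.jackson.core:jackson-databind:2.6.5
com.fasterxml.jackson.module:jackson-module-scala_2.11:2.6.5
```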

How to install libraries for Python in zeppelin-spark2 on HDP

懵懂的女人 Submitted on 2020-01-01 07:27:31
Question: I am using HDP version 2.6.4. Can you provide step-by-step instructions on how to install libraries into the following Python directory under spark2? sc.version (the Spark version) returns res0: String = 2.2.0.2.6.4.0-91. The spark2 interpreter setting name and value are as follows: zeppelin.pyspark.python: /usr/local/Python-3.4.8/bin/python3.4 The Python version and current libraries are: %spark2.pyspark import pip import sys sorted(["%s==%s" % (i.key, i.version) for i in pip.get_installed …
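Whatever you install must go into the exact interpreter that zeppelin.pyspark.python points at (and on every worker node), not into the system default Python. The sketch below only builds the command to run, using the path from the setting above; the package name is just an example:

```python
# Build the install command for the specific interpreter Zeppelin uses.
# Running "python -m pip" guarantees the package lands in that python's
# site-packages rather than the system default's.
python_bin = "/usr/local/Python-3.4.8/bin/python3.4"
cmd = [python_bin, "-m", "pip", "install", "numpy"]
print(" ".join(cmd))
# -> /usr/local/Python-3.4.8/bin/python3.4 -m pip install numpy
```

Run the printed command on each node (e.g. via subprocess or a shell), then restart the spark2 interpreter so the paragraph picks up the new library.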

No interpreters available in Zeppelin

不羁的心 Submitted on 2019-12-31 17:52:39
Question: I have just installed the following on my Mac (Yosemite 10.10.3): Oracle Java 1.8 update 45; Scala 2.11.6; Spark 1.4 (precompiled release: http://d3kbcqa49mib13.cloudfront.net/spark-1.4.0-bin-hadoop2.6.tgz); and Zeppelin from source (https://github.com/apache/incubator-zeppelin). No additional config; I just created zeppelin-env.sh and zeppelin-site.xml from the templates, with no edits. I followed the installation guidelines at https://zeppelin.incubator.apache.org/docs/install/install.html and have built …