scala

Creating a Java Object in Scala

Submitted by 最后都变了 on 2021-01-27 05:51:46

Question: I have a Java class "Listings". I use it in my Java MapReduce job as below:

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      Listings le = new Listings(value.toString());
      ...
    }

I want to run the same job on Spark, so I am now writing it in Scala. I imported the Java class:

    import src.main.java.lists.Listings

I want to create a Listings object in Scala. I am doing this:

    val file_le = sc.textFile("file// Path to file")
    Listings lists = new
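
The last line above is Java declaration syntax, which does not compile in Scala. A minimal sketch of the Scala equivalent, assuming `Listings` has a single-`String` constructor as in the MapReduce job and `sc` is an existing `SparkContext`:

```scala
import src.main.java.lists.Listings

// `val` with an inferred type replaces Java's `Listings lists = ...` form;
// `new` instantiates the Java class exactly as in Java.
val fileLe = sc.textFile("file:///path/to/file")
val listings = fileLe.map(line => new Listings(line)) // one Listings per input line
```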

Compare case class fields with sub-fields of another case class in Scala

Submitted by 余生长醉 on 2021-01-27 05:43:11

Question: I have the following 3 case classes:

    case class Profile(name: String,
                       age: Int,
                       bankInfoData: BankInfoData,
                       userUpdatedFields: Option[UserUpdatedFields])

    case class BankInfoData(accountNumber: Int,
                            bankAddress: String,
                            bankNumber: Int,
                            contactPerson: String,
                            phoneNumber: Int,
                            accountType: AccountType)

    case class UserUpdatedFields(contactPerson: String,
                                 phoneNumber: Int,
                                 accountType: AccountType)

This is just an enum, but I am adding it anyway:

    sealed trait AccountType extends EnumEntry
    object
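
The excerpt is cut off, but the shared field names suggest comparing each `UserUpdatedFields` value against the corresponding field of `BankInfoData`. A minimal hand-rolled sketch; the `FieldDiff` type and the one-entry-per-changed-field shape are my assumptions, not from the question:

```scala
final case class FieldDiff(field: String, old: Any, updated: Any)

// Compare the three overlapping fields explicitly, keeping only those that differ.
def changedFields(bank: BankInfoData, upd: UserUpdatedFields): List[FieldDiff] =
  List(
    ("contactPerson", bank.contactPerson, upd.contactPerson),
    ("phoneNumber",   bank.phoneNumber,   upd.phoneNumber),
    ("accountType",   bank.accountType,   upd.accountType)
  ).collect { case (name, old, updated) if old != updated =>
    FieldDiff(name, old, updated)
  }
```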

How to use a Scala implicit class in Java

Submitted by 一个人想着一个人 on 2021-01-27 04:49:57

Question: I have a Scala implicit class from the RecordService API, which I want to use in a Java file:

    package object spark {
      implicit class RecordServiceContext(ctx: SparkContext) {
        def recordServiceTextFile(path: String): RDD[String] = {
          new RecordServiceRDD(ctx).setPath(path)
            .map(v => v(0).asInstanceOf[Text].toString)
        }
      }
    }

Now I am trying to import this in a Java file using the import below:

    import com.cloudera.recordservice.spark.*;

But I am not able to use recordServiceTextFile("path") from
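
Implicit resolution happens at Scala compile time, so Java never sees `recordServiceTextFile` as a method on `SparkContext`; the wrapper class has to be constructed explicitly. A sketch of what the implicit sugar expands to, written in Scala and assuming the package object lives under `com.cloudera.recordservice` (from Java, the same explicit construction is awkward because a class inside a package object compiles to a synthetic name like `package$RecordServiceContext`, so moving it into a plain object is the cleaner option):

```scala
import org.apache.spark.SparkContext
import com.cloudera.recordservice.spark.RecordServiceContext

// This is exactly what `ctx.recordServiceTextFile(path)` desugars to.
def readExplicitly(sc: SparkContext, path: String) =
  new RecordServiceContext(sc).recordServiceTextFile(path)
```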

Implementing Multilevel Java Interfaces in Scala

Submitted by 爷，独闯天下 on 2021-01-27 04:42:33

Question: I have the following hierarchy in Java for my interface:

    public interface Identifiable<T extends Comparable<T>> extends Serializable {
        public T getId();
    }

    public interface Function extends Identifiable {
        public String getId();
    }

    public abstract class Adapter implements Function {
        public abstract String getId();
    }

When I try to implement Adapter in Scala as follows:

    class MultiGetFunction extends Adapter {
      def getId() : String = this.getClass.getName
    }

I get the following error: Multiple markers
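
Note that `Function extends Identifiable` refers to the raw type `Identifiable`, and Scala has no concept of Java raw types, which is a frequent cause of errors like this. A sketch of the same hierarchy with the type parameter carried through, written here in Scala; the equivalent fix in the Java source would be `Function extends Identifiable<String>`:

```scala
trait Identifiable[T <: Comparable[T]] extends Serializable {
  def getId: T
}

// Fix the type parameter instead of leaving it raw.
trait Function extends Identifiable[String]

abstract class Adapter extends Function

class MultiGetFunction extends Adapter {
  def getId: String = getClass.getName
}
```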

Are there algebraic data types outside of sum and product?

Submitted by 倾然丶 夕夏残阳落幕 on 2021-01-27 04:32:22

Question: By most definitions, the common or basic algebraic data types in Haskell or Scala are sums and products. Examples: 1, 2. Sometimes a definition simply says that algebraic data types are sums and products, perhaps for simplicity. However, such definitions leave the impression that other algebraic data types are possible, and that sum and product are just the most useful ones for describing selection or combination of elements. Given that there are subtraction, division, and raising-to-an-integer-power operations in a basic
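
One operation beyond sum and product does show up directly in these languages: exponentiation, since a total function `A => B` has |B|^|A| inhabitants. A small Scala illustration of the correspondence (the counts in the comments assume finite types):

```scala
// Sum: Either[A, B] has |A| + |B| values.
val sum: Either[Boolean, Boolean] = Left(true)   // 2 + 2 = 4 possible values

// Product: (A, B) has |A| * |B| values.
val product: (Boolean, Boolean) = (true, false)  // 2 * 2 = 4 possible values

// Exponential: A => B has |B|^|A| values.
val exponent: Boolean => Boolean = x => !x       // 2^2 = 4 possible values
```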

How can an implicit be unimported from the Scala REPL?

Submitted by 前提是你 on 2021-01-27 04:02:02

Question: Is it possible to unimport an implicit from the REPL? Say I do something like this:

    scala> import scala.math.BigInt._
    import scala.math.BigInt._

    scala> :implicits
    /* 2 implicit members imported from scala.math.BigInt */
    /* 2 defined in scala.math.BigInt */
    implicit def int2bigInt(i: Int): scala.math.BigInt
    implicit def long2bigInt(l: Long): scala.math.BigInt

and then decide that it was all a big mistake. How can I remove those implicits from the current scope? My current technique is aborting
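
One common workaround is to shadow the imported name with a plain definition, since a same-named definition in a nearer scope hides the import. Sketched below with a local conversion rather than BigInt, whose conversions also live in its companion object and therefore remain in implicit scope regardless of imports; `Conversions` and `strLen` are hypothetical names:

```scala
scala> object Conversions { implicit def strLen(s: String): Int = s.length }

scala> import Conversions._
scala> val n: Int = "four"    // compiles via strLen: n = 4

scala> def strLen: Unit = ()  // a plain def with the same name shadows the implicit

scala> val m: Int = "four"    // now a type mismatch: the conversion is hidden
```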

Calling Scala code in PySpark for XSLT transformations

Submitted by ぃ、小莉子 on 2021-01-27 02:48:17

Question: This might be a long shot, but I figured it couldn't hurt to ask. I'm attempting to use Elsevier's open-sourced spark-xml-utils package in PySpark to transform some XML records with XSLT. I've had a bit of success with some exploratory code getting a transformation to work:

    # open XSLT processor from spark's jvm context
    with open('/tmp/foo.xsl', 'r') as f:
        proc = sc._jvm.com.elsevier.spark_xml_utils.xslt.XSLTProcessor.getInstance(f.read())

    # transform XML record with 'proc'
    with open('/tmp/bar
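
For reference, here are the same calls made natively in Scala instead of through py4j. Only `getInstance` appears in the snippet above; the `transform` method and the record path are my assumptions about how spark-xml-utils is used:

```scala
import scala.io.Source
import com.elsevier.spark_xml_utils.xslt.XSLTProcessor

// Build the processor from the stylesheet, then apply it to one XML record.
val stylesheet  = Source.fromFile("/tmp/foo.xsl").mkString
val proc        = XSLTProcessor.getInstance(stylesheet)
val record      = Source.fromFile("/tmp/record.xml").mkString // hypothetical path
val transformed = proc.transform(record)
```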

Why can't I use a case object as a polymorphic type?

Submitted by 隐身守侯 on 2021-01-27 02:16:50

Question: The following code doesn't compile:

    case object O

    trait Show[A] { def show(a: A): String }

    class OShow extends Show[O] {
      override def show(a: O): String = "ahoy"
    }

The compilation error is:

    Error: not found: type O
    class OShow extends Show[O] {
                             ^

So, how can a case object be used as a polymorphic type?

Answer 1: As @endeneu mentioned, for case objects you need to use .type, also called the singleton type annotation:

    class OShow extends Show[O.type] {
      override def show(a: O.type): String = "ahoy"
    }

Answer 2:
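
A quick usage check of the fix from Answer 1 (the instance and the call are my own sketch, not from the thread):

```scala
val s: Show[O.type] = new OShow
println(s.show(O)) // prints "ahoy"; O is the only value of type O.type
```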
