scala-collections

Is the Scala 2.8 collections library a case of “the longest suicide note in history”? [closed]

Submitted by 徘徊边缘 on 2019-12-16 22:19:31
Question: As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. Closed 6 years ago. I have just started to look at the Scala collections library re-implementation which is coming in the imminent 2.8 release. Those…

Scala: How to check if all items are unique in a Seq?

Submitted by 不想你离开。 on 2019-12-13 17:04:58
Question: Is there a more idiomatic and maybe faster way to check whether a Seq contains duplicates than this: mySeq.size == mySeq.toSet.size

Answer 1: This will be faster, because it can terminate early:

    def allUnique[A](to: TraversableOnce[A]) = {
      val set = scala.collection.mutable.Set[A]()
      to.forall { x =>
        if (set(x)) false
        else {
          set += x
          true
        }
      }
    }

Source: https://stackoverflow.com/questions/23752677/scala-how-to-check-if-all-items-are-unique-in-a-seq
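TraversableOnce in the answer above was deprecated in Scala 2.13. A minimal sketch of the same early-exit idea on current Scala, taking any Iterable:

```scala
// Early-terminating uniqueness check. mutable.Set#add returns false when
// the element was already present, so forall stops at the first duplicate
// instead of materializing a full Set up front.
def allUnique[A](xs: Iterable[A]): Boolean = {
  val seen = scala.collection.mutable.Set.empty[A]
  xs.iterator.forall(seen.add)
}
```

allUnique(Seq(1, 2, 3)) is true; allUnique(Seq(1, 2, 2)) stops at the second 2 and returns false.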

Is there a scala replacement for Guava MultiSet and Table concepts?

Submitted by 流过昼夜 on 2019-12-13 13:37:10
Question: How can one use Guava's Table and Multiset in Scala? Are there already equivalent concepts in Scala, instead of importing the Guava library for this?

Answer 1: You could use Map[(R, C), V] instead of Table<R, C, V> and Map[T, Int] instead of Multiset<T>. You could also add helper methods to Map[T, Int] like this:

    implicit class Multiset[T](val m: Map[T, Int]) extends AnyVal {
      def setAdd(e: T, i: Int = 1) = {
        val cnt = m.getOrElse(e, 0) + i
        if (cnt <= 0) m - e else m.updated(e, cnt)
      }
      def setRemove(e: T, i:…
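The Map[(R, C), V] suggestion can be fleshed out a little. The helper names below (cellPut, row) are illustrative, not a real library API:

```scala
// Guava's Table<R, C, V> sketched as an immutable Map keyed by (row, column).
def cellPut[R, C, V](t: Map[(R, C), V], r: R, c: C, v: V): Map[(R, C), V] =
  t + ((r, c) -> v)

// All cells of one row, keyed by column (the backticked `r` in the pattern
// matches against the argument's value rather than binding a new name).
def row[R, C, V](t: Map[(R, C), V], r: R): Map[C, V] =
  t.collect { case ((`r`, c), v) => c -> v }

val t = cellPut(cellPut(Map.empty[(String, String), Int], "a", "x", 1), "a", "y", 2)
```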

How to call a method based on a JSON object in Scala Spark?

Submitted by 倖福魔咒の on 2019-12-13 09:06:19
Question: I have two functions like below:

    def method1(ip: String, r: Double, op: String) = {
      val data = spark.read.option("header", true).csv(ip).toDF()
      val r3 = data.select("c", "S").dropDuplicates("C", "S").withColumn("R", lit(r))
      r3.coalesce(1).write.format("com.databricks.spark.csv").option("header", "true").save(op)
    }

    def method2(ip: String, op: String) = {
      val data = spark.read.option("header", true).csv(ip).toDF()
      val r3 = data.select("c", "S").dropDuplicates("C", "StockCode")
      r3.coalesce(1).write.format("com…
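One way to pick between such functions from parsed JSON is a name-to-closure map. The sketch below uses stub handlers in place of the Spark bodies, and the argument keys are illustrative:

```scala
// Stub handlers standing in for the Spark-backed method1/method2.
def method1(ip: String, r: Double, op: String): String = s"m1($ip,$r,$op)"
def method2(ip: String, op: String): String = s"m2($ip,$op)"

// Each entry adapts its handler to a uniform Map[String, String] => String
// shape, so the caller can dispatch purely on a "method" field from the JSON.
val dispatch: Map[String, Map[String, String] => String] = Map(
  "method1" -> (args => method1(args("ip"), args("r").toDouble, args("op"))),
  "method2" -> (args => method2(args("ip"), args("op")))
)

val result = dispatch("method2")(Map("ip" -> "in.csv", "op" -> "out.csv"))
```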

How to create dataframe from list[Map] based on condition

Submitted by 孤人 on 2019-12-13 07:54:41
Question: I have a dataframe called DF1 like below.

    DF1:
    srcColumnZ | srcColumnY | srcColumnR
    -----------+------------+-----------
    John       | Non Hf     | New york
    Steav      | Non Hf     | Mumbai
    Ram        | HF         | Boston

And I also have a list of maps with source-to-target column mappings like below:

    List(Map(targetColumn -> columnNameX,
             sourceColumn -> List(srcColumnX, srcColumnY, srcColumnZ, srcColumnP, srcColumnQ, srcColumnR)),
         Map(targetColumn -> columnNameY, sourceColumn -> List(srcColumnY)),
         Map(targetColumn ->…
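The core of such a mapping is resolving, per target column, the first candidate source column that actually exists in the frame. A sketch on plain collections, with the DataFrame reduced to its column names (the names come from the question; the resolution rule is an assumption):

```scala
// Columns actually present in DF1.
val dfColumns = Seq("srcColumnZ", "srcColumnY", "srcColumnR")

val mapping: List[Map[String, Any]] = List(
  Map("targetColumn" -> "columnNameX",
      "sourceColumn" -> List("srcColumnX", "srcColumnY", "srcColumnZ")),
  Map("targetColumn" -> "columnNameY",
      "sourceColumn" -> List("srcColumnY"))
)

// For each target, pick the first candidate the frame really has.
val resolved: List[(String, Option[String])] = mapping.map { m =>
  val target     = m("targetColumn").asInstanceOf[String]
  val candidates = m("sourceColumn").asInstanceOf[List[String]]
  target -> candidates.find(dfColumns.contains)
}
```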

Extract a column value and assign it to another column as an array in Spark dataframe

Submitted by 孤人 on 2019-12-13 07:20:13
Question: I have a Spark DataFrame with the columns below.

    C1 | C2 | C3 | C4
    1  | 2  | 3  | S1
    2  | 3  | 3  | S2
    4  | 5  | 3  | S2

I want to generate another column C5 by taking the distinct values from column C4, like:

    C5
    [S1, S2]
    [S1, S2]
    [S1, S2]

Can somebody help me achieve this in a Spark DataFrame using Scala?

Answer 1: You might want to collect the distinct items from column C4 and put them in a List first, then use withColumn to create a new column C5 with a udf that always returns that constant list:
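The answer's idea, sketched on plain tuples rather than a live SparkSession: collect the distinct C4 values once, then attach that constant list to every row (in Spark the same shape is df.select("C4").distinct plus withColumn with a constant-returning udf):

```scala
val rows = Seq((1, 2, 3, "S1"), (2, 3, 3, "S2"), (4, 5, 3, "S2"))

// Distinct values of the fourth column, collected once.
val distinctC4 = rows.map(_._4).distinct.toList

// Every row gets the same constant list as its new fifth column.
val withC5 = rows.map { case (c1, c2, c3, c4) => (c1, c2, c3, c4, distinctC4) }
```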

Scala collections: util.Map[String, AnyRef] - Map[String, String]

Submitted by 爱⌒轻易说出口 on 2019-12-13 02:58:57
Question: I am getting started with Scala and I am replacing the deprecated JavaConversions library with JavaConverters. I have the following code:

    import scala.collection.JavaConversions._
    new AMQP.BasicProperties.Builder()
      .contentType(message.contentType.map(_.toString).orNull)
      .contentEncoding(message.contentEncoding.orNull)
      .headers(message.headers) // <-- I see the error on this line (the type of message.headers is Map[String, String])
      .deliveryMode(toDeliveryMode(message.mode))…
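With JavaConverters, the conversion that JavaConversions did implicitly becomes an explicit .asJava call, and the String-to-AnyRef widening has to happen on the Scala side first. A minimal sketch (the headers value stands in for message.headers):

```scala
import scala.collection.JavaConverters._

val headers: Map[String, String] = Map("content-id" -> "42")

// Widen the values to AnyRef before converting, to satisfy a builder
// that expects java.util.Map[String, AnyRef] as AMQP's does.
val javaHeaders: java.util.Map[String, AnyRef] =
  headers.map { case (k, v) => k -> (v: AnyRef) }.asJava
```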

How to extend Stream by implementing tailDefined

Submitted by 旧巷老猫 on 2019-12-13 02:04:02
Question: I'd like to extend scala.Stream. When I try, it tells me I can't, because I don't have the required method tailDefined:

    class S[T](s: Stream[T]) extends Stream[T] { }

When I try this, it tells me tailDefined is protected:

    class S[T](s: Stream[T]) extends Stream[T] {
      def tailDefined = s.tailDefined
    }

How do I get around this limitation and implement an extension of Stream?

Answer 1: If you want to "add new methods" to Stream, use implicit classes:

    implicit class S[T](s: Stream[T]) {
      def method1 =…
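The implicit-class workaround in full; the extension method name (everyOther) is illustrative:

```scala
// Stream can't usefully be subclassed (tailDefined is protected), but an
// implicit class bolts new methods onto existing Stream values.
implicit class RichStream[T](s: Stream[T]) {
  // Keep the elements at even positions (0, 2, 4, ...).
  def everyOther: Stream[T] =
    s.zipWithIndex.collect { case (x, i) if i % 2 == 0 => x }
}

val result = Stream(1, 2, 3, 4, 5).everyOther.toList
```

On Scala 2.13+ the same pattern applies to LazyList, Stream's replacement.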

Sort two lists by their first element and zip them in Scala

Submitted by 半世苍凉 on 2019-12-12 23:39:49
Question:

    val descrList = cursorReal.interfaceInfo.interfaces.map {
      case values => (values.ifIndex, values.ifName, values.ifType)
    }

    val ipAddressList = cursorReal.interfaceIpAndIndex
      .filter(x => !x.ifIpAddress.equalsIgnoreCase("0"))
      .map { case values => (values.ifIndex, values.ifIpAddress) }

For instance:

    val descrList = List((12,"VoIP-Null0",1), (8,"FastEthernet6",6), (19,"Vlan11",53), (4,"FastEthernet2",6), (15,"Vlan1",53), (11,"GigabitEthernet0",6), (9,"FastEthernet7",6), (22,"Vlan20",53), (13,…
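A sketch of one way to do it with the sample shape from the question: sort by the index, then join the IP list in via a Map lookup. The description tuples are trimmed from the question's data; the IP values are made up for illustration:

```scala
val descrList = List((12, "VoIP-Null0", 1), (8, "FastEthernet6", 6), (4, "FastEthernet2", 6))
val ipAddressList = List((8, "10.0.0.1"), (12, "10.0.0.2"))

// Index the IPs once, then walk the descriptions in index order,
// keeping only interfaces that actually have an IP address.
val ipByIndex = ipAddressList.toMap
val joined = descrList.sortBy(_._1).flatMap { case (idx, name, tpe) =>
  ipByIndex.get(idx).map(ip => (idx, name, tpe, ip))
}
```

A plain zip of the two sorted lists would mis-pair entries as soon as one side is missing an index, which is why the lookup-based join is safer here.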

Scala F-Bounded Type Polymorphism

Submitted by 旧巷老猫 on 2019-12-12 19:20:13
Question:

    trait Account[T <: Account[T]]
    case class BrokerAccount(total: BigDecimal) extends Account[BrokerAccount]
    case class SavingsAccount(total: BigDecimal) extends Account[SavingsAccount]

The function declaration and invocation below work fine:

    def foo1(xs: Array[T forSome { type T <: Account[T] }]): Array[T forSome { type T <: Account[T] }] = xs
    foo1(Array(BrokerAccount(100), SavingsAccount(50)))

But the invocation below gives a compilation error:

    def foo2(xs: List[T forSome { type T <: Account[T] }]): List…
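A common way to sidestep the existential once lists are involved is to bound a method type parameter instead of using forSome; homogeneous lists then type-check directly. The declarations are repeated so the sketch is self-contained, and foo3 is an illustrative name:

```scala
trait Account[T <: Account[T]]
case class BrokerAccount(total: BigDecimal) extends Account[BrokerAccount]
case class SavingsAccount(total: BigDecimal) extends Account[SavingsAccount]

// Bounding T on the method keeps it f-bounded without any forSome.
def foo3[T <: Account[T]](xs: List[T]): List[T] = xs

// T is inferred as BrokerAccount, so the result keeps the concrete type.
val brokers = foo3(List(BrokerAccount(100), BrokerAccount(200)))
```

A mixed List(BrokerAccount(...), SavingsAccount(...)) still won't infer a single T, which is the crux of the question's compilation error.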