Splitting an Apache Spark RDD according to a particular size

Submitted by 懵懂的女人 on 2019-11-28 08:52:42

Question


I am trying to read strings from a text file, but I want to limit each line according to a particular size. For example:

Here is a string representing the file.

aaaaa\nbbb\nccccc

When reading this file with sc.textFile, the RDD looks like this:

scala> val rdd = sc.textFile("textFile")
scala> rdd.collect
res1: Array[String] = Array(aaaaa, bbb, ccccc)

But I want to limit the size of the chunks in this RDD. For example, if the limit is 3, I should get something like this:

Array[String] = Array(aaa, aab, bbc, ccc, c)

What is the most performant way to do that?


Answer 1:


Not a particularly efficient solution (not terrible either), but you can do something like this:

import org.apache.spark.RangePartitioner

val pairs = rdd
  .flatMap(x => x)   // flatten the lines into single characters
  .zipWithIndex      // pair every character with its global index
  .keyBy(_._2 / 3)   // key by index / n (here n = 3)

// We'll use a range partitioner to minimize the shuffle
val partitioner = new RangePartitioner(pairs.partitions.size, pairs)

val result = pairs
  .groupByKey(partitioner)  // group characters that belong to the same chunk
  // sort by the original index, drop it, and concatenate
  .mapValues(_.toSeq.sortBy(_._2).map(_._1).mkString)
  .sortByKey()
  .values
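
A quick check in spark-shell (the expected chunks are the ones given in the question):

scala> result.collect
res2: Array[String] = Array(aaa, aab, bbc, ccc, c)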

It is possible to avoid the shuffle by explicitly passing the data required to fill each partition, but it takes some effort to code. See my answer to Partition RDD into tuples of length n.
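
A rough sketch of that idea (my own reconstruction, not the linked answer verbatim) could look like the following. It assumes every partition holds at least n characters; smaller partitions would need to borrow from more than one neighbour. The name chunkWithoutShuffle is made up for illustration.

import org.apache.spark.rdd.RDD

def chunkWithoutShuffle(rdd: RDD[String], n: Int): RDD[String] = {
  val chars = rdd.flatMap(x => x).cache()  // RDD[Char], traversed three times
  val sc = chars.sparkContext

  // Per-partition character counts -> global start offset of every partition
  val sizes = chars
    .mapPartitionsWithIndex((i, it) => Iterator((i, it.size.toLong)))
    .collect().sortBy(_._1).map(_._2)
  val offsets = sc.broadcast(sizes.scanLeft(0L)(_ + _))

  // Characters before a partition's first chunk boundary belong to the
  // previous partition's last chunk
  def headLen(i: Int): Int = {
    val start = offsets.value(i)
    (((start + n - 1) / n * n) - start).toInt  // always in [0, n)
  }

  val heads = sc.broadcast(
    chars
      .mapPartitionsWithIndex((i, it) =>
        Iterator((i, it.take(headLen(i)).mkString)))
      .collect().toMap)

  // Every partition drops its head, borrows the next partition's head,
  // and builds its chunks locally -- no shuffle involved
  chars.mapPartitionsWithIndex { (i, it) =>
    val borrowed = heads.value.getOrElse(i + 1, "")
    (it.drop(headLen(i)) ++ borrowed.iterator).grouped(n).map(_.mkString)
  }
}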

If you can accept some misaligned records at partition boundaries, then a simple mapPartitions with grouped should do the trick at a much lower cost:

rdd.mapPartitions(_.flatMap(x => x).grouped(3).map(_.mkString("")))
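
For the sample input this reproduces the expected chunks as long as the whole file sits in one partition; with more partitions, the chunks at each boundary come out short:

rdd.mapPartitions(_.flatMap(x => x).grouped(3).map(_.mkString)).collect()
// single partition:              Array(aaa, aab, bbc, ccc, c)
// split as "aaaaa" | "bbbccccc": Array(aaa, aa, bbb, ccc, cc)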

It is also possible to use a sliding RDD (the sliding helper lives in org.apache.spark.mllib.rdd.RDDFunctions):

import org.apache.spark.mllib.rdd.RDDFunctions._

rdd.flatMap(x => x).sliding(3, 3).map(_.mkString)



Answer 2:


You will need to read all the data anyhow, so there is not much you can do apart from mapping over each line and trimming it:

rdd.map(line => line.take(3)).collect()
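
For the sample file this keeps just the first three characters of every line (note that it trims each line rather than re-chunking across lines):

rdd.map(line => line.take(3)).collect()
// Array(aaa, bbb, ccc)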


Source: https://stackoverflow.com/questions/35761980/apache-sparks-rdd-splitting-according-to-the-particular-size
