How to remove every nth element from a Scala list?

痴心易碎 submitted on 2019-12-06 15:47:59

Simply:

list.zipWithIndex
    .filter { case (_, i) => (i + 1) % n != 0 }
    .map { case (e, _) => e }
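
For instance, with n = 3 (an illustrative value, not from the original answer) on a small list:

```scala
val n = 3
val xs = (1 to 9).toList

// keep only elements whose 1-based position is not a multiple of n
val result = xs.zipWithIndex
  .filter { case (_, i) => (i + 1) % n != 0 }
  .map { case (e, _) => e }

println(result) // List(1, 2, 4, 5, 7, 8)
```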

Simplest so far, I think

def removeNth[A](myList: List[A], n: Int): List[A] = 
  myList.zipWithIndex collect { case (x,i) if (i + 1) % n != 0 => x }

collect is an oft-forgotten gem: it takes a partial function, maps each element that is in the function's domain, and ignores the rest.
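
As a standalone illustration of collect (a minimal sketch, not tied to this question): the guard clause defines the partial function's domain, and elements outside it are silently skipped.

```scala
// the partial function is only defined for even numbers;
// collect maps those and skips the odd ones entirely
val evensDoubled = List(1, 2, 3, 4, 5).collect {
  case x if x % 2 == 0 => x * 2
}

println(evensDoubled) // List(4, 8)
```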

Here is a recursive implementation without indexing.

  def drop[A](n: Int, lst: List[A]): List[A] = {
    def dropN(i: Int, lst: List[A]): List[A] = (i, lst) match {
      case (1, _ :: xs) => dropN(n, xs)
      case (_, x :: xs) => x :: dropN(i - 1, xs)
      case (_, Nil) => Nil
    }
    dropN(n, lst)
  }

One more alternative, close to @elm's answer but taking into account that drop(1) is much faster for lists than taking nearly the entire list:

def remove[A](xs: List[A], n: Int) = {
  val (firstPart, rest) = xs.splitAt(n - 1)
  firstPart ++ rest.grouped(n).flatMap(_.drop(1))
}
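
A quick self-contained check of this version (illustrative values): splitAt keeps the first n - 1 elements untouched, and every subsequent group of n loses its head.

```scala
def remove[A](xs: List[A], n: Int): List[A] = {
  val (firstPart, rest) = xs.splitAt(n - 1)
  firstPart ++ rest.grouped(n).flatMap(_.drop(1))
}

// elements at positions 3, 6 and 9 are removed
println(remove((1 to 10).toList, 3)) // List(1, 2, 4, 5, 7, 8, 10)
```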

Simplest solution (note that this removes only the single nth element, not every nth):

scala> def dropNth[T](list:List[T], n:Int) :List[T] = {
     | list.take(n-1):::list.drop(n)
     | }
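
A quick self-contained check (illustrative input) shows it drops only the one element at position n:

```scala
def dropNth[T](list: List[T], n: Int): List[T] =
  list.take(n - 1) ::: list.drop(n)

// only the 3rd element is removed; the 6th survives
println(dropNth(List(1, 2, 3, 4, 5, 6), 3)) // List(1, 2, 4, 5, 6)
```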

An approach without indexing: chop the list into chunks of nth elements each,

xs.grouped(nth).flatMap(_.take(nth-1)).toList

From each chunk delivered by grouped we take up to nth-1 items.
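
The intermediate chunks can be inspected directly (a minimal sketch with nth = 3); note that the last chunk may be shorter than nth, which take(nth - 1) handles gracefully:

```scala
val nth = 3
val xs = (1 to 8).toList

println(xs.grouped(nth).toList)
// List(List(1, 2, 3), List(4, 5, 6), List(7, 8))

val result = xs.grouped(nth).flatMap(_.take(nth - 1)).toList
println(result) // List(1, 2, 4, 5, 7, 8)
```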

This other approach is not efficient (note the comment by @Alexey Romanov): it uses a for comprehension, which desugars into withFilter (a lazy filter) followed by map, and the indexed access xs(i) is O(i) on a List, making the whole loop quadratic,

for (i <- 0 until xs.size if i % nth != nth-1) yield xs(i)
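
Roughly, the desugared form looks like this (a sketch with illustrative values; withFilter filters lazily before map runs):

```scala
val xs = (1 to 9).toList
val nth = 3

// approximately what the for comprehension desugars to
val result = (0 until xs.size)
  .withFilter(i => i % nth != nth - 1)
  .map(i => xs(i)) // xs(i) is O(i) on a List, hence the inefficiency

println(result.toList) // List(1, 2, 4, 5, 7, 8)
```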

Here is a tail-recursive implementation for List using an accumulator:

  import scala.annotation.tailrec
  def dropNth[A](lst: List[A], n: Int): List[A] = {
    @tailrec
    def dropRec(i: Int, lst: List[A], acc: List[A]): List[A] = (i, lst) match {
      case (_, Nil) => acc
      case (1, _ :: xs) => dropRec(n, xs, acc)
      case (i, x :: xs) => dropRec(i - 1, xs, x :: acc)
    }
    dropRec(n, lst, Nil).reverse
  }

Update: As noted in the comments, I have tried the other solutions here on a large (1 to 5000000).toList input. Those with zipWithIndex and filter/collect fail with an OutOfMemoryError, and the (non-tail-)recursive one fails with a StackOverflowError. Mine, using List cons (::) and @tailrec, works well.

That is because zipping with index builds a new collection of tuples, which leads to the OOM, while the plain recursion needs 5 million stack frames, which is too much for the stack.

The tail-recursive version creates no unnecessary objects and effectively makes two copies of the input (that is, 2 × 5 million :: instances), both in O(n). The first builds the filtered elements in reverse order, because the output is prepended with x :: acc (O(1), whereas appending to a List is O(n)). The second is simply the reverse of that result.
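
The prepend-then-reverse pattern described here can be seen in isolation (a minimal sketch):

```scala
// prepending with :: is O(1); appending to a List would be O(n),
// so we build the result reversed and reverse it once at the end
val built = (1 to 5).foldLeft(List.empty[Int])((acc, x) => x :: acc)

println(built)         // List(5, 4, 3, 2, 1)
println(built.reverse) // List(1, 2, 3, 4, 5)
```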

Yet another approach: make a function for List that does exactly what you need. This does the same as Martin's dropNth function, but doesn't need the O(n) reverse:

    import scala.collection.mutable.ListBuffer

    implicit class improvedList[A](xs: List[A]) {
      def filterAllWhereIndex(n: Int): List[A] = {
        var i = 1
        var these = xs
        val b = new ListBuffer[A]
        while (these.nonEmpty) {
          if (i !=  n) {
            b += these.head
            i += 1
          } else i = 1
          these = these.tail
        }
        b.result
      }
    }

    (1 to 5000000).toList filterAllWhereIndex 3

If you want efficiency, this does the trick. Plus, it can be used as an infix operator, as shown above. This is a good pattern to know in order to avoid zipWithIndex, which is a bit heavy-handed in both time and space.
