How can I implement zipWithIndex like Spark in Apache Beam?
Question:

```java
PCollection<String> p1 = ...;                                     // {"a", "b", "c"}
PCollection<KV<Integer, String>> p2 = p1.apply("some operation"); // {(1, "a"), (2, "b"), (3, "c")}
```

I need this to scale to large files, the way Spark's `sc.textFile("./filename").zipWithIndex` does. My goal is to preserve the order of rows within a large file by assigning row numbers in a scalable way. How can I get this result with Apache Beam?

Some related posts:

- zipWithIndex on Apache Flink
- Ranking pcollection elements

Answer 1:

There
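A Beam `PCollection` is unordered, so there is no built-in `zipWithIndex`; the answer above is truncated, so the following is only a sketch of one common two-pass workaround, not the answer's actual method. The assumption: elements can be grouped into ordered shards (e.g. file splits in read order), a first pass counts elements per shard to build prefix-sum offsets, and a second pass numbers each element as `shard offset + position within shard`. This plain-JDK sketch shows the arithmetic; in a real Beam pipeline the counting pass would be something like `Count.perKey()` on shard ids fed back as a side input, which is a design choice, not a quoted API usage from the source.

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

public class ZipWithIndexSketch {

    // Hypothetical two-pass global indexing over ordered shards.
    // Pass 1: a running prefix sum of shard sizes gives each shard's offset.
    // Pass 2: each element's global index = shard offset + local position.
    public static List<Map.Entry<Long, String>> zipWithIndex(List<List<String>> shards) {
        List<Map.Entry<Long, String>> out = new ArrayList<>();
        long offset = 0; // prefix sum of the sizes of all earlier shards
        for (List<String> shard : shards) {
            long i = 0;  // local position within this shard
            for (String s : shard) {
                out.add(new AbstractMap.SimpleEntry<>(offset + i, s));
                i++;
            }
            offset += shard.size();
        }
        return out;
    }

    public static void main(String[] args) {
        // Two shards read in order reproduce a single global numbering.
        List<List<String>> shards = Arrays.asList(
                Arrays.asList("a", "b"),
                Arrays.asList("c"));
        System.out.println(zipWithIndex(shards)); // [0=a, 1=b, 2=c]
    }
}
```

The key property this preserves is that indices depend only on shard order and shard sizes, so each shard can be numbered in parallel once the (small) per-shard counts have been gathered, which is what makes the approach scale to large files.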