java-stream

Stream versus Iterator in entrySet of a Map

大城市里の小女人 submitted on 2019-12-05 12:52:19

Question: To my understanding, the following code should print true, since both the Stream and the Iterator point at the first (and only) entry. However, when I run the code it prints false:

```java
final HashMap<String, String> map = new HashMap<>();
map.put("A", "B");

final Set<Map.Entry<String, String>> set = Collections.unmodifiableMap(map).entrySet();
Map.Entry<String, String> entry1 = set.iterator().next();
Map.Entry<String, String> entry2 = set.stream().findFirst().get();
System.out.println // snippet truncated in the source
```
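Whatever exact comparison the truncated println performed, Map.Entry equality is defined by key and value, so for a plain HashMap (no unmodifiable wrapper) both access paths yield equal entries. A minimal check; the class and method names are mine:

```java
import java.util.HashMap;
import java.util.Map;

public class EntryEqualityCheck {
    // Returns true when the entry obtained via iterator() and the one
    // obtained via stream().findFirst() are equal under Map.Entry's
    // key/value equality contract.
    static boolean entriesEqual(Map<String, String> map) {
        Map.Entry<String, String> fromIterator = map.entrySet().iterator().next();
        Map.Entry<String, String> fromStream = map.entrySet().stream().findFirst().get();
        return fromIterator.equals(fromStream);
    }

    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("A", "B");
        System.out.println(entriesEqual(map)); // prints "true"
    }
}
```

The surprise in the question comes from the unmodifiable view, whose iterator() wraps entries; a reference comparison (`==`) of a wrapped and an unwrapped entry would be false even for the same mapping.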

Stream.findFirst different than Optional.of?

旧城冷巷雨未停 submitted on 2019-12-05 12:28:57

Question: Let's say I have two classes and two methods:

```java
class Scratch {
    private class A {}
    private class B extends A {}

    public Optional<A> getItems(List<String> items) {
        return items.stream()
                .map(s -> new B())
                .findFirst();
    }

    public Optional<A> getItems2(List<String> items) {
        return Optional.of(
                items.stream()
                        .map(s -> new B())
                        .findFirst()
                        .get());
    }
}
```

Why does getItems2 compile while getItems gives the compiler error incompatible types: java.util.Optional&lt;Scratch.B&gt; cannot be converted to java.util.Optional&lt;Scratch.A&gt;?
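The root cause is that Java generics are invariant: an Optional&lt;B&gt; is not an Optional&lt;A&gt; even though B extends A. One fix is to widen the mapper's result type so findFirst() already infers Optional&lt;A&gt;. A sketch with stand-in classes (names are mine):

```java
import java.util.List;
import java.util.Optional;

public class OptionalVariance {
    static class A {}
    static class B extends A {}

    // An explicit type witness on map() makes the stream a Stream<A>,
    // so findFirst() produces Optional<A>; a cast `(A) new B()` in the
    // lambda body would achieve the same thing.
    static Optional<A> getItems(List<String> items) {
        return items.stream()
                .<A>map(s -> new B())
                .findFirst();
    }

    public static void main(String[] args) {
        System.out.println(getItems(List.of("x")).isPresent()); // prints "true"
    }
}
```

getItems2 compiles without any of this because Optional.of is a fresh generic method call whose type parameter is inferred from the target type Optional&lt;A&gt;, not from the stream.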

Excluding extrema from HashSet with a Stream

為{幸葍}努か submitted on 2019-12-05 12:19:04

I've been experimenting with Java 8 streams; is this the best way to remove the min and max scores?

```java
private final Set<MatchScore> scores = new HashSet<>(10);
// ...

public double OPR() {
    return scores.stream()
            .mapToDouble(MatchScore::getScore)
            .filter((num) -> { // exclude min and max score
                return num != scores.stream()
                            .mapToDouble(MatchScore::getScore)
                            .max().getAsDouble()
                    && num != scores.stream()
                            .mapToDouble(MatchScore::getScore)
                            .min().getAsDouble();
            })
            .average().getAsDouble();
}
```

A simpler approach would be:

```java
return scores.stream()
        .mapToDouble(MatchScore::getScore)
        .sorted()
        .skip(1)
        // snippet truncated in the source
```
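The truncated "simpler approach" presumably continues with a limit to drop the top score as well. A runnable sketch over plain doubles (class and method names are mine):

```java
import java.util.stream.DoubleStream;

public class OprExample {
    // Average after dropping the extremes: sort, skip the lowest value,
    // and limit to length - 2 so the highest value falls off the end.
    static double opr(double[] scores) {
        return DoubleStream.of(scores)
                .sorted()
                .skip(1)
                .limit(scores.length - 2)
                .average()
                .getAsDouble();
    }

    public static void main(String[] args) {
        System.out.println(opr(new double[]{1, 2, 3, 4, 5})); // prints 3.0
    }
}
```

Note a behavioral difference: this drops exactly one occurrence of the minimum and one of the maximum, whereas the filter-based version removes every element equal to an extreme (and also recomputes min and max once per element, which is quadratic).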

How to find the unique values of a column of a 2D ArrayList in Java?

霸气de小男生 submitted on 2019-12-05 11:40:16

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;
import java.util.stream.Collectors;

public class Delete {
    public static void main(String[] args) {
        List<List<String>> list = new ArrayList<>();
        list.add(List.of("A", "B", "C", "R"));
        list.add(List.of("E", "F", "G", "F"));
        list.add(List.of("A", "B", "C", "D"));
        System.out.println(list.stream().distinct().count());

        Map<String, Long> countMapOfColumn = list.stream()
                .filter(innerList -> innerList.size() >= 3)
                .map(innerList -> innerList.get(3 - 1))
                // snippet truncated in the source
```
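The truncated pipeline appears to be building a per-value count for one column. A complete sketch of both results one usually wants here, the distinct values of a column and their occurrence counts (class and method names are mine):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;
import java.util.stream.Collectors;

public class ColumnValues {
    // Distinct values of column `col` (0-based) across all rows long enough to have it.
    static Set<String> uniqueColumn(List<List<String>> rows, int col) {
        return rows.stream()
                .filter(row -> row.size() > col)
                .map(row -> row.get(col))
                .collect(Collectors.toSet());
    }

    // Occurrence count per value: the groupingBy/counting idiom.
    static Map<String, Long> columnCounts(List<List<String>> rows, int col) {
        return rows.stream()
                .filter(row -> row.size() > col)
                .map(row -> row.get(col))
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
    }

    public static void main(String[] args) {
        List<List<String>> rows = List.of(
                List.of("A", "B", "C", "R"),
                List.of("E", "F", "G", "F"),
                List.of("A", "B", "C", "D"));
        System.out.println(uniqueColumn(rows, 2)); // distinct values of the third column
    }
}
```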

NullPointerException in native java code while performing parallelStream.forEach(..)

若如初见. submitted on 2019-12-05 11:20:46

I have the following exception (the stack trace):

```
java.lang.NullPointerException
    at sun.reflect.GeneratedConstructorAccessor171.newInstance(Unknown Source) ~[?:?]
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_40]
    at java.lang.reflect.Constructor.newInstance(Constructor.java:422) ~[?:1.8.0_40]
    at java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:598) ~[?:1.8.0_40]
    at java.util.concurrent.ForkJoinTask.reportException(ForkJoinTask.java:677) ~[?:1.8.0_40]
    at java.util.concurrent.ForkJoinTask.invoke  <- trace truncated in the source
```
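The getThrowableException/newInstance frames are a clue: when a lambda passed to parallelStream().forEach throws, the exception occurs on a ForkJoin worker thread, and ForkJoinTask reconstructs it reflectively so the calling thread can rethrow it. The real cause is therefore an NPE inside the forEach body, not in native code. A minimal reproduction (class and method names are mine):

```java
import java.util.Arrays;
import java.util.List;

public class ParallelNpeDemo {
    // Returns true if the parallel forEach body threw a NullPointerException.
    // The NPE happens on a worker thread; ForkJoinTask rebuilds it reflectively
    // for the caller, which is why such traces start in generated
    // constructor-accessor code.
    static boolean throwsNpe(List<String> items) {
        try {
            items.parallelStream().forEach(s -> s.length()); // NPE on a null element
            return false;
        } catch (NullPointerException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(throwsNpe(Arrays.asList("a", null, "c"))); // prints "true"
    }
}
```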

Java 8: calculate the average of a list of objects in a map

随声附和 submitted on 2019-12-05 10:30:37

Initial data:

```java
public class Stats {
    int passesNumber;
    int tacklesNumber;

    public Stats(int passesNumber, int tacklesNumber) {
        this.passesNumber = passesNumber;
        this.tacklesNumber = tacklesNumber;
    }

    public int getPassesNumber() { return passesNumber; }
    public void setPassesNumber(int passesNumber) { this.passesNumber = passesNumber; }
    public int getTacklesNumber() { return tacklesNumber; }
    public void setTacklesNumber(int tacklesNumber) { this.tacklesNumber = tacklesNumber; }
}
```

```java
Map<String, List<Stats>> statsByPosition = new HashMap<>();
statsByPosition.put("Defender", Arrays.asList(new Stats(10, // snippet truncated in the source
```
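One way to get a per-position average is a Collectors.averagingInt over each map entry's list. A self-contained sketch reusing the Stats shape from the question (the outer class name and sample figures are mine):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class AverageStats {
    static class Stats {
        final int passesNumber;
        final int tacklesNumber;
        Stats(int passesNumber, int tacklesNumber) {
            this.passesNumber = passesNumber;
            this.tacklesNumber = tacklesNumber;
        }
        int getPassesNumber() { return passesNumber; }
        int getTacklesNumber() { return tacklesNumber; }
    }

    // Average passes per position: one averagingInt collector per entry.
    static Map<String, Double> averagePasses(Map<String, List<Stats>> byPosition) {
        return byPosition.entrySet().stream()
                .collect(Collectors.toMap(
                        Map.Entry::getKey,
                        e -> e.getValue().stream()
                                .collect(Collectors.averagingInt(Stats::getPassesNumber))));
    }

    public static void main(String[] args) {
        Map<String, List<Stats>> stats = new HashMap<>();
        stats.put("Defender", Arrays.asList(new Stats(10, 20), new Stats(20, 30)));
        System.out.println(averagePasses(stats)); // prints {Defender=15.0}
    }
}
```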

Split Java stream into two lazy streams without terminal operation

自闭症网瘾萝莉.ら submitted on 2019-12-05 10:24:55

I understand that, in general, Java streams cannot be split. However, we have an involved and lengthy pipeline, at the end of which sit two different kinds of processing that share the first part of the pipeline. Due to the size of the data, storing the intermediate stream product is not a viable solution, and neither is running the pipeline twice. Basically, what we are looking for is an operation on a stream that yields two (or more) streams which are lazily filled and can be consumed in parallel. By that, I mean that if stream A is split into streams B and C, when streams B…
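There is no standard operation that yields two lazily consumable child streams, but if each branch can be expressed as a Collector, Collectors.teeing (Java 12+) runs the shared pipeline exactly once and pushes every element into both branches. A sketch, with illustrative names, that only approximates the question (the branches are collectors, not streams):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class StreamFanOut {
    // One pass over the shared pipeline; teeing feeds every element to both
    // downstream collectors and merges their results at the end, with no
    // buffering of the intermediate stream.
    static String summarize(Stream<Integer> shared) {
        return shared.collect(Collectors.teeing(
                Collectors.summingInt(i -> i),   // branch 1: sum
                Collectors.counting(),           // branch 2: count
                (sum, count) -> sum + "/" + count));
    }

    public static void main(String[] args) {
        System.out.println(summarize(List.of(1, 2, 3).stream())); // prints "6/3"
    }
}
```

If the branches genuinely need to be streams consumed independently and in parallel, the usual fallback is a custom Spliterator or blocking queues feeding two downstream pipelines.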

Sum attribute of object with Stream API

泪湿孤枕 submitted on 2019-12-05 10:14:08

I currently have the following situation: I have a Report object which can contain multiple Query objects. A Query has the properties Optional&lt;Filter&gt; comparisonFilter, Optional&lt;String&gt; filterChoice, and int queryOutput. Not every query has a comparison filter, so I check for that first. Then I make sure I get the queries for a particular filter (which is not the problem here, so I will not discuss it in detail). Every filter has some choices, and the number of choices varies. Here is an example of the input (these Query objects all have the same comparisonFilter):
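Summing an int attribute per choice is the groupingBy + summingInt idiom. A sketch with a hypothetical stand-in for the Query described above (all names here are illustrative, not the questioner's actual API):

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.stream.Collectors;

public class QuerySums {
    // Hypothetical stand-in for the Query object described in the question.
    static class Query {
        final Optional<String> filterChoice;
        final int queryOutput;
        Query(String choice, int output) {
            this.filterChoice = Optional.ofNullable(choice);
            this.queryOutput = output;
        }
    }

    // Sum queryOutput per filter choice, skipping queries without a choice.
    static Map<String, Integer> sumByChoice(List<Query> queries) {
        return queries.stream()
                .filter(q -> q.filterChoice.isPresent())
                .collect(Collectors.groupingBy(
                        q -> q.filterChoice.get(),
                        Collectors.summingInt(q -> q.queryOutput)));
    }

    public static void main(String[] args) {
        List<Query> queries = List.of(
                new Query("yes", 3), new Query("yes", 4),
                new Query("no", 2), new Query(null, 9));
        System.out.println(sumByChoice(queries)); // {no=2, yes=7}, iteration order may vary
    }
}
```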

Should I use Stream API for simple iteration?

回眸只為那壹抹淺笑 submitted on 2019-12-05 09:35:10

Are there any benefits to using the new Stream API for simple iterations? Without the Stream API:

```java
for (Map.Entry<String, String> entry : map.entrySet()) {
    doSomething(entry);
}
```

Using the Stream API:

```java
map.entrySet().stream().forEach((entry) -> {
    doSomething(entry);
});
```

Length and readability of the code are about the same. Are there any important differences (e.g. in performance)? The Stream API makes parallelism much easier to accomplish (although you'll only see the benefit with a large collection). If you had to implement parallelism on top of the first example, there would be a sizeable difference.
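The parallelism point can be made concrete: switching a stream pipeline from sequential to parallel is a single call, whereas parallelizing the for loop means writing thread-pool plumbing by hand. A sketch (class and method names are mine):

```java
import java.util.stream.IntStream;

public class ParallelSumDemo {
    // The same reduction, sequential and parallel; the one-call switch is
    // the main ergonomic win of the Stream API for this kind of work.
    static long sequentialSum(int n) {
        return IntStream.rangeClosed(1, n).asLongStream().sum();
    }

    static long parallelSum(int n) {
        return IntStream.rangeClosed(1, n).asLongStream().parallel().sum();
    }

    public static void main(String[] args) {
        System.out.println(sequentialSum(100)); // prints 5050
        System.out.println(parallelSum(100));   // prints 5050
    }
}
```

For small collections the plain loop is typically at least as fast, since a stream pipeline adds setup overhead.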

Performance of Java Stream.concat vs. Collection.addAll

我的梦境 submitted on 2019-12-05 09:29:26

For the purpose of combining two sets of data in a stream:

```java
Stream.concat(stream1, stream2).collect(Collectors.toSet());
```

Or:

```java
stream1.collect(Collectors.toSet())
        .addAll(stream2.collect(Collectors.toSet()));
```

Which is more efficient, and why? For the sake of readability and intention, Stream.concat(a, b).collect(toSet()) is far clearer than the second alternative. For the sake of the question, which is "what is the most efficient", here is a JMH test, though I'd like to say that I don't use JMH that much and there may be room to improve my benchmark (the benchmark code is truncated in the source).
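Both approaches produce the same set; a self-contained sketch of the two variants side by side (class and method names are mine):

```java
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ConcatVsAddAll {
    // Single pipeline: concatenate, then collect once.
    static Set<String> viaConcat(Set<String> a, Set<String> b) {
        return Stream.concat(a.stream(), b.stream()).collect(Collectors.toSet());
    }

    // Two pipelines: collect each, then merge with addAll. Note that
    // Collectors.toSet() makes no guarantee the returned Set is mutable,
    // so calling addAll on it is a latent correctness hazard.
    static Set<String> viaAddAll(Set<String> a, Set<String> b) {
        Set<String> result = a.stream().collect(Collectors.toSet());
        result.addAll(b.stream().collect(Collectors.toSet()));
        return result;
    }

    public static void main(String[] args) {
        Set<String> a = Set.of("x", "y");
        Set<String> b = Set.of("y", "z");
        System.out.println(viaConcat(a, b).equals(viaAddAll(a, b))); // prints "true"
    }
}
```

Beyond the mutability caveat, viaAddAll also builds an intermediate set for b's elements only to discard it, so concat-then-collect is the cleaner default.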