java-stream

How to convert a Java 8 Stream into a two dimensional array?

穿精又带淫゛_ submitted on 2019-12-07 07:13:14
Question: I'm trying to convert a map-based Stream into a two-dimensional array. I have figured out how to store it in a one-dimensional array. Here is a working code snippet: Float[] floatArray = map.entrySet() .stream() .map(key -> key.getKey().getPrice()) .toArray(size -> new Float[size]); When I execute the above code, I get my Float array populated as expected. Now I need to extend this to a two-dimensional array where I need to store the result in the first dimension of a 2D array along these lines:
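One way to extend the one-dimensional `toArray` call is simply to fill each row of a `Float[2][]` with a separate stream pass, one per attribute. The sketch below assumes a hypothetical `Item` key type with `getPrice()` and `getQuantity()` accessors standing in for the asker's unnamed key class:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;

public class TwoDArrayDemo {
    // Hypothetical key type; getPrice() mirrors the accessor in the question.
    static class Item {
        final float price, quantity;
        Item(float price, float quantity) { this.price = price; this.quantity = quantity; }
        float getPrice() { return price; }
        float getQuantity() { return quantity; }
    }

    // Fill each row of a Float[2][] with one stream pass per attribute.
    static Float[][] toTwoDim(Map<Item, String> map) {
        Float[][] result = new Float[2][];
        result[0] = map.entrySet().stream()
                .map(e -> e.getKey().getPrice())
                .toArray(Float[]::new);
        result[1] = map.entrySet().stream()
                .map(e -> e.getKey().getQuantity())
                .toArray(Float[]::new);
        return result;
    }

    public static void main(String[] args) {
        // LinkedHashMap keeps insertion order, so the rows are predictable.
        Map<Item, String> map = new LinkedHashMap<>();
        map.put(new Item(1.5f, 2f), "a");
        map.put(new Item(3.0f, 4f), "b");
        System.out.println(Arrays.deepToString(toTwoDim(map))); // [[1.5, 3.0], [2.0, 4.0]]
    }
}
```

Two passes over the same map are fine for small maps; if the source can be iterated only once, collect the entries to a list first and stream that list per row.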

Sum attribute of object with Stream API

眉间皱痕 submitted on 2019-12-07 06:19:44
Question: I currently have the following situation: I have a Report object which can contain multiple Query objects. The Query objects have the properties Optional<Filter> comparisonFilter , Optional<String> filterChoice and int queryOutput . Not every query has a comparison filter, so I first check for that. Then, I make sure I get the queries for a particular filter (which is not the problem here, so I will not discuss it in detail). Every filter has some choices, of which the number of choices is
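The usual shape for "sum an int attribute, grouped by another attribute" is `Collectors.groupingBy` with a `summingInt` downstream collector. The sketch below uses a minimal, hypothetical `Query` with only the two fields mentioned in the question (`filterChoice`, `queryOutput`); the `Optional` filter is handled with a `filter(...isPresent())` step first:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.stream.Collectors;

public class SumQueryOutputs {
    // Minimal stand-in for the asker's Query class.
    static class Query {
        final Optional<String> filterChoice;
        final int queryOutput;
        Query(Optional<String> filterChoice, int queryOutput) {
            this.filterChoice = filterChoice;
            this.queryOutput = queryOutput;
        }
    }

    // Sum queryOutput per filter choice, skipping queries without a choice.
    static Map<String, Integer> sumPerChoice(List<Query> queries) {
        return queries.stream()
                .filter(q -> q.filterChoice.isPresent())
                .collect(Collectors.groupingBy(
                        q -> q.filterChoice.get(),
                        Collectors.summingInt(q -> q.queryOutput)));
    }

    public static void main(String[] args) {
        List<Query> queries = Arrays.asList(
                new Query(Optional.of("yes"), 3),
                new Query(Optional.of("yes"), 4),
                new Query(Optional.<String>empty(), 99));
        System.out.println(sumPerChoice(queries)); // {yes=7}
    }
}
```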

Infinite Fibonacci Sequence with Memoized in Java 8

徘徊边缘 submitted on 2019-12-07 06:10:55
Question: Firstly, I'm a JavaScript programmer, fairly new to Java 8, and trying out the new functional features. Since I'm experienced in JS coding, I implemented my own JS lazy-functional library as a proof of concept. https://github.com/kenokabe/spacetime Using the library, I could write an infinite sequence of natural numbers and Fibonacci as below: JavaScript var spacetime = require('./spacetime'); var _ = spacetime.lazy(); var natural = _(function(n) //memoized automatically { return n; // Natural numbers is
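In Java 8 the closest built-in analogue to a memoized infinite sequence is `Stream.iterate`: each element is computed exactly once from the previous state, and `limit` makes the infinite stream consumable. This is a minimal sketch (not the JS library's approach, which memoizes by index) carrying the Fibonacci state forward as a `(current, next)` pair:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FibStream {
    // Infinite Fibonacci sequence: iterate over (current, next) pairs,
    // then project out the current element.
    static Stream<Long> fibonacci() {
        return Stream.iterate(new long[]{0, 1}, p -> new long[]{p[1], p[0] + p[1]})
                .map(p -> p[0]);
    }

    public static void main(String[] args) {
        List<Long> first = fibonacci().limit(10).collect(Collectors.toList());
        System.out.println(first); // [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
    }
}
```

Note that a Java `Stream` can be consumed only once, unlike a memoized lazy list that supports repeated random access; for that you would need to cache elements yourself (e.g. in an `ArrayList` guarded by a supplier).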

Split Java stream into two lazy streams without terminal operation

末鹿安然 submitted on 2019-12-07 06:09:17
Question: I understand that in general Java streams do not split. However, we have an involved and lengthy pipeline, at the end of which we have two different types of processing that share the first part of the pipeline. Due to the size of the data, storing the intermediate stream product is not a viable solution. Neither is running the pipeline twice. Basically, what we are looking for is an operation on a stream that yields two (or more) streams that are lazily filled and able to
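Since a stream supports only one terminal operation, one pragmatic workaround (not a true stream split) is to run a single terminal `forEach` whose body dispatches each element to both downstream processors as it flows by, so nothing is buffered. A sketch with two toy "processors" (an even-sum and an odd-count, both hypothetical placeholders for the real processing):

```java
import java.util.stream.IntStream;
import java.util.stream.Stream;

public class SplitPipeline {
    // One pass over the shared pipeline; each element is handed to both
    // consumers, so no intermediate result is materialized.
    static long[] process(Stream<Integer> pipeline) {
        long[] acc = {0, 0}; // {evenSum, oddCount}
        pipeline.forEach(n -> {
            if (n % 2 == 0) acc[0] += n; // "processor A"
            else acc[1]++;               // "processor B"
        });
        return acc;
    }

    public static void main(String[] args) {
        long[] r = process(IntStream.rangeClosed(1, 10).boxed());
        System.out.println(r[0] + " " + r[1]); // 30 5
    }
}
```

This gives up the fluent collector style for the tail ends of the pipeline; if both tails are expressible as `Collector`s, a teeing-style combined collector (built manually on Java 8, or `Collectors.teeing` from Java 12 on) achieves the same single pass more cleanly.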

Effective way to get hex string from a byte array using lambdas and streams

拥有回忆 submitted on 2019-12-07 06:05:26
This is a follow-up question to How can I make an IntStream from a byte array? I created a method converting a given byte array to a joined hex string. static String bytesToHex(final byte[] bytes) { return IntStream.range(0, bytes.length * 2) .map(i -> (bytes[i / 2] >> ((i & 0x01) == 0 ? 4 : 0)) & 0x0F) .mapToObj(Integer::toHexString) .collect(joining()); } My question is: without using any 3rd-party libraries, is the above code effective enough? Did I do anything wrong or unnecessary? static String bytesToHex(final byte[] bytes) { return IntStream.range(0, bytes.length) .mapToObj(i -> String.format("
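For reference, here is the nibble-per-element stream version as a compilable whole (with the `rang`/`range` typo fixed), next to a plain `StringBuilder` loop, which is the usual choice when conversion speed matters since it avoids per-element boxing and string joining:

```java
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class HexDemo {
    // Stream version: one element per nibble (high nibble on even indices).
    static String bytesToHex(final byte[] bytes) {
        return IntStream.range(0, bytes.length * 2)
                .map(i -> (bytes[i / 2] >> ((i & 0x01) == 0 ? 4 : 0)) & 0x0F)
                .mapToObj(Integer::toHexString)
                .collect(Collectors.joining());
    }

    // Loop version for comparison; typically faster on hot paths.
    static String bytesToHexLoop(final byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) {
        byte[] data = {0x0A, (byte) 0xFF, 0x10};
        System.out.println(bytesToHex(data));     // 0aff10
        System.out.println(bytesToHexLoop(data)); // 0aff10
    }
}
```

The `& 0x0F` mask in the stream version is what keeps sign extension from negative bytes (e.g. `0xFF`) out of the result.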

Java8 calculate average of list of objects in the map

孤街醉人 submitted on 2019-12-07 05:51:19
Question: Initial data: public class Stats { int passesNumber; int tacklesNumber; public Stats(int passesNumber, int tacklesNumber) { this.passesNumber = passesNumber; this.tacklesNumber = tacklesNumber; } public int getPassesNumber() { return passesNumber; } public void setPassesNumber(int passesNumber) { this.passesNumber = passesNumber; } public int getTacklesNumber() { return tacklesNumber; } public void setTacklesNumber(int tacklesNumber) { this.tacklesNumber = tacklesNumber; } } Map<String, List
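Assuming the truncated type is something like `Map<String, List<Stats>>`, averaging one attribute per key is a `toMap` over the entries with `Collectors.averagingInt` applied to each value list. A sketch using a trimmed-down `Stats`:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class AverageDemo {
    // Trimmed version of the question's Stats class.
    static class Stats {
        final int passesNumber, tacklesNumber;
        Stats(int passesNumber, int tacklesNumber) {
            this.passesNumber = passesNumber;
            this.tacklesNumber = tacklesNumber;
        }
        int getPassesNumber() { return passesNumber; }
    }

    // Average passesNumber per map key.
    static Map<String, Double> averagePasses(Map<String, List<Stats>> byPlayer) {
        return byPlayer.entrySet().stream()
                .collect(Collectors.toMap(
                        Map.Entry::getKey,
                        e -> e.getValue().stream()
                                .collect(Collectors.averagingInt(Stats::getPassesNumber))));
    }

    public static void main(String[] args) {
        Map<String, List<Stats>> m = new HashMap<>();
        m.put("alice", Arrays.asList(new Stats(2, 1), new Stats(4, 3)));
        System.out.println(averagePasses(m)); // {alice=3.0}
    }
}
```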

Is it possible to do a lazy groupby, returning a stream, in java 8?

自闭症网瘾萝莉.ら submitted on 2019-12-07 05:42:04
Question: I have some large-ish text files that I want to process by grouping their lines. I tried to use the new streaming features, like return FileUtils.readLines(...) .parallelStream() .map(...) .collect(groupingBy(pair -> pair[0])); The problem is that, AFAIK, this generates a Map. Is there any way to have high-level code like the one above that generates, for example, a Stream of Entries? UPDATE : What I'm looking for is something like Python's itertools.groupby. My files are already sorted (by
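Java 8 has no built-in `itertools.groupby`, but because the input is already sorted by key, a lazy run-grouper can be hand-rolled: wrap the stream's iterator in a second iterator that accumulates consecutive equal-key elements, and re-wrap that as a stream. A sketch (the helper name `groupRuns` is my own):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Spliterator;
import java.util.Spliterators;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

public class LazyGroupBy {
    // itertools.groupby-style grouping for a stream already sorted by key:
    // consumes the source lazily, emitting one List per run of equal keys.
    static <T, K> Stream<List<T>> groupRuns(Stream<T> sorted, Function<T, K> key) {
        Iterator<T> it = sorted.iterator();
        Iterator<List<T>> runs = new Iterator<List<T>>() {
            T pending = it.hasNext() ? it.next() : null;
            public boolean hasNext() { return pending != null; }
            public List<T> next() {
                List<T> run = new ArrayList<>();
                K k = key.apply(pending);
                run.add(pending);
                pending = null;
                while (it.hasNext()) {
                    T t = it.next();
                    if (key.apply(t).equals(k)) run.add(t);
                    else { pending = t; break; } // first element of the next run
                }
                return run;
            }
        };
        return StreamSupport.stream(
                Spliterators.spliteratorUnknownSize(runs, Spliterator.ORDERED), false);
    }

    public static void main(String[] args) {
        Stream<String> lines = Stream.of("a1", "a2", "b1", "c1", "c2");
        System.out.println(groupRuns(lines, s -> s.charAt(0))
                .collect(Collectors.toList())); // [[a1, a2], [b1], [c1, c2]]
    }
}
```

Only one run is held in memory at a time, which is the property that makes this work for files too large to collect into a `Map`; note it must run sequentially, since the grouping depends on encounter order.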

NullPointerException in native java code while performing parallelStream.forEach(..)

孤街浪徒 submitted on 2019-12-07 05:11:59
Question: I have the following exception (the stacktrace): java.lang.NullPointerException at sun.reflect.GeneratedConstructorAccessor171.newInstance(Unknown Source) ~[?:?] at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_40] at java.lang.reflect.Constructor.newInstance(Constructor.java:422) ~[?:1.8.0_40] at java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:598) ~[?:1.8.0_40] at java.util.concurrent.ForkJoinTask
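The reflective frames at the top of that trace are a known ForkJoin artifact rather than the real failure site: when a lambda throws on a worker thread, `ForkJoinTask.getThrowableException` reconstructs the exception reflectively to rethrow it on the calling thread, which is where the `GeneratedConstructorAccessor`/`newInstance` frames come from. A minimal sketch reproducing the mechanism with a deliberate null element (the data here is purely illustrative):

```java
import java.util.Arrays;
import java.util.List;

public class ParallelNpeDemo {
    // A null element makes the lambda throw on a ForkJoin worker thread;
    // the NPE is then reconstructed reflectively and rethrown to the caller.
    static String classify(List<String> items) {
        try {
            items.parallelStream().forEach(s -> s.length());
            return "ok";
        } catch (NullPointerException e) {
            return "NPE";
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(Arrays.asList("a", null, "c"))); // NPE
    }
}
```

To find the real cause, look further down the stack trace for your own classes, or examine `e.getCause()` and the suppressed exceptions rather than the top frames.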

Should I use Stream API for simple iteration?

夙愿已清 submitted on 2019-12-07 05:11:06
Question: Are there any benefits in using the new Stream API for simple iterations? Without Stream API: for (Map.Entry<String, String> entry : map.entrySet()) { doSomething(entry); } Using Stream API: map.entrySet().stream().forEach((entry) -> { doSomething(entry); }); Length and readability of code are about the same. Are there any important differences (e.g. in performance)? Answer 1: The Streams API makes parallelism much easier to accomplish (although you'll only see the benefit with a large sized
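Worth noting: for a plain sequential `forEach`, the `.stream()` step in the question adds nothing, since `Iterable.forEach` exists directly on the entry set, and a method reference makes it as terse as the stream variant. Both forms side by side:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class IterationDemo {
    static StringBuilder out = new StringBuilder();
    static void doSomething(Map.Entry<String, String> e) { out.append(e.getKey()); }

    public static void main(String[] args) {
        Map<String, String> map = new LinkedHashMap<>();
        map.put("a", "1");
        map.put("b", "2");

        // Plain for-each loop: no extra allocation, debugger-friendly stack.
        for (Map.Entry<String, String> entry : map.entrySet()) doSomething(entry);

        // Iterable.forEach with a method reference: no .stream() needed.
        map.entrySet().forEach(IterationDemo::doSomething);

        System.out.println(out); // abab
    }
}
```

The stream form starts paying off when you chain operations (`filter`, `map`, `collect`) or want a later switch to `parallelStream()`; for a bare loop the performance difference is negligible and the classic loop is easier to step through.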

Java split stream by predicate into stream of streams

天大地大妈咪最大 submitted on 2019-12-07 05:00:45
Question: I have hundreds of large (6 GB) gzipped log files that I'm reading using GZIPInputStream s and wish to parse. Suppose each one has the format: Start of log entry 1 ...some log details ...some log details ...some log details Start of log entry 2 ...some log details ...some log details ...some log details Start of log entry 3 ...some log details ...some log details ...some log details I'm streaming the gzipped file contents line by line through BufferedReader.lines() . The stream looks like: [
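For this shape of input, one approach is a lazy chunker over the line stream: wrap the stream's iterator so that a predicate matching the "Start of log entry" lines opens a new chunk, and only one entry's lines are buffered at a time, which matters at 6 GB per file. A sketch (the helper name `chunkBy` and the sample lines are illustrative):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Spliterator;
import java.util.Spliterators;
import java.util.function.Predicate;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

public class LogEntrySplitter {
    // Lazily chunk a line stream into one List<String> per log entry,
    // starting a new chunk whenever the predicate matches a line.
    static Stream<List<String>> chunkBy(Stream<String> lines, Predicate<String> isStart) {
        Iterator<String> it = lines.iterator();
        Iterator<List<String>> chunks = new Iterator<List<String>>() {
            String pending = it.hasNext() ? it.next() : null;
            public boolean hasNext() { return pending != null; }
            public List<String> next() {
                List<String> chunk = new ArrayList<>();
                chunk.add(pending);
                pending = null;
                while (it.hasNext()) {
                    String line = it.next();
                    if (isStart.test(line)) { pending = line; break; }
                    chunk.add(line);
                }
                return chunk;
            }
        };
        return StreamSupport.stream(
                Spliterators.spliteratorUnknownSize(chunks, Spliterator.ORDERED), false);
    }

    public static void main(String[] args) {
        Stream<String> lines = Stream.of(
                "Start of log entry 1", "...some log details", "...some log details",
                "Start of log entry 2", "...some log details");
        List<List<String>> entries = chunkBy(lines, l -> l.startsWith("Start of log entry"))
                .collect(Collectors.toList());
        System.out.println(entries.size()); // 2
    }
}
```

In practice you would feed `reader.lines()` from the `BufferedReader` over the `GZIPInputStream` into `chunkBy`, then `map` each chunk to a parsed entry object; the pipeline stays sequential because chunk boundaries depend on encounter order.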