jvm-hotspot

Happens-Before relation in Java Memory Model

给你一囗甜甜゛ submitted on 2019-12-19 02:52:28
Question: Regarding JLS Chapter 17 (Threads and Locks), which says "if one action happens-before another, then the first is visible to and ordered before the second", I wonder: (1) What does "ordered before" really mean? Even if action_a happens-before action_b, action_a can be executed after action_b in some implementation, right? (2) If action_a happens-before action_b, does that mean action_a MUST NOT see action_b? Or may action_a see or not see action_b? (3) If action_a does NOT happen
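A minimal sketch of how a happens-before edge is usually established in practice, via a volatile field (the field names here are illustrative, not from the question): the plain write to data is ordered before the volatile write to ready, which happens-before the volatile read that observes it, so the reader is guaranteed to see 42.

```java
public class Main {
    static int data = 0;
    static volatile boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            data = 42;      // plain write, ordered before the volatile write
            ready = true;   // volatile write ("release")
        });
        Thread reader = new Thread(() -> {
            while (!ready) { }                   // volatile read ("acquire")
            System.out.println("data=" + data);  // must see 42
        });
        reader.start();
        writer.start();
        writer.join();
        reader.join();
    }
}
```

Note that happens-before constrains what reads may *observe*, not the physical execution order: the JIT and CPU remain free to reorder as long as the observed values are consistent with the relation.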

Why is the Java G1 gc spending so much time scanning RS?

最后都变了- submitted on 2019-12-19 02:47:17
Question: I'm currently evaluating the G1 garbage collector and how it performs for our application. Looking at the GC log, I noticed that many collections have very long "Scan RS" phases: 7968.869: [GC pause (mixed), 10.27831700 secs] [Parallel Time: 10080.8 ms] (...) [Scan RS (ms): 4030.4 4034.1 4032.0 4032.0 Avg: 4032.1, Min: 4030.4, Max: 4034.1, Diff: 3.7] [Object Copy (ms): 6038.5 6033.3 6036.7 6037.1 Avg: 6036.4, Min: 6033.3, Max: 6038.5, Diff: 5.2] (...) [Eden: 19680M(19680M)->0B(20512M)
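One hedged starting point for diagnosing long "Scan RS" phases is to have G1 print remembered-set statistics; the flags below are the JDK 8 era diagnostic options, and the period value is an arbitrary example:

```shell
# JDK 8 HotSpot: print remembered-set sizes and coarsening stats
# every GC (helps spot oversized or coarsened RSets behind "Scan RS").
java -XX:+UseG1GC \
     -XX:+UnlockDiagnosticVMOptions \
     -XX:+G1SummarizeRSetStats \
     -XX:G1SummarizeRSetStatsPeriod=1 \
     -jar app.jar
# On JDK 9+ the rough equivalent is unified logging: -Xlog:gc+remset=trace
```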

Why does using a parallel stream in a static initializer lead to an unstable deadlock?

戏子无情 submitted on 2019-12-18 10:59:27
Question: CAUTION: it is not a duplicate, please read the topic carefully. https://stackoverflow.com/users/3448419/apangin quote: "The real question is why the code sometimes works when it should not." The issue reproduces even without lambdas. This makes me think there might be a JVM bug. In the comments of https://stackoverflow.com/a/53709217/2674303 I tried to find out why the code behaves differently from one start to another, and the participants of that discussion gave me a piece of advice to create a
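The mechanism underneath this deadlock is the per-class initialization lock (JVMS §5.5). A hedged, simplified sketch (not the original code): any other thread that touches a class while its static initializer is running blocks until initialization finishes; with a parallel stream inside the initializer, a common ForkJoinPool worker plays the role of the blocked helper thread while the initializing thread waits for it, and the two deadlock.

```java
public class Main {
    static class Holder {
        static int value;
        static {
            // A thread started from <clinit> that touches Holder again
            // blocks on Holder's initialization lock, held by this thread.
            Thread t = new Thread(() -> System.out.println("helper saw " + Holder.value));
            t.setDaemon(true);
            t.start();
            try {
                t.join(500);  // times out: t cannot proceed until we finish
            } catch (InterruptedException ignored) { }
            System.out.println("helper alive during init: " + t.isAlive());
            value = 42;
        }
    }

    public static void main(String[] args) {
        System.out.println("value=" + Holder.value);  // triggers initialization
    }
}
```

Here the initializer merely waits with a timeout, so the program completes; if it waited for the helper unconditionally (as a parallel stream terminal operation does for its workers), the initialization would never finish.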

High cost of polymorphism in Java Hotspot server

牧云@^-^@ submitted on 2019-12-18 04:59:08
Question: When I run my timing test program on the HotSpot client VM, I get consistent behavior. However, when I run it on the HotSpot server VM, I get unexpected results. Essentially, the cost of polymorphism is unacceptably high in certain situations that I've tried to duplicate below. Is this a known issue/bug with the HotSpot server VM, or am I doing something wrong? The test program and timings are given below: Intel i7, Windows 8 Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode) Mine2: 0.387028831 <---
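A common explanation for results like this is call-site morphism, sketched below under that assumption (the interface and class names are illustrative): HotSpot's inline caches handle up to two receiver types at a call site; once a third type is seen, the site goes "megamorphic" and dispatches through the vtable, which also blocks inlining. Timings are deliberately omitted; this only shows the shape of the two call sites.

```java
public class Main {
    interface Op { int apply(int x); }
    static final class Inc implements Op { public int apply(int x) { return x + 1; } }
    static final class Dbl implements Op { public int apply(int x) { return x * 2; } }
    static final class Neg implements Op { public int apply(int x) { return -x; } }

    static int sum(Op op, int n) {
        int s = 0;
        for (int i = 0; i < n; i++) s += op.apply(i);  // the hot call site
        return s;
    }

    public static void main(String[] args) {
        // Monomorphic: this JVM run could keep sum() seeing only Inc,
        // which the JIT can devirtualize and inline.
        int mono = sum(new Inc(), 1000);
        // Megamorphic: the same call site now sees three receiver types,
        // forcing virtual dispatch once compiled.
        int mega = sum(new Inc(), 1000) + sum(new Dbl(), 1000) + sum(new Neg(), 1000);
        System.out.println(mono + " " + mega);
    }
}
```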

How can I see the code that HotSpot generates after optimizing? [duplicate]

时光毁灭记忆、已成空白 submitted on 2019-12-18 04:11:42
Question: This question already has answers here: How to see JIT-compiled code in JVM? (7 answers) Closed 6 years ago. I'd like to have a better understanding of what optimizations HotSpot might apply to my Java code at run time. Is there a way to see the optimized code that HotSpot is using after it's been running for a while? Answer 1: You will need to start the JVM with the options -XX:+UnlockDiagnosticVMOptions and -XX:+PrintAssembly, but PrintAssembly requires the JVM to have the hsdis binary
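A sketch of the invocations the answer describes (the class and method names are placeholders; the hsdis disassembler library must be on the JVM's library path for any assembly to appear):

```shell
# Dump assembly for everything the JIT compiles:
java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly MyApp

# Restrict the dump to a single method of interest:
java -XX:+UnlockDiagnosticVMOptions \
     -XX:CompileCommand=print,com.example.MyClass::hotMethod MyApp
```

Note that -XX:+UnlockDiagnosticVMOptions must precede -XX:+PrintAssembly on the command line, since the latter is a diagnostic flag.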

What is the de-reflection optimization in the HotSpot JIT, and how is it implemented?

回眸只為那壹抹淺笑 submitted on 2019-12-18 04:01:48
Question: Watching the Towards a Universal VM presentation, I studied this slide, which lists all the optimisations that the HotSpot JIT does. In the language-specific techniques section there is "de-reflection". I tried to find some information about it across the Internet, but failed. I understand that this optimization eliminates reflection costs in some way, but I'm interested in the details. Can someone clarify this, or give some useful links? Answer 1: Yes, there is an optimization to reduce Reflection costs,
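One piece of machinery this optimization builds on is reflection "inflation": on older HotSpot/JDK versions, after a threshold of Method.invoke calls (15 by default, tunable via the sun.reflect.inflationThreshold property — both details are assumptions about older JDKs), the runtime replaces the native accessor with a generated-bytecode one that the JIT can then compile and inline through. A minimal sketch that exercises a reflective call past that threshold:

```java
import java.lang.reflect.Method;

public class Main {
    public static int twice(int x) { return 2 * x; }

    public static void main(String[] args) throws Exception {
        Method m = Main.class.getMethod("twice", int.class);
        int sum = 0;
        // Past the inflation threshold, these invocations go through a
        // spun-up bytecode MethodAccessor instead of the slow JNI path.
        for (int i = 0; i < 20; i++) {
            sum += (Integer) m.invoke(null, i);
        }
        System.out.println("sum=" + sum);  // 2 * (0 + 1 + ... + 19) = 380
    }
}
```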

What does CompileThreshold, Tier2CompileThreshold, Tier3CompileThreshold and Tier4CompileThreshold control?

一笑奈何 submitted on 2019-12-17 23:22:17
Question: HotSpot's tiered compilation uses the interpreter until a threshold of invocations (for methods) or iterations (for loops) triggers a client compilation with self-profiling. The client compilation is used until another threshold of invocations or iterations triggers a server compilation. Printing HotSpot's flags shows the following values with -XX:+TieredCompilation: intx CompileThreshold = 10000 {pd product} intx Tier2CompileThreshold = 0 {product} intx Tier3CompileThreshold = 2000
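The flag listing in the question can be reproduced with HotSpot's flag dump; a sketch of the invocation:

```shell
# Print the effective values of all JVM flags, filtered to the
# compilation thresholds relevant to tiered compilation.
java -XX:+TieredCompilation -XX:+PrintFlagsFinal -version | grep -i compilethreshold
```

Note that with tiered compilation enabled, CompileThreshold itself is effectively unused; the TierXCompileThreshold family (together with the TierXInvocationThreshold and TierXBackEdgeThreshold flags) governs the tier transitions.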

Adjusting GC Overhead Exceeded parameters

▼魔方 西西 submitted on 2019-12-17 20:51:39
Question: I need my Oracle HotSpot VM to throw java.lang.OutOfMemoryError: GC overhead limit exceeded much sooner than with the default parameters of UseGCOverheadLimit. By default, the OOME occurs when more than 98% of the time is spent in GC and less than 2% of the heap is recovered (described at http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html#par_gc.oom). For instance, I need my JVM to throw the OOME when more than 20% of the time is spent in GC. Unfortunately, the -XX
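A hedged sketch of the knobs involved: the two percentages in the overhead check correspond to the HotSpot flags GCTimeLimit (default 98) and GCHeapFreeLimit (default 2), honored by the throughput (Parallel) collector. Lowering GCTimeLimit as below would, under that assumption, request the OOME once more than 20% of time goes to GC:

```shell
# Trigger "GC overhead limit exceeded" when >20% of time is spent
# in GC while <2% of the heap is recovered (Parallel collector).
java -XX:+UseParallelGC \
     -XX:GCTimeLimit=20 \
     -XX:GCHeapFreeLimit=2 \
     -jar app.jar
```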

Disable Java JIT for a specific method/class?

牧云@^-^@ submitted on 2019-12-17 19:39:29
Question: I'm having an issue in my Java application where the JIT breaks the code. If I disable the JIT, everything works fine, but runs 10-20x slower. Is there any way to disable the JIT for a specific method or class? Edit: I'm using Ubuntu 10.10 and get the same results with both: OpenJDK Runtime Environment (IcedTea6 1.9) (6b20-1.9-0ubuntu1) OpenJDK 64-Bit Server VM (build 17.0-b16, mixed mode) and: Java(TM) SE Runtime Environment (build 1.6.0_16-b01) Java HotSpot(TM) 64-Bit Server VM (build 14.2
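The standard answer is the CompileCommand flag, sketched here with placeholder class and method names; excluded methods stay interpreted while the rest of the application is still JIT-compiled:

```shell
# Exclude a single method from JIT compilation (slash form,
# accepted by older HotSpot builds like the ones in the question):
java -XX:CompileCommand=exclude,com/example/MyClass.myMethod MyApp

# Newer HotSpot versions also accept the dotted :: form:
java -XX:CompileCommand=exclude,com.example.MyClass::myMethod MyApp
```

That said, a program whose behavior changes under the JIT usually has a data race or other memory-model bug; excluding the method hides the symptom rather than fixing it.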

The timing of String literal loading into the StringTable in the Java HotSpot VM

时间秒杀一切 submitted on 2019-12-17 16:53:21
Question: This question came up while I was learning the java.lang.String API. I found an article in Chinese, "Java 中new String("字面量") 中 "字面量" 是何时进入字符串常量池的?" (roughly: when does the "literal" in new String("literal") enter the string constant pool?). It says CONSTANT_String resolution is lazy in the HotSpot VM, so a String literal is not loaded into the StringTable until it is used. And I found some relevant wording: JVMS Chapter 5.4 (Linking) says: For example, a Java Virtual Machine implementation may choose to resolve each symbolic reference in a class or interface individually when it is used ("lazy" or "late"
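A small, observable consequence of the StringTable (though it cannot distinguish eager from lazy CONSTANT_String resolution, which is a VM-internal detail): a literal and the interned form of an equal constructed string are the same object, while new String(...) always allocates a fresh one.

```java
public class Main {
    public static void main(String[] args) {
        String literal = "hotspot";                 // lives in the StringTable
        String constructed = new String("hotspot"); // a fresh heap object

        // Reference comparisons, deliberately, to probe identity:
        System.out.println("sameObject=" + (literal == constructed));
        System.out.println("interned=" + (literal == constructed.intern()));
    }
}
```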