--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/src/share/vm/runtime/biasedLocking.hpp	Wed Apr 27 01:25:04 2016 +0800
@@ -0,0 +1,195 @@
+/*
+ * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
+ * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+ *
+ * This code is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 only, as
+ * published by the Free Software Foundation.
+ *
+ * This code is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * version 2 for more details (a copy is included in the LICENSE file that
+ * accompanied this code).
+ *
+ * You should have received a copy of the GNU General Public License version
+ * 2 along with this work; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+ * or visit www.oracle.com if you need additional information or have any
+ * questions.
+ *
+ */
+
+#ifndef SHARE_VM_RUNTIME_BIASEDLOCKING_HPP
+#define SHARE_VM_RUNTIME_BIASEDLOCKING_HPP
+
+#include "runtime/handles.hpp"
+#include "utilities/growableArray.hpp"
+
+// This class describes operations to implement Store-Free Biased
+// Locking. The high-level properties of the scheme are similar to
+// IBM's lock reservation, Dice-Moir-Scherer QR locks, and other biased
+// locking mechanisms. The principal difference is in the handling of
+// recursive locking, which is how this technique achieves a more
+// efficient fast path than these other schemes.
+//
+// The basic observation is that in HotSpot's current fast locking
+// scheme, recursive locking (in the fast path) causes no update to
+// the object header. The recursion is described simply by stack
+// records containing a specific value (NULL). Only the last unlock by
+// a given thread causes an update to the object header.
+//
+// This observation, coupled with the fact that HotSpot only compiles
+// methods for which monitor matching is obeyed (and which therefore
+// cannot throw IllegalMonitorStateException), implies that we can
+// completely eliminate modifications to the object header for
+// recursive locking in compiled code, and perform similar recursion
+// checks and throwing of IllegalMonitorStateException in the
+// interpreter with little or no impact on the performance of the fast
+// path.
+//
+// The basic algorithm is as follows (see below for more details). A
+// pattern in the low three bits is reserved in the object header to
+// indicate whether biasing of a given object's lock is currently
+// being done or is allowed at all. If the bias pattern is present,
+// the contents of the rest of the header are either the JavaThread*
+// of the thread to which the lock is biased, or NULL, indicating that
+// the lock is "anonymously biased". The first thread which locks an
+// anonymously biased object biases the lock toward that thread. If
+// another thread subsequently attempts to lock the same object, the
+// bias is revoked.
+//
+// Because there are no updates to the object header at all during
+// recursive locking while the lock is biased, the biased lock entry
+// code is simply a test of the object header's value. If this test
+// succeeds, the lock has been acquired by the thread. If this test
+// fails, a bit test is done to see whether the bias bit is still
+// set. If not, we fall back to HotSpot's original CAS-based locking
+// scheme. If it is set, we attempt to CAS in a bias toward this
+// thread. The latter operation is expected to be the rarest operation
+// performed on these locks. We optimistically expect the biased lock
+// entry to hit most of the time, and want the CAS-based fallthrough
+// to occur quickly in the situations where the bias has been revoked.
+//
+// Revocation of the lock's bias is fairly straightforward. We want to
+// restore the object's header and stack-based BasicObjectLocks and
+// BasicLocks to the state they would have been in had the object been
+// locked by HotSpot's usual fast locking scheme. To do this, we bring
+// the system to a safepoint and walk the stack of the thread toward
+// which the lock is biased. We find all of the lock records on the
+// stack corresponding to this object, in particular the first /
+// "highest" record. We fill in the highest lock record with the
+// object's displaced header (which is a well-known value given that
+// we don't maintain an identity hash nor age bits for the object
+// while it's in the biased state) and all other lock records with 0,
+// the value for recursive locks. When the safepoint is released, the
+// formerly-biased thread and all other threads revert back to
+// HotSpot's CAS-based locking.
+//
+// This scheme cannot handle transfers of biases of single objects
+// from thread to thread efficiently, but it can handle bulk transfers
+// of such biases, which is a usage pattern showing up in some
+// applications and benchmarks. We implement "bulk rebias" and "bulk
+// revoke" operations using a "bias epoch" on a per-data-type basis.
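The fast-path entry sequence described in the comments above (test the header, check the bias bit, then CAS in a bias) can be sketched with a deliberately simplified model. The mark-word layout below is invented for illustration: only the low-three-bit bias pattern and an owner-thread field are modeled, while the age, epoch, and hash fields of the real HotSpot mark word are omitted, and the CAS step only covers the anonymously-biased case.

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical, simplified mark word: low three bits hold the lock state
// (0x5 = "biased/biasable"), remaining bits hold the owning thread pointer,
// or 0 for "anonymously biased".
static const uintptr_t kBiasedPattern = 0x5;
static const uintptr_t kPatternMask   = 0x7;

struct Object {
  std::atomic<uintptr_t> mark;
};

// Returns true if the biased fast path acquires the lock; false means we
// must fall through to CAS-based locking or bias revocation (not shown).
bool biased_lock_enter(Object* obj, uintptr_t self_thread) {
  uintptr_t mark = obj->mark.load(std::memory_order_relaxed);

  // 1. Already biased toward us: entry is just a load and compare,
  //    with no store at all (this is the common, "store-free" case).
  if (mark == (self_thread | kBiasedPattern)) {
    return true;
  }
  // 2. Bias bit no longer set: fall back to the original CAS scheme.
  if ((mark & kPatternMask) != kBiasedPattern) {
    return false;
  }
  // 3. Anonymously biased: attempt to CAS in a bias toward this thread.
  //    If another thread already owns the bias, the CAS fails and the
  //    slow path (revocation) takes over.
  uintptr_t anon = kBiasedPattern;
  return obj->mark.compare_exchange_strong(anon,
                                           self_thread | kBiasedPattern);
}
```

In this model, the first lock by a thread pays one CAS; every subsequent (including recursive) lock by the same thread is a single load and compare, which is the property the comment block is describing.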
+// If too many bias revocations are occurring for a particular data
+// type, the bias epoch for the data type is incremented at a
+// safepoint, effectively meaning that all previous biases are
+// invalid. The fast path locking case checks for an invalid epoch in
+// the object header and attempts to rebias the object with a CAS if
+// found, avoiding safepoints or bulk heap sweeps (the latter of which
+// was used in a prior version of this algorithm and did not scale
+// well). If too many bias revocations persist, biasing is completely
+// disabled for the data type by resetting the prototype header to the
+// unbiased markOop. The fast-path locking code checks to see whether
+// the instance's bias pattern differs from the prototype header's and
+// causes the bias to be revoked without reaching a safepoint or,
+// again, a bulk heap sweep.
+
+// Biased locking counters
+class BiasedLockingCounters VALUE_OBJ_CLASS_SPEC {
+ private:
+  int _total_entry_count;
+  int _biased_lock_entry_count;
+  int _anonymously_biased_lock_entry_count;
+  int _rebiased_lock_entry_count;
+  int _revoked_lock_entry_count;
+  int _fast_path_entry_count;
+  int _slow_path_entry_count;
+
+ public:
+  BiasedLockingCounters() :
+    _total_entry_count(0),
+    _biased_lock_entry_count(0),
+    _anonymously_biased_lock_entry_count(0),
+    _rebiased_lock_entry_count(0),
+    _revoked_lock_entry_count(0),
+    _fast_path_entry_count(0),
+    _slow_path_entry_count(0) {}
+
+  int slow_path_entry_count(); // Compute this field if necessary
+
+  int* total_entry_count_addr()                   { return &_total_entry_count; }
+  int* biased_lock_entry_count_addr()             { return &_biased_lock_entry_count; }
+  int* anonymously_biased_lock_entry_count_addr() { return &_anonymously_biased_lock_entry_count; }
+  int* rebiased_lock_entry_count_addr()           { return &_rebiased_lock_entry_count; }
+  int* revoked_lock_entry_count_addr()            { return &_revoked_lock_entry_count; }
+  int* fast_path_entry_count_addr()               { return &_fast_path_entry_count; }
+  int* slow_path_entry_count_addr()               { return &_slow_path_entry_count; }
+
+  bool nonzero() { return _total_entry_count > 0; }
+
+  void print_on(outputStream* st);
+  void print() { print_on(tty); }
+};
+
+
+class BiasedLocking : AllStatic {
+private:
+  static BiasedLockingCounters _counters;
+
+public:
+  static int* total_entry_count_addr();
+  static int* biased_lock_entry_count_addr();
+  static int* anonymously_biased_lock_entry_count_addr();
+  static int* rebiased_lock_entry_count_addr();
+  static int* revoked_lock_entry_count_addr();
+  static int* fast_path_entry_count_addr();
+  static int* slow_path_entry_count_addr();
+
+  enum Condition {
+    NOT_BIASED = 1,
+    BIAS_REVOKED = 2,
+    BIAS_REVOKED_AND_REBIASED = 3
+  };
+
+  // This initialization routine should only be called once and
+  // schedules a PeriodicTask to turn on biased locking a few seconds
+  // into the VM run to avoid startup time regressions
+  static void init();
+
+  // This provides a global switch for leaving biased locking disabled
+  // for the first part of a run and enabling it later
+  static bool enabled();
+
+  // This should be called by JavaThreads to revoke the bias of an object
+  static Condition revoke_and_rebias(Handle obj, bool attempt_rebias, TRAPS);
+
+  // These do not allow rebiasing; they are used by deoptimization to
+  // ensure that monitors on the stack can be migrated
+  static void revoke(GrowableArray<Handle>* objs);
+  static void revoke_at_safepoint(Handle obj);
+  static void revoke_at_safepoint(GrowableArray<Handle>* objs);
+
+  static void print_counters() { _counters.print(); }
+  static BiasedLockingCounters* counters() { return &_counters; }
+
+  // These routines are GC-related and should not be called by end
+  // users. GCs which do not do preservation of mark words do not need
+  // to call these routines.
+  static void preserve_marks();
+  static void restore_marks();
+};
+
+#endif // SHARE_VM_RUNTIME_BIASEDLOCKING_HPP
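The per-data-type "bias epoch" mechanism described in the header's comment block can be sketched as follows. The field layout here is invented for illustration (the real implementation packs the epoch into the mark word next to the bias and age bits, and the "biasable" state is carried by the class's prototype header rather than a boolean), but the decision logic mirrors the description: a stale epoch means the bias is invalid and may be re-acquired with a CAS, while a non-biasable type falls straight through to CAS-based locking.

```cpp
#include <cstdint>

// Hypothetical sketch of the epoch check; names and layout are invented.
struct Klass {
  uint32_t bias_epoch;  // bumped at a safepoint on "bulk rebias"
  bool biasable;        // cleared on "bulk revoke" (prototype header reset)
};

struct ObjHeader {
  uint32_t bias_epoch;  // epoch captured when this object's bias was installed
  uintptr_t owner;      // thread toward which the lock is biased
};

enum FastPathOutcome {
  BIAS_VALID,          // bias held by us: lock acquired with no store
  TRY_REBIAS,          // epoch is stale: attempt to CAS in a fresh bias
  REVOKE_OR_CAS_LOCK   // fall through to revocation / CAS-based locking
};

FastPathOutcome check_bias(const ObjHeader& h, const Klass& k,
                           uintptr_t self_thread) {
  if (!k.biasable) {
    // Bulk revoke disabled biasing for this type entirely; no safepoint
    // or heap sweep is needed to notice this on the fast path.
    return REVOKE_OR_CAS_LOCK;
  }
  if (h.bias_epoch != k.bias_epoch) {
    // Bulk rebias invalidated all previous biases of this type at once.
    return TRY_REBIAS;
  }
  return (h.owner == self_thread) ? BIAS_VALID : REVOKE_OR_CAS_LOCK;
}
```

The point of the design is visible in the sketch: incrementing `k.bias_epoch` at one safepoint invalidates every outstanding bias of that type without touching any object headers, replacing the bulk heap sweep the comments say an earlier version of the algorithm used.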