path: root/sys/kern/kern_mutex.c
Commit message [Author, Date, Files, Lines]
...
* Correct the predicates on which lockstat:::{thread,spin}-spin fire. [Mark Johnston, 2017-07-31, 1 file, -2/+2]
  In particular, they should fire only if the lock was owned by another
  thread when we first attempted to acquire that lock.
  MFC after: 1 week
  Notes: svn path=/head/; revision=321744
* Fix the !TD_IS_IDLETHREAD(curthread) locking assertions. [Mark Johnston, 2017-06-19, 1 file, -2/+3]
  Most of the lock slowpaths assert that the calling thread isn't an idle
  thread. However, this may not be true if the system has panicked, and in
  some cases the assertion appears before a SCHEDULER_STOPPED() check.
  MFC after: 3 days
  Sponsored by: Dell EMC Isilon
  Notes: svn path=/head/; revision=320124
* mtx: fix whitespace damage in _mtx_trylock_flags_ [Mateusz Guzik, 2017-05-30, 1 file, -4/+4]
  MFC after: 3 days
  Notes: svn path=/head/; revision=319167
* KDTRACE_HOOKS isn't guaranteed to be defined. [Warner Losh, 2017-02-24, 1 file, -3/+3]
  Change the check to test whether it is defined rather than whether it
  is non-zero.
  Sponsored by: Netflix, Inc
  Notes: svn path=/head/; revision=314187
* mtx: microoptimize lockstat handling in spin mutexes and thread lock [Mateusz Guzik, 2017-02-23, 1 file, -19/+44]
  While here, make the code compilable on kernels with LOCK_PROFILING but
  without KDTRACE_HOOKS.
  Notes: svn path=/head/; revision=314185
* mtx: fix spin mutexes interaction with failed fcmpset [Mateusz Guzik, 2017-02-20, 1 file, -0/+8]
  While doing so move recursion support down to the fallback routine.
  Notes: svn path=/head/; revision=313996
* locks: make trylock routines check for 'unowned' value [Mateusz Guzik, 2017-02-19, 1 file, -6/+11]
  Since fcmpset can fail without lock contention e.g. on arm, it was
  possible to get spurious failures when the caller was expecting the
  primitive to succeed.
  Reported by: mmel
  Notes: svn path=/head/; revision=313944
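  A minimal sketch of the idea (illustrative, not the kernel's actual
  helper; it assumes <machine/atomic.h> and MTX_UNOWNED from
  <sys/mutex.h>): report trylock failure only once the lock word is
  observed to hold something other than the unowned value, so a bare
  fcmpset failure on an ll/sc machine is retried instead of surfaced.

      /*
       * Sketch: distinguish spurious ll/sc fcmpset failures from real
       * contention by inspecting the value fcmpset reports back.
       */
      static inline int
      mtx_try_acquire_sketch(struct mtx *m, uintptr_t tid)
      {
              uintptr_t v;

              v = MTX_UNOWNED;
              for (;;) {
                      if (atomic_fcmpset_acq_ptr(&m->mtx_lock, &v, tid))
                              return (1);     /* acquired */
                      if (v != MTX_UNOWNED)
                              return (0);     /* genuinely owned */
                      /* spurious failure: v still reads unowned, retry */
              }
      }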
* locks: clean up trylock primitives [Mateusz Guzik, 2017-02-18, 1 file, -10/+22]
  In particular this reduces accesses of the lock itself.
  Notes: svn path=/head/; revision=313928
* mtx: plug the 'opts' argument when not used [Mateusz Guzik, 2017-02-18, 1 file, -2/+6]
  Notes: svn path=/head/; revision=313908
* mtx: get rid of file/line args from slow paths if they are unused [Mateusz Guzik, 2017-02-17, 1 file, -1/+1]
  This documents changes which went in by accident in r313877. On most
  production kernels both said parameters are zeroed and nothing reads
  them in either __mtx_lock_sleep or __mtx_unlock_sleep. Thus this change
  stops passing them from internal consumers where this is the case.
  Kernel modules use the _flags variants, which are not affected KBI-wise.
  Notes: svn path=/head/; revision=313878
* mtx: restrict r313875 to kernels without LOCK_PROFILING [Mateusz Guzik, 2017-02-17, 1 file, -0/+14]
  Notes: svn path=/head/; revision=313877
* mtx: microoptimize lockstat handling in __mtx_lock_sleep [Mateusz Guzik, 2017-02-17, 1 file, -4/+9]
  This saves a function call and multiple branches after the lock is
  acquired.
  Notes: svn path=/head/; revision=313875
* locks: let primitives for modules unlock without always going to the slow path [Mateusz Guzik, 2017-02-17, 1 file, -0/+4]
  The slow path is only needed if LOCK_PROFILING is enabled, since it
  always has to check whether the lock is about to be released, which
  requires an otherwise avoidable read if the option is not specified.
  Notes: svn path=/head/; revision=313855
* locks: remove SCHEDULER_STOPPED checks from primitives for modules [Mateusz Guzik, 2017-02-17, 1 file, -6/+0]
  They all fall back to the slow path if necessary, and the check is
  already there. This means a panicked kernel executing code from modules
  will be able to succeed in doing the actual lock/unlock, but this was
  already the case for core code, which has said primitives inlined.
  Notes: svn path=/head/; revision=313853
* locks: tidy up unlock fallback paths [Mateusz Guzik, 2017-02-09, 1 file, -7/+10]
  Update comments to note these functions are reachable if lockstat is
  enabled. Check if the lock has any bits set before attempting unlock,
  which saves an unnecessary atomic operation.
  Notes: svn path=/head/; revision=313467
* locks: change backoff to exponential [Mateusz Guzik, 2017-02-07, 1 file, -45/+9]
  The previous implementation used a random factor to spread readers and
  reduce the chance of starvation, but this visibly reduces the
  effectiveness of the mechanism. Switch to the more traditional
  exponential variant. Try to limit starvation by imposing an upper limit
  of spins, after which a thread spins half as much as other threads get.
  Note the mechanism is turned off by default.
  Reviewed by: kib (previous version)
  Notes: svn path=/head/; revision=313386
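  The shape of capped exponential backoff can be sketched as follows.
  This is an illustration of the technique only, not the kernel's actual
  lock_delay() machinery; the function name and the cap parameter are
  made up, and cpu_spinwait() is the usual CPU pause hint:

      /*
       * Sketch of capped exponential backoff: double the pause between
       * lock probes on every failed round, up to a fixed ceiling.
       */
      static void
      spin_with_backoff(volatile uintptr_t *lockp, uintptr_t unowned,
          u_int cap)
      {
              u_int delay, i;

              delay = 1;
              while (*lockp != unowned) {
                      for (i = 0; i < delay; i++)
                              cpu_spinwait();     /* PAUSE on x86 */
                      if (delay < cap)
                              delay <<= 1;        /* exponential growth */
              }
      }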
* locks: fix recursion support after recent changes [Mateusz Guzik, 2017-02-06, 1 file, -0/+2]
  When a relevant lockstat probe is enabled the fallback primitive is
  called with a constant signifying a free lock. This works fine for
  typical cases but breaks with recursion, since it checks if the passed
  value is that of the executing thread. Read the value if necessary.
  Notes: svn path=/head/; revision=313335
* mtx: fixup r313278, the assignment was supposed to go inside the loop [Mateusz Guzik, 2017-02-05, 1 file, -1/+1]
  Notes: svn path=/head/; revision=313279
* mtx: fix up _mtx_obtain_lock_fetch usage in thread lock [Mateusz Guzik, 2017-02-05, 1 file, -0/+1]
  Since _mtx_obtain_lock_fetch no longer sets the argument to MTX_UNOWNED,
  callers have to do it on their own.
  Notes: svn path=/head/; revision=313278
* mtx: move lockstat handling out of inline primitives [Mateusz Guzik, 2017-02-05, 1 file, -8/+12]
  Lockstat requires checking if it is enabled and, if so, calling a
  6-argument function. Further, determining whether to call it on unlock
  requires pre-reading the lock value.

  This is problematic in at least 3 ways:
  - more branches in the hot path than necessary
  - additional cacheline ping pong under contention
  - bigger code

  Instead, check first if lockstat handling is necessary and if so, just
  fall back to regular locking routines. For this purpose a new macro is
  introduced (LOCKSTAT_PROFILE_ENABLED).

  LOCK_PROFILING uninlines all primitives. Fold the current inline lock
  variant into _mtx_lock_flags to retain the support. With this change the
  inline variants are not used when LOCK_PROFILING is defined and thus can
  ignore its existence.

  This results in:
       text    data     bss      dec      hex  filename
   22259667 1303208 4994976 28557851  1b3c21b  kernel.orig
   21797315 1303208 4994976 28095499  1acb40b  kernel.patched

  i.e. about 3% reduction in text size. A remaining action is to remove
  spurious arguments for internal kernel consumers.
  Notes: svn path=/head/; revision=313275
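  The resulting fast-path shape can be sketched like this. It is a
  simplified illustration: mtx_lock_slowpath() is a stand-in for the real
  __mtx_lock_sleep(), whose exact signature varied across these
  revisions, while LOCKSTAT_PROFILE_ENABLED and the adaptive__acquire
  probe name are the real ones introduced here:

      /*
       * Sketch of the post-change inline fast path: if the relevant
       * lockstat acquire probe is enabled, skip the inline attempt
       * entirely and let the uninlined slow path both take the lock
       * and fire the probe.
       */
      static inline void
      mtx_lock_sketch(struct mtx *m)
      {
              uintptr_t v = MTX_UNOWNED;

              if (LOCKSTAT_PROFILE_ENABLED(adaptive__acquire) ||
                  !atomic_fcmpset_acq_ptr(&m->mtx_lock, &v,
                  (uintptr_t)curthread))
                      mtx_lock_slowpath(m, v);    /* illustrative name */
      }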
* mtx: switch to fcmpset [Mateusz Guzik, 2017-02-05, 1 file, -17/+15]
  The found value is passed to locking routines in order to reduce
  cacheline accesses. mtx_unlock grows an explicit check for regular
  unlock. On ll/sc architectures the routine can fail even if the lock
  could have been handled by the inline primitive.
  Discussed with: jhb
  Tested by: pho (previous version)
  Notes: svn path=/head/; revision=313269
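  The unlock side mentioned above can be sketched as follows: a regular
  unlock is an fcmpset from the current thread's tid back to MTX_UNOWNED,
  and anything else (recursion, waiter bits) goes to a slow path. On
  failure, fcmpset has already fetched the actual lock word into 'v', so
  the slow path needs no second read. mtx_unlock_slowpath() is again an
  illustrative stand-in for the real fallback:

      /*
       * Sketch of the unlock fast path: 'v' starts as the value a
       * regular unlock expects (our own tid); on fcmpset failure it
       * holds the found lock word, saving the slow path a re-read.
       */
      static inline void
      mtx_unlock_sketch(struct mtx *m)
      {
              uintptr_t v = (uintptr_t)curthread;

              if (!atomic_fcmpset_rel_ptr(&m->mtx_lock, &v, MTX_UNOWNED))
                      mtx_unlock_slowpath(m, v);  /* recursion or waiters */
      }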
* Sprinkle __read_mostly on backoff and lock profiling code. [Mateusz Guzik, 2017-01-27, 1 file, -2/+2]
  MFC after: 1 month
  Notes: svn path=/head/; revision=312890
* mtx: plug open-coded mtx_lock access missed in r311172 [Mateusz Guzik, 2017-01-04, 1 file, -1/+1]
  Notes: svn path=/head/; revision=311226
* Reduce lock accesses in thread lock similarly to r311172. [Mateusz Guzik, 2017-01-03, 1 file, -6/+12]
  Notes: svn path=/head/; revision=311194
* mtx: reduce lock accesses [Mateusz Guzik, 2017-01-03, 1 file, -39/+50]
  Instead of spuriously re-reading the lock value, read it once. This
  change also has a side effect of fixing a performance bug: on a failed
  _mtx_obtain_lock, it was possible that the re-read would find the lock
  unowned, yet the primitive would still make a trip through the
  turnstile code. This is a diff reduction against a variant which uses
  atomic_fcmpset.
  Discussed with: jhb (previous version)
  Tested by: pho (previous version)
  Notes: svn path=/head/; revision=311172
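  A sketch of the read-once pattern, with the caveat that the real loop
  also handles adaptive spinning and turnstile blocking; _mtx_obtain_lock
  is the kernel's cmpset wrapper, the rest is illustrative:

      /*
       * Sketch: snapshot the lock word once per iteration and base
       * every decision on that snapshot instead of re-dereferencing
       * the lock.
       */
      static void
      mtx_lock_loop_sketch(struct mtx *m, uintptr_t tid)
      {
              uintptr_t v;

              for (;;) {
                      v = m->mtx_lock;        /* the one read */
                      if (v == MTX_UNOWNED) {
                              if (_mtx_obtain_lock(m, tid))
                                      return;
                              continue;       /* lost the race; re-read */
                      }
                      /*
                       * 'v' identifies the owner; the real code decides
                       * here whether to spin or block without touching
                       * m->mtx_lock again.
                       */
                      cpu_spinwait();
              }
      }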
* Use a consistent snapshot of the lock state in owner_mtx(). [Mark Johnston, 2016-12-10, 1 file, -3/+6]
  MFC after: 2 weeks
  Notes: svn path=/head/; revision=309784
* Make no assertions about mutex state when the scheduler is stopped. [Eric van Gyzen, 2016-09-26, 1 file, -1/+1]
  This changes the assert path to match the lock and unlock paths.
  MFC after: 1 week
  Sponsored by: Dell EMC
  Notes: svn path=/head/; revision=306346
* locks: add backoff for spin mutexes and thread lock [Mateusz Guzik, 2016-09-09, 1 file, -13/+50]
  Reviewed by: jhb
  Notes: svn path=/head/; revision=305671
* locks: fix compilation for KDTRACE_HOOKS && !ADAPTIVE_* case [Mateusz Guzik, 2016-08-02, 1 file, -1/+3]
  Reported by: Michael Butler <imb protected-networks.net>
  Notes: svn path=/head/; revision=303655
* Implement trivial backoff for locking primitives. [Mateusz Guzik, 2016-08-01, 1 file, -9/+42]
  All current spinning loops retry an atomic op the first chance they
  get, which leads to performance degradation under load. One classic
  solution to the problem consists of delaying the test to an extent.
  This implementation has a trivial linear increment and a random factor
  for each attempt.

  For simplicity, this first-touch implementation only modifies spinning
  loops where the lock owner is running. Spin mutexes and thread lock
  were not modified.

  Current parameters are autotuned on boot based on mp_cpus. Autotune
  factors are very conservative and are subject to change later.
  Reviewed by: kib, jhb
  Tested by: pho
  MFC after: 1 week
  Notes: svn path=/head/; revision=303643
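  The "linear increment plus random factor" idea can be sketched like so.
  The constants and the exact formula are invented for illustration, and
  random() merely stands in for whatever cheap PRNG is appropriate; only
  the overall shape (linear base, capped, randomly smeared) reflects the
  commit:

      /*
       * Sketch: the per-attempt delay grows linearly and is then
       * smeared by a random component so that competing spinners do
       * not retry in lockstep.
       */
      static void
      backoff_sketch(u_int attempt, u_int cap)
      {
              u_int base, spins, i;

              base = attempt * 64;            /* trivial linear increment */
              if (base > cap)
                      base = cap;             /* autotuned ceiling */
              spins = base / 2 + random() % (base / 2 + 1);
              for (i = 0; i < spins; i++)
                      cpu_spinwait();
      }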
* locks: change sleep_cnt and spin_cnt types to u_int [Mateusz Guzik, 2016-07-31, 1 file, -2/+2]
  Both variables are uint64_t, but they only count spins or sleeps. All
  reasonable values which we can get here comfortably fit in the 32-bit
  range.
  Suggested by: kib
  MFC after: 1 week
  Notes: svn path=/head/; revision=303584
* Implement mtx_trylock_spin(9). [Konstantin Belousov, 2016-07-23, 1 file, -0/+28]
  Discussed with: bde
  Reviewed by: jhb
  Sponsored by: The FreeBSD Foundation
  MFC after: 1 week
  Differential revision: https://reviews.freebsd.org/D7192
  Notes: svn path=/head/; revision=303211
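  A usage sketch for the new interface; the mutex and the counter are
  illustrative, and the mutex is assumed to have been set up elsewhere
  with mtx_init(..., MTX_SPIN). mtx_trylock_spin() returns nonzero on
  success, like mtx_trylock():

      /*
       * Sketch: try to take a spin mutex without busy-waiting, and
       * report EBUSY if it is currently held by someone else.
       */
      static struct mtx stats_mtx;    /* initialized with MTX_SPIN */
      static u_int stats_counter;     /* protected by stats_mtx */

      static int
      stats_try_read(u_int *out)
      {
              if (!mtx_trylock_spin(&stats_mtx))
                      return (EBUSY);     /* held; caller retries later */
              *out = stats_counter;
              mtx_unlock_spin(&stats_mtx);
              return (0);
      }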
* Ensure that spinlock sections are balanced even after a panic. [Mark Johnston, 2016-07-05, 1 file, -1/+8]
  vpanic() uses spinlock_enter() to disable interrupts before dumping
  core. However, when the scheduler is stopped and INVARIANTS is not
  configured, thread_lock() does not acquire a spinlock section, while
  thread_unlock() releases one. This can result in interrupts staying
  enabled while the kernel dumps core, complicating post-mortem analysis
  of the crash.
  Approved by: re (gjb)
  MFC after: 1 week
  Sponsored by: EMC / Isilon Storage Division
  Notes: svn path=/head/; revision=302346
* Microoptimize locking primitives by avoiding unnecessary atomic ops. [Mateusz Guzik, 2016-06-01, 1 file, -4/+9]
  Inline versions of the primitives do an atomic op and, if it fails,
  fall back to the actual primitives, which immediately retry the atomic
  op. The obvious optimisation is to check if the lock is free and only
  then proceed to do the atomic op.
  Reviewed by: jhb, vangyzen
  Notes: svn path=/head/; revision=301157
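  This is the classic test-and-test-and-set pattern; a minimal sketch of
  the fallback's acquire attempt (function name illustrative):

      /*
       * Sketch: probe the lock word with a plain (cheap) read first
       * and only issue the locked atomic op when it can plausibly
       * succeed, sparing the bus a doomed atomic under contention.
       */
      static inline int
      mtx_obtain_sketch(struct mtx *m, uintptr_t tid)
      {
              if (m->mtx_lock == MTX_UNOWNED &&
                  atomic_cmpset_acq_ptr(&m->mtx_lock, MTX_UNOWNED, tid))
                      return (1);
              return (0);     /* contended; stay in the slow path */
      }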
* Remove the MUTEX_DEBUG kernel option. [Mark Johnston, 2016-05-18, 1 file, -36/+0]
  It has no counterpart among the other lock primitives and has been a
  no-op for years. Mutex consistency checks are generally done whenever
  INVARIANTS is enabled.
  Notes: svn path=/head/; revision=300106
* Guard the lockstat:::thread-spin probe with KDTRACE_HOOKS. [Mark Johnston, 2016-05-18, 1 file, -0/+2]
  X-MFC-With: r300103
  Notes: svn path=/head/; revision=300104
* lockstat:::thread-spin should only fire after spinning for the lock. [Mark Johnston, 2016-05-18, 1 file, -1/+2]
  MFC after: 1 week
  Notes: svn path=/head/; revision=300103
* Don't modify curthread->td_locks unless INVARIANTS is enabled. [Mark Johnston, 2015-08-02, 1 file, -4/+4]
  This field is only used in a KASSERT that verifies that no locks are
  held when returning to user mode. Moreover, the td_locks accounting is
  only correct when LOCK_DEBUG > 0, which is implied by INVARIANTS.
  Reviewed by: jhb
  MFC after: 1 week
  Differential Revision: https://reviews.freebsd.org/D3205
  Notes: svn path=/head/; revision=286166
* Implement the lockstat provider using SDT(9) instead of the custom provider in lockstat.ko. [Mark Johnston, 2015-07-19, 1 file, -10/+10]
  This means that lockstat probes now have typed arguments and will
  utilize SDT probe hot-patching support when it arrives.
  Reviewed by: gnn
  Differential Revision: https://reviews.freebsd.org/D2993
  Notes: svn path=/head/; revision=285703
* Fix the !KDTRACE_HOOKS build. [Mark Johnston, 2015-07-18, 1 file, -0/+2]
  X-MFC-With: r285664
  Notes: svn path=/head/; revision=285667
* Pass the lock object to lockstat_nsecs() and return immediately if LO_NOPROFILE is set. [Mark Johnston, 2015-07-18, 1 file, -9/+10]
  Some timecounter handlers acquire a spin mutex, and we don't want to
  recurse if lockstat probes are enabled.
  PR: 201642
  Reviewed by: avg
  MFC after: 3 days
  Notes: svn path=/head/; revision=285664
* several lockstat improvements [Andriy Gapon, 2015-06-12, 1 file, -9/+25]
  0. For spin events report time spent spinning, not a loop count. While
     a loop count is much easier and cheaper to obtain, it is hard to
     reason about the reported numbers, especially for adaptive locks
     where both spinning and sleeping can happen. So, it's better to
     compare apples and apples.
  1. Teach lockstat about FreeBSD rw locks. This is done in part by
     changing the corresponding probes and in part by changing what
     probes lockstat should expect.
  2. Teach lockstat that rw locks are adaptive and can spin on FreeBSD.
  3. Report lock acquisition events for successful rw try-lock
     operations.
  4. Teach lockstat about FreeBSD sx locks. Reporting of events for
     those locks completely mirrors rw locks.
  5. Report spin and block events before the acquisition event. This is
     the behavior documented for the upstream, so it makes sense to
     stick to it. Note that because of FreeBSD adaptive lock
     implementations both the spin and block events may be reported for
     the same acquisition, while the upstream reports only one of them.
  Differential Revision: https://reviews.freebsd.org/D2727
  Reviewed by: markj
  MFC after: 17 days
  Relnotes: yes
  Sponsored by: ClusterHQ
  Notes: svn path=/head/; revision=284297
* Add _NEW flag to mtx(9), sx(9), rmlock(9) and rwlock(9). [Dmitry Chagin, 2014-12-13, 1 file, -1/+3]
  A _NEW flag is passed to _init_flags() to avoid the check for
  double-init.
  Differential Revision: https://reviews.freebsd.org/D1208
  Reviewed by: jhb, wblock
  MFC after: 1 month
  Notes: svn path=/head/; revision=275751
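  A usage sketch with a hypothetical structure: MTX_NEW tells mtx_init()
  that the backing memory is freshly allocated and may contain garbage,
  so the double-init sanity check must be skipped:

      /*
       * Sketch: initialize a mutex embedded in just-allocated memory.
       * Without MTX_NEW, stale bytes could trip the double-init check
       * on INVARIANTS kernels.
       */
      struct foo {                    /* illustrative structure */
              struct mtx      f_lock;
      };

      static struct foo *
      foo_alloc(void)
      {
              struct foo *fp;

              fp = malloc(sizeof(*fp), M_TEMP, M_WAITOK);
              mtx_init(&fp->f_lock, "foo lock", NULL, MTX_DEF | MTX_NEW);
              return (fp);
      }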
* Disable recursion for the process spinlock. [Konstantin Belousov, 2014-12-01, 1 file, -1/+1]
  Tested by: pho
  Discussed with: jhb
  Sponsored by: The FreeBSD Foundation
  MFC after: 1 month
  Notes: svn path=/head/; revision=275372
* The process spin lock currently has the following distinct uses: [Konstantin Belousov, 2014-11-26, 1 file, -0/+3]
  - Threads lifetime cycle, in particular, counting of the threads in
    the process, and interlocking with process mutex and thread lock.
    The main reason of this is that turnstile locks are after thread
    locks, so you e.g. cannot unlock blockable mutex (think process
    mutex) while owning thread lock.
  - Virtual and profiling itimers, since the timers activation is done
    from the clock interrupt context. Replace the p_slock by p_itimmtx
    and PROC_ITIMLOCK().
  - Profiling code (profil(2)), for similar reason. Replace the p_slock
    by p_profmtx and PROC_PROFLOCK().
  - Resource usage accounting. Need for the spinlock there is subtle, my
    understanding is that spinlock blocks context switching for the
    current thread, which prevents td_runtime and similar fields from
    changing (updates are done at the mi_switch()). Replace the p_slock
    by p_statmtx and PROC_STATLOCK().
  The split is done mostly for code clarity, and should not affect
  scalability.
  Tested by: pho
  Sponsored by: The FreeBSD Foundation
  MFC after: 1 week
  Notes: svn path=/head/; revision=275121
* opt_global.h is included automatically in the build. No need to explicitly include it in these places. [Warner Losh, 2014-11-18, 1 file, -1/+0]
  Sponsored by: Netflix
  Notes: svn path=/head/; revision=274668
* Add a new thread state "spinning" to schedgraph and add tracepoints at the start and stop of spinning waits in lock primitives. [John Baldwin, 2014-11-04, 1 file, -0/+11]
  Notes: svn path=/head/; revision=274092
* Unbreak the lock tracing release semantic for kernels compiled with only KDTRACE_HOOKS and no lock debugging options. [Attilio Rao, 2013-11-25, 1 file, -3/+0]
  - For a kernel compiled only with KDTRACE_HOOKS and without any lock
    debugging option, unbreak the lock tracing release semantic by
    embedding calls to LOCKSTAT_PROFILE_RELEASE_LOCK() directly in the
    inlined version of the releasing functions for mutex, rwlock and
    sxlock. Failing to do so skips the lockstat_probe_func invocation
    for unlocking.
  - As part of the LOCKSTAT support is inlined in mutex operations, for
    kernels compiled without lock debugging options, potentially every
    consumer must be compiled including opt_kdtrace.h. Fix this by
    moving KDTRACE_HOOKS into opt_global.h and removing the dependency
    on opt_kdtrace.h for all files, as now only KDTRACE_FRAMES is linked
    there and it is only used as a compile-time stub [0].

  [0] immediately shows some new bug, as DTRACE-derived support for
  debug in sfxge is broken and was never really tested. As it was not
  including opt_kdtrace.h correctly before, it was never enabled, so it
  was kept broken for a while. Fix this by using a protection stub,
  leaving sfxge driver authors the responsibility for fixing it
  appropriately [1].
  Sponsored by: EMC / Isilon storage division
  Discussed with: rstone [0]
  Reported by: rstone [1]
  Discussed with: philip
  Notes: svn path=/head/; revision=258541
* Fix lc_lock/lc_unlock() support for rmlocks held in shared mode. [Davide Italiano, 2013-09-20, 1 file, -8/+8]
  With the current lock classes KPI this was really difficult because
  there was no way to pass an rmtracker object to the lock/unlock
  routines. In order to accomplish the task, modify the aforementioned
  functions so that they can return (or pass as argument) a uintptr_t,
  which in the rm case is used to hold a pointer to struct
  rm_priotracker for the current thread. As an added bonus, this fixes
  rm_sleep() in the rm shared case, which can now communicate the
  priotracker structure between lc_unlock()/lc_lock().
  Suggested by: jhb
  Reviewed by: jhb
  Approved by: re (delphij)
  Notes: svn path=/head/; revision=255745
* Give mutex(9) the ability to recurse on a per-instance basis. [Attilio Rao, 2013-08-09, 1 file, -6/+14]
  Now the MTX_RECURSE flag can be passed to the mtx_*_flags() calls. This
  helps when we want to narrow the ability to recurse on some locks down
  to specific call sites.
  Sponsored by: EMC / Isilon storage division
  Reviewed by: jeff, alc
  Tested by: pho
  Notes: svn path=/head/; revision=254139
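  A usage sketch of the per-call-site form (the structure and functions
  are illustrative): the mutex is initialized without MTX_RECURSE, and
  only the one acquisition that may legitimately re-enter passes the
  flag, so accidental recursion elsewhere still asserts:

      /*
       * Sketch: recursion is permitted only where explicitly
       * requested, instead of for every acquisition of the mutex.
       */
      static struct mtx foo_mtx;

      static void
      foo_init(void)
      {
              mtx_init(&foo_mtx, "foo", NULL, MTX_DEF); /* no MTX_RECURSE */
      }

      static void
      foo_enter(void)
      {
              mtx_lock_flags(&foo_mtx, MTX_RECURSE); /* may recurse here */
              /* ... work that can call back into foo_enter() ... */
              mtx_unlock(&foo_mtx);
      }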