path: root/sys/kern/kern_mutex.c
* locks: run the extra NULL check only with INVARIANTS (Gleb Smirnoff, 2025-03-30; 1 file, -3/+1)
  This reverts commit 73da0265c29c79641dab3e6b98452bd5afca01fb.
  This reverts commit 87ee63bac69dc49291f55590b8baa57cad6c7d85.

  Discussed with: mjg
* mtx: Include the mutex pointer in the panic message for destroyed locks (John Baldwin, 2025-03-12; 1 file, -7/+9)
  Reviewed by: olce, kib, markj
  Sponsored by: AFRL, DARPA
  Differential Revision: https://reviews.freebsd.org/D49315
* mtx: Make idle thread assertions more robust (John Baldwin, 2025-03-12; 1 file, -4/+4)
  Just print the pointer to the mutex instead of the name in case the
  mutex is corrupted.

  Reviewed by: olce, kib
  Sponsored by: AFRL, DARPA
  Differential Revision: https://reviews.freebsd.org/D49314
* mtx: Avoid nested panics on lock class mismatch assertions (John Baldwin, 2025-03-12; 1 file, -10/+10)
  It is only (somewhat) safe to dereference lo_name if we know the mutex
  has a specific lock class that is incorrect, not if it just has "some"
  incorrect lock class.  In particular, in the case of memory overwritten
  with 0xdeadc0de, the lock class won't match either mutex type.  However,
  trying to dereference lo_name via a 0xdeadc0de pointer triggers a nested
  panic while building the panic string, which then prevents a crash dump.

  Reviewed by: olce, kib, markj
  Sponsored by: AFRL, DARPA
  Differential Revision: https://reviews.freebsd.org/D49313
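  A minimal sketch of the pattern described above, assuming FreeBSD's
  LOCK_CLASS() accessor and the stock mutex lock classes (the panic
  strings are illustrative, not the committed ones):

      struct lock_class *class = LOCK_CLASS(&m->lock_object);

      if (class == &lock_class_mtx_spin) {
              /* Known class, valid lo_name: safe to dereference. */
              panic("mutex %s %p is a spin mutex, not a sleep mutex",
                  m->lock_object.lo_name, m);
      } else if (class != &lock_class_mtx_sleep) {
              /* Class is garbage (e.g. 0xdeadc0de): lo_name is unsafe. */
              panic("lock %p is not a sleep mutex", m);
      }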
* locks: Use %p to print uintptr_t values (John Baldwin, 2024-11-14; 1 file, -1/+1)
  Pointers are not the same shape as sizes on CHERI architectures.  Cast
  to void * and print with %p instead.

  Obtained from: CheriBSD
  Sponsored by: AFRL, DARPA
  Differential Revision: https://reviews.freebsd.org/D47342
* locks: augment lock_class with lc_trylock method (Gleb Smirnoff, 2024-10-24; 1 file, -0/+18)
  Implement for mutex(9) and rwlock(9).

  Reviewed by: jtl
  Differential Revision: https://reviews.freebsd.org/D45745
* locks: add a runtime check for missing turnstile (Mateusz Guzik, 2024-07-11; 1 file, -1/+3)
  There are sometimes bugs which result in the unlock fast path failing,
  which in turn causes a not-helpful crash report when dereferencing a
  NULL turnstile.  Help debugging such cases by pointing out what
  happened, along with some debug information.

  Sponsored by: Rubicon Communications, LLC ("Netgate")
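  A hedged sketch of the kind of check added, using the existing
  turnstile(9) KPI (the panic text and surrounding code are illustrative,
  not the committed diff):

      struct turnstile *ts;

      ts = turnstile_lookup(&m->lock_object);
      if (__predict_false(ts == NULL)) {
              /* The fast path failed yet nobody is queued: say so. */
              panic("unlock of mutex %p: no turnstile, lock word %p",
                  m, (void *)m->mtx_lock);
      }
      turnstile_broadcast(ts, TS_EXCLUSIVE_QUEUE);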
* mutex: add static qualifier to implementations previously declared static (Gleb Smirnoff, 2024-06-20; 1 file, -7/+7)
* SCHEDULER_STOPPED(): Rely on a global variable (Olivier Certner, 2024-01-26; 1 file, -2/+2)
  A commit from 2012 (5d7380f8e34f0083, r228424) introduced 'td_stopsched'
  on the ground that a global variable would cause all CPUs to have a
  copy of it in their cache, and consequently of all other variables
  sharing the same cache line.

  This is really a problem only if that cache line sees relatively
  frequent modifications.  This was unlikely to be the case back then
  because nearby variables are almost never modified as well.  In any
  case, today we have a new tool at our disposal to ensure that this
  variable goes into a read-mostly section containing frequently-accessed
  variables ('__read_frequently').  Most of the cache lines covering this
  section are likely to always be in every CPU cache.  This makes the
  second reason stated in the commit message (ensuring the field is in
  the same cache line as some lock-related fields, since these are
  accessed in close proximity) moot, as well as the second-order effect
  of requiring an additional line to be present in the cache (the one
  containing the new 'scheduler_stopped' boolean, see below).

  From a purely logical point of view, whether the scheduler is stopped
  is a global state and is certainly not a per-thread quality.

  Consequently, remove 'td_stopsched', which immediately frees a byte in
  'struct thread'.  Currently, the latter's size (and layout) stays
  unchanged, but some of the later re-orderings will probably benefit
  from this removal.  Available bytes at the original position for
  'td_stopsched' have been made explicit with the addition of the
  '_td_pad0' member.

  Store the global state in the new 'scheduler_stopped' boolean, which
  is annotated with '__read_frequently'.

  Replace uses of SCHEDULER_STOPPED_TD() with SCHEDULER_STOPPED() and
  remove the former as it is now unnecessary.

  Reviewed by: markj, kib
  Approved by: markj (mentor)
  MFC after: 2 weeks
  Sponsored by: The FreeBSD Foundation
  Differential Revision: https://reviews.freebsd.org/D43572
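  In outline, the change trades a per-thread byte for one read-mostly
  global; a sketch of the resulting declarations (header and file
  placement are illustrative):

      /* The single global, grouped with other frequently read data. */
      bool __read_frequently scheduler_stopped;

      /* The macro now reads the global instead of a curthread field. */
      #define SCHEDULER_STOPPED()    __predict_false(scheduler_stopped)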
* thread: add td_wantedlock (Mateusz Guzik, 2023-10-22; 1 file, -0/+3)
  This enables obtaining lock information threads are actively waiting
  for while sampling.  Without the change one would only see a bunch of
  calls to lock_delay(), where the stacktrace often does not reveal what
  the lock might be.

  Note this is not the same as lock profiling, which only produces data
  for cases that wait for locks.

  struct thread already has a td_lockname field, but I did not use it
  because it has different semantics -- it denotes when the thread is
  off cpu.  At the same time it could not be converted to hold a
  lock_object pointer because non-curthread access would no longer be
  guaranteed to be safe -- by the time it reads the pointer the lock
  might have been taken, released and the object containing it freed.

  Sample usage with dtrace:

      rm /tmp/out.kern_stacks ; dtrace -x stackframes=100 -n 'profile-997 { @[curthread->td_wantedlock != NULL ? stringof(curthread->td_wantedlock->lo_name) : stringof("\n"), stack()] = count(); }' -o /tmp/out.kern_stacks

  This also facilitates addition of lock information to traces produced
  by hwpmc.

  Note: spinlocks are not supported at the moment.

  Sponsored by: Rubicon Communications, LLC ("Netgate")
* sys: Remove $FreeBSD$: one-line .c pattern (Warner Losh, 2023-08-16; 1 file, -2/+0)
  Remove /^[\s*]*__FBSDID\("\$FreeBSD\$"\);?\s*\n/
* callout(9): Allow spin locks use with callout_init_mtx(). (Alexander Motin, 2021-09-03; 1 file, -2/+6)
  Implement lock_spin()/unlock_spin() lock class methods, moving the
  assertion to _sleep() instead.

  Change assertions in callout(9) to allow spin locks for both regular
  and C_DIRECT_EXEC cases.  In the case of C_DIRECT_EXEC callouts, spin
  locks are in fact the only locks allowed.

  As the first use case, allow taskqueue_enqueue_timeout() use on fast
  task queues.  It actually becomes more efficient due to avoided extra
  context switches in callout(9) thanks to C_DIRECT_EXEC.

  MFC after: 2 weeks
  Reviewed by: hselasky
  Differential Revision: https://reviews.freebsd.org/D31778
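  A sketch of the usage this enables -- a C_DIRECT_EXEC callout protected
  by a spin mutex (the softc, lock name and timer function are
  hypothetical):

      static struct mtx sc_lock;
      static struct callout sc_callout;

      mtx_init(&sc_lock, "sc timer", NULL, MTX_SPIN);
      callout_init_mtx(&sc_callout, &sc_lock, 0); /* spin lock accepted */

      mtx_lock_spin(&sc_lock);
      /* Runs sc_timer_fn directly from the timer interrupt. */
      callout_reset_sbt(&sc_callout, SBT_1MS, 0, sc_timer_fn, sc,
          C_DIRECT_EXEC);
      mtx_unlock_spin(&sc_lock);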
* Fix lockstat:::thread-spin dtrace probe with LOCK_PROFILING (Eric van Gyzen, 2021-08-02; 1 file, -0/+2)
  The spinning start time is missing from the calculation due to a
  misplaced #endif.  Return the #endif to where it's supposed to be.

  Submitted by: Alexander Alexeev <aalexeev@isilon.com>
  Reviewed by: bdrewery, mjg
  MFC after: 1 week
  Sponsored by: Dell EMC Isilon
  Differential Revision: https://reviews.freebsd.org/D31384
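  The shape of the fix, sketched: the start-of-spin timestamp must be
  taken in KDTRACE_HOOKS builds regardless of LOCK_PROFILING, or the
  probe sees only the end time (variable and probe names follow
  kern_mutex.c conventions; this is an illustration, not the diff):

      int64_t spin_time = 0;
      #ifdef KDTRACE_HOOKS
              spin_time -= lockstat_nsecs(&m->lock_object);
      #endif
              /* ... spin until the lock is acquired ... */
      #ifdef KDTRACE_HOOKS
              spin_time += lockstat_nsecs(&m->lock_object);
              LOCKSTAT_RECORD1(thread__spin, m, spin_time);
      #endif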
* lockprof: pass lock type as an argument instead of reading the spin flag (Mateusz Guzik, 2021-05-23; 1 file, -9/+11)
* locks: push lock_delay_arg_init calls down (Mateusz Guzik, 2020-11-24; 1 file, -8/+8)
  Minor cleanup to skip doing them when recursing on locks, and so that
  they can act on the found lock value if need be.

  Notes: svn path=/head/; revision=367978
* mtx: add mtx_wait_unlocked (Mateusz Guzik, 2020-08-04; 1 file, -0/+29)
  Notes: svn path=/head/; revision=363871
* locks: fix a long standing bug for primitives with kdtrace but without spinning (Mateusz Guzik, 2020-07-23; 1 file, -1/+1)
  In such a case the second argument to lock_delay_arg_init was NULL,
  which was immediately causing a null pointer deref.

  Since the structure is only used for the spin count, provide a
  dedicated routine initializing it.

  Reported by: andrew
  Notes: svn path=/head/; revision=363451
* Mark more nodes as CTLFLAG_MPSAFE or CTLFLAG_NEEDGIANT (17 of many) (Pawel Biernacki, 2020-02-26; 1 file, -2/+4)
  r357614 added CTLFLAG_NEEDGIANT to make it easier to find nodes that
  are still not MPSAFE (or already are but aren't properly marked).
  Use it in preparation for a general review of all nodes.

  This is a non-functional change that adds annotations to SYSCTL_NODE
  and SYSCTL_PROC nodes using one of the soon-to-be-required flags.

  Mark all obvious cases as MPSAFE.  All entries that haven't been
  marked as MPSAFE before are by default marked as NEEDGIANT.

  Approved by: kib (mentor, blanket)
  Commented by: kib, gallatin, melifaro
  Differential Revision: https://reviews.freebsd.org/D23718
  Notes: svn path=/head/; revision=358333
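  The shape of the annotations, sketched against a hypothetical node
  (the flags are the real ones the commit spreads across the tree; the
  node and handler names are made up):

      /* Known lock-free handler: mark MPSAFE. */
      SYSCTL_NODE(_debug, OID_AUTO, mtx, CTLFLAG_RD | CTLFLAG_MPSAFE,
          NULL, "mutex debug settings");

      /* Not yet reviewed for MP safety: defaults to NEEDGIANT. */
      SYSCTL_PROC(_debug_mtx, OID_AUTO, dump,
          CTLTYPE_STRING | CTLFLAG_RD | CTLFLAG_NEEDGIANT, NULL, 0,
          sysctl_debug_mtx_dump, "A", "dump mutex state");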
* Add KERNEL_PANICKED macro for use in place of direct panicstr tests (Mateusz Guzik, 2020-01-12; 1 file, -2/+2)
  Notes: svn path=/head/; revision=356655
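  The macro centralizes the open-coded test; its definition is along
  these lines (the __predict_false hint matches the kernel's style --
  hedged, not quoted from the commit):

      #define KERNEL_PANICKED()    __predict_false(panicstr != NULL)

      /* before */  if (panicstr != NULL)
      /* after  */  if (KERNEL_PANICKED())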
* locks: add default delay struct (Mateusz Guzik, 2020-01-05; 1 file, -0/+8)
  Use it for all primitives.  This makes everything fit in 8 bytes.

  Notes: svn path=/head/; revision=356375
* locks: convert delay times to u_short (Mateusz Guzik, 2020-01-05; 1 file, -2/+2)
  int is just a waste of space for this purpose.

  Notes: svn path=/head/; revision=356374
* mtx: eliminate recursion support from thread lock (Mateusz Guzik, 2019-12-16; 1 file, -22/+10)
  Now that it is not used after the schedlock changes got merged.

  Note the unlock routine temporarily still checks for it, on account of
  just using the regular spin unlock.

  This is a prelude towards a general clean up.

  Notes: svn path=/head/; revision=355789
* schedlock 1/4 (Jeff Roberson, 2019-12-15; 1 file, -4/+17)
  Eliminate recursion from most thread_lock consumers.  Return from
  sched_add() without the thread_lock held.  This eliminates unnecessary
  atomics and lock word loads as well as reducing the hold time for
  scheduler locks.  This will eventually allow for lockless remote adds.

  Discussed with: kib
  Reviewed by: jhb
  Tested by: pho
  Differential Revision: https://reviews.freebsd.org/D22626
  Notes: svn path=/head/; revision=355779
* INVARIANTS: treat LA_LOCKED the same as LA_XLOCKED in mtx_assert. (Xin LI, 2019-08-23; 1 file, -0/+15)
  The Linux lockdep API assumes LA_LOCKED semantics in
  lockdep_assert_held(), meaning that either a shared lock or a write
  lock is Ok.  On the other hand, the timeout code uses lc_assert() with
  LA_XLOCKED, and we need both to work.

  For mutexes, because they can not be shared (this is unique among all
  lock classes, and it is unlikely that we would add a new lock class
  anytime soon), it is easier to simply extend mtx_assert to handle
  LA_LOCKED there, even though the change itself can be viewed as a
  slight abstraction violation.

  Reviewed by: mjg, cem, jhb
  MFC after: 1 month
  Differential Revision: https://reviews.freebsd.org/D21362
  Notes: svn path=/head/; revision=351417
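  The folding, sketched: a mutex can never be held shared, so a "held
  somehow" (LA_LOCKED) assertion is satisfiable only by exclusive
  ownership.  The flag names are real; the two lines below illustrate
  the semantics rather than quote the committed diff:

      if (what & LA_LOCKED)
              what = (what & ~LA_LOCKED) | LA_XLOCKED;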
* locks: plug warnings about unitialized variables (Mateusz Guzik, 2018-11-13; 1 file, -2/+2)
  They only showed up after I redefined LOCKSTAT_ENABLED to 0.

  doing_lockprof in mutex.c is a real (but harmless) bug.  Should the
  value be non-zero it will do checks for lock profiling which would
  otherwise be skipped.

  state in rwlock.c is a wart from the compiler; the value can't be used
  if lock profiling is not enabled.

  Sponsored by: The FreeBSD Foundation
  Notes: svn path=/head/; revision=340410
* Add a KPI for the delay while spinning on a spin lock. (John Baldwin, 2018-11-05; 1 file, -1/+1)
  Replace a call to DELAY(1) with a new cpu_lock_delay() KPI.  Currently
  cpu_lock_delay() is defined to DELAY(1) on all platforms.  However,
  platforms with a DELAY() implementation that uses spin locks should
  implement a custom cpu_lock_delay() that doesn't use locks.

  Reviewed by: kib
  MFC after: 3 days
  Notes: svn path=/head/; revision=340164
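  The default stated above, as it would appear in a machine header
  (placement illustrative); platforms whose DELAY() takes spin locks
  would override this with a lock-free implementation:

      #define cpu_lock_delay()    DELAY(1)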
* Remove an unused argument to turnstile_unpend. (Mateusz Guzik, 2018-06-02; 1 file, -1/+1)
  PR: 228694
  Submitted by: Julian Pszczołowski <julian.pszczolowski@gmail.com>
  Notes: svn path=/head/; revision=334546
* Drop KTR_CONTENTION. (Mark Johnston, 2018-03-20; 1 file, -21/+0)
  It is incomplete, has not been adopted in the other locking primitives,
  and we have other means of measuring lock contention (lock_profiling,
  lockstat, KTR_LOCK).  Drop it to slightly de-clutter the mutex code
  and free up a precious KTR class index.

  Reviewed by: jhb, mjg
  MFC after: 1 week
  Differential Revision: https://reviews.freebsd.org/D14771
  Notes: svn path=/head/; revision=331245
* locks: slightly depessimize lockstat (Mateusz Guzik, 2018-03-17; 1 file, -23/+39)
  The slow path is always taken when lockstat is enabled.  This induces
  rdtsc (or other) calls to get the cycle count even when there was no
  contention.

  Still go to the slow path to not mess with the fast path, but avoid
  the heavy lifting unless necessary.

  This reduces sys and real time during -j 80 buildkernel:

      before:   3651.84s user 1105.59s system 5394% cpu 1:28.18 total
      after:    3685.99s user  975.74s system 5450% cpu 1:25.53 total
      disabled: 3697.96s user  411.13s system 5261% cpu 1:18.10 total

  So note this is still a significant hit.

  LOCK_PROFILING results are not affected.

  Notes: svn path=/head/; revision=331109
* mtx: tidy up recursion handling in thread lock (Mateusz Guzik, 2018-03-04; 1 file, -7/+10)
  Normally after grabbing the lock it has to be verified we got the
  right one to begin with.  However, if we are recursing, it must not
  change, thus the check can be avoided.  In particular this avoids a
  lock read for the non-recursing case which found out the lock was
  changed.

  While here, avoid an irq trip if this happens.

  Tested by: pho (previous version)
  Notes: svn path=/head/; revision=330418
* mtx: add debug assertions to mtx_spin_wait_unlocked (Mateusz Guzik, 2018-02-20; 1 file, -0/+8)
  Notes: svn path=/head/; revision=329666
* mtx: add mtx_spin_wait_unlocked (Mateusz Guzik, 2018-02-19; 1 file, -0/+17)
  The primitive can be used to wait for the lock to be released.
  Intended usage is for locks in structures which are about to be freed.

  The benefit is the avoided interrupt enable/disable trip + atomic op
  to grab the lock, and a shorter wait if the lock is held (since there
  is no worry someone will contend on the lock, re-reads can be more
  aggressive).

  Briefly discussed with: kib
  Notes: svn path=/head/; revision=329540
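  Its semantics, sketched as a minimal loop -- the real routine layers
  in lock_delay-style backoff and, per the entry above, debug
  assertions:

      void
      mtx_spin_wait_unlocked(struct mtx *m)
      {
              /* Wait for release without ever acquiring the lock. */
              while (atomic_load_acq_ptr(&m->mtx_lock) != MTX_UNOWNED)
                      cpu_spinwait();
      }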
* mtx: use fcmpset to cover setting MTX_CONTESTED (Mateusz Guzik, 2018-01-12; 1 file, -4/+3)
  Notes: svn path=/head/; revision=327875
* mtx: deduplicate indefinite wait check in spinlocks and thread lock (Mateusz Guzik, 2017-12-31; 1 file, -35/+31)
  Notes: svn path=/head/; revision=327395
* mtx: pre-read the lock value in thread_lock_flags_ (Mateusz Guzik, 2017-12-31; 1 file, -7/+9)
  Since this function is effectively a slow path, if we get here the
  lock is most likely already taken, in which case it is cheaper to not
  blindly attempt the atomic op.

  While here, move the hwpmc probe out of the loop to match other
  primitives.

  Notes: svn path=/head/; revision=327394
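  The pre-read pattern, sketched (MTX_READ_VALUE and the lock_delay
  state follow kern_mutex.c conventions; this illustrates the idea
  rather than quoting the diff):

      uintptr_t tid = (uintptr_t)curthread;
      uintptr_t v;

      v = MTX_READ_VALUE(m);                  /* plain read, no bus lock */
      for (;;) {
              if (v == MTX_UNOWNED) {
                      /* Looks free: only now pay for the atomic. */
                      if (atomic_fcmpset_acq_ptr(&m->mtx_lock, &v, tid))
                              break;
                      continue;               /* v was refreshed on failure */
              }
              lock_delay(&lda);               /* owned: back off... */
              v = MTX_READ_VALUE(m);          /* ...and re-read */
      }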
* sys/kern: adoption of SPDX licensing ID tags. (Pedro F. Giffuni, 2017-11-27; 1 file, -0/+2)
  Mainly focus on files that use the BSD 2-Clause license; however, the
  tool I was using misidentified many licenses, so this was mostly a
  manual, error-prone task.

  The Software Package Data Exchange (SPDX) group provides a
  specification to make it easier for automated tools to detect and
  summarize well-known open source licenses.  We are gradually adopting
  the specification, noting that the tags are considered only advisory
  and do not, in any way, supersede or replace the license texts.

  Notes: svn path=/head/; revision=326271
* Add the missing lockstat check for thread lock. (Mateusz Guzik, 2017-11-25; 1 file, -0/+7)
  Notes: svn path=/head/; revision=326200
* locks: pass the found lock value to unlock slow path (Mateusz Guzik, 2017-11-22; 1 file, -6/+9)
  This avoids an explicit read later.

  While here, whack the cheaply obtainable 'tid' argument.

  Notes: svn path=/head/; revision=326107
* locks: remove the file + line argument from internal primitives when not used (Mateusz Guzik, 2017-11-22; 1 file, -4/+10)
  The pair is of use only in debug or LOCKPROF kernels, but was passed
  (zeroed) for many locks even in production kernels.

  While here, whack the tid argument from wlock hard and xlock hard.

  There is no KBI change of any sort -- "external" primitives still
  accept the pair.

  Notes: svn path=/head/; revision=326106
* locks: fix compilation issues without SMP or KDTRACE_HOOKS (Mateusz Guzik, 2017-11-17; 1 file, -2/+3)
  Notes: svn path=/head/; revision=325963
* mtx: add missing parts of the diff in r325920 (Mateusz Guzik, 2017-11-17; 1 file, -2/+2)
  Fixes build breakage.

  Notes: svn path=/head/; revision=325925
* mtx: unlock before traversing threads to wake up (Mateusz Guzik, 2017-11-17; 1 file, -4/+5)
  This shortens the lock hold time while not affecting correctness.
  All the woken up threads end up competing and can lose the race
  against a completely unrelated thread getting the lock anyway.

  Notes: svn path=/head/; revision=325920
* mtx: implement thread lock fastpath (Mateusz Guzik, 2017-10-21; 1 file, -11/+61)
  MFC after: 1 week
  Notes: svn path=/head/; revision=324836
* mtx: fix up UP build after r324778 (Mateusz Guzik, 2017-10-20; 1 file, -0/+6)
  Reported by: Michael Butler
  Notes: svn path=/head/; revision=324803
* mtx: stop testing SCHEDULER_STOPPED in kabi funcs for spin mutexes (Mateusz Guzik, 2017-10-20; 1 file, -6/+0)
  There is nothing panic-breaking to do in the unlock case, and the lock
  case will fall back to the slow path, which does the check already.

  MFC after: 1 week
  Notes: svn path=/head/; revision=324780
* mtx: clean up locking spin mutexes (Mateusz Guzik, 2017-10-20; 1 file, -7/+23)
  1) Shorten the fast path by pushing the lockstat probe to the slow
     path.
  2) Test for kernel panic only after it turns out we will have to spin;
     in particular, test only after we know we are not recursing.

  MFC after: 1 week
  Notes: svn path=/head/; revision=324778
* mtx: fix up owner_mtx after r324609 (Mateusz Guzik, 2017-10-14; 1 file, -1/+1)
  Now that MTX_UNOWNED is 0, the test was always false.

  Notes: svn path=/head/; revision=324613
* mtx: drop the tid argument from _mtx_lock_sleep (Mateusz Guzik, 2017-09-27; 1 file, -7/+10)
  tid must be equal to curthread, and the target routine was already
  reading it anyway, which is not a problem.  Not passing it as a
  parameter allows for slightly shorter code in callers.

  MFC after: 1 week
  Notes: svn path=/head/; revision=324041
* Annotate Giant with __exclusive_cache_line (Mateusz Guzik, 2017-09-08; 1 file, -1/+1)
  Notes: svn path=/head/; revision=323306
* Sprinkle __read_frequently on a few obvious places. (Mateusz Guzik, 2017-09-06; 1 file, -2/+2)
  Note that some of the annotated variables should probably change their
  types to something smaller, preferably bit-sized.

  Notes: svn path=/head/; revision=323236