path: root/sys/kern/kern_mutex.c
...
* Fix r253823. Some WIP patches snuck in.
  Author: Scott Long  Date: 2013-07-30  Changes: 1 file, -14/+6
  Submitted by: zont
  Notes: svn path=/head/; revision=253824
* Create a knob, kern.ipc.sfreadahead, that allows one to tune the amount
  of readahead that sendfile() will do. Default remains the same.
  Author: Scott Long  Date: 2013-07-30  Changes: 1 file, -6/+14
  Obtained from: Netflix
  MFC after: 3 days
  Notes: svn path=/head/; revision=253823
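  Such a knob can be read from userland with sysctlbyname(3). A minimal
  sketch, assuming the knob is an int and readable without privilege:

    #include <sys/types.h>
    #include <sys/sysctl.h>

    #include <stdio.h>

    int
    main(void)
    {
            int ra;
            size_t len = sizeof(ra);

            /* Read the current sendfile() readahead setting. */
            if (sysctlbyname("kern.ipc.sfreadahead", &ra, &len,
                NULL, 0) == -1) {
                    perror("sysctlbyname");
                    return (1);
            }
            printf("kern.ipc.sfreadahead = %d\n", ra);
            return (0);
    }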
* A few mostly cosmetic nits to aid in debugging:
  Author: John Baldwin  Date: 2013-06-25  Changes: 1 file, -3/+3
  - Call lock_init() first before setting any lock_object fields in lock
    init routines. This way, if the machine panics due to a duplicate
    init, the lock's original state is preserved.
  - Somewhat similarly, don't decrement td_locks and td_slocks until
    after an unlock operation has completed successfully.
  Notes: svn path=/head/; revision=252212
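  A minimal sketch of the ordering the first item describes, using the
  lock_init(9) signature but a hypothetical lock type:

    #include <sys/param.h>
    #include <sys/lock.h>

    /* Hypothetical lock embedding a lock_object. */
    struct my_lock {
            struct lock_object      lock_object;
            volatile uintptr_t      ml_lock;
    };

    void
    my_lock_init(struct my_lock *ml, const char *name)
    {
            /*
             * Register with lock_init() before touching any other
             * field: if it panics on a duplicate init, the lock's
             * prior state is still intact for the debugger.
             */
            lock_init(&ml->lock_object, &lock_class_mtx_sleep, name,
                NULL, 0);
            ml->ml_lock = 0;
    }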
* Fixup r240424: On entering KDB backends, the thread hijacked to run the
  interrupt context can still be the idlethread.
  Author: Attilio Rao  Date: 2012-12-22  Changes: 1 file, -2/+2
  At that point, without the panic condition, it can still happen that
  the idlethread will try to acquire some locks to carry on some
  operations. Skip the idlethread check on block/sleep lock operations
  when KDB is active.
  Reported by: jh
  Tested by: jh
  MFC after: 1 week
  Notes: svn path=/head/; revision=244582
* Give mtx(9) the ability to handle different types of structures, with
  the only constraint that they have a lock cookie named mtx_lock.
  Author: Attilio Rao  Date: 2012-10-31  Changes: 1 file, -17/+64
  This name then becomes reserved for structs that want to use the mtx(9)
  KPI, and other locking primitives cannot reuse it for their members.
  Namely, such structs are the current struct mtx and the new struct
  mtx_padalign. The new structure defines an object with the same layout
  as a struct mtx, but it will be allocated in areas aligned to the cache
  line size and will be as big as a cache line. This is supposed to give
  higher performance for highly contended mutexes, both spin and sleep
  (because of the adaptive spinning), where cache line contention results
  in too much traffic on the system bus.
  The struct mtx_padalign can be used in a completely transparent way
  with the mtx(9) KPI.
  At the moment, a possibility to MFC the patch should be carefully
  evaluated because this patch breaks the low level KPI (not its
  representation though).
  Discussed with: jhb
  Reviewed by: jeff, andre
  Reviewed by: mdf (earlier version)
  Tested by: jimharris
  Notes: svn path=/head/; revision=242395
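  Because the new type is transparent to the mtx(9) KPI, usage is
  identical to a plain struct mtx. A minimal sketch (the lock name and
  functions are illustrative):

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    /* Cache-line aligned and sized; otherwise an ordinary mutex. */
    static struct mtx_padalign stats_mtx;

    static void
    stats_init(void)
    {
            mtx_init(&stats_mtx, "stats", NULL, MTX_DEF);
    }

    static void
    stats_update(void)
    {
            mtx_lock(&stats_mtx);
            /* ... touch hot, contended state ... */
            mtx_unlock(&stats_mtx);
    }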
* Remove all the checks on curthread != NULL with the exception of some
  MD trap checks (e.g. printtrap()).
  Author: Attilio Rao  Date: 2012-09-13  Changes: 1 file, -5/+0
  Generally this check is not needed anymore, as there is no legitimate
  case where curthread == NULL once the pcpu 0 area has been properly
  initialized.
  Reviewed by: bde, jhb
  MFC after: 1 week
  Notes: svn path=/head/; revision=240475
* Improve check coverage for idle threads.
  Author: Attilio Rao  Date: 2012-09-12  Changes: 1 file, -0/+6
  Idle threads are not allowed to acquire any lock but spinlocks. Deny
  any attempt to do so by panicking at the locking operation when
  INVARIANTS is on. Then, remove the check on blocking on a turnstile.
  The check in sleepqueues is left because idle threads are not allowed
  to use tsleep() either, which could still happen.
  Reviewed by: bde, jhb, kib
  MFC after: 1 week
  Notes: svn path=/head/; revision=240424
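  In spirit, the added check is a KASSERT at the sleep-lock entry points;
  a hedged sketch (the message text is illustrative):

    /*
     * An idle thread taking a sleep mutex is a bug, so panic at the
     * acquisition site when INVARIANTS is on.
     */
    KASSERT(!TD_IS_IDLETHREAD(curthread),
        ("mtx_lock() by idle thread %p on sleep mutex %s @ %s:%d",
        curthread, m->lock_object.lo_name, file, line));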
* Add software PMC support.
  Author: Fabien Thomas  Date: 2012-03-28  Changes: 1 file, -0/+15
  New kernel events can be added at various locations for sampling or
  counting. This will, for example, allow easy system profiling with
  known tools like pmcstat(8), whatever the processor is. Simultaneous
  use of software PMC and hardware PMC is possible, for example looking
  at lock acquire failures and page faults while sampling on
  instructions.
  Sponsored by: NETASQ
  MFC after: 1 month
  Notes: svn path=/head/; revision=233628
* panic: add a switch and infrastructure for stopping other CPUs in the
  SMP case.
  Author: Andriy Gapon  Date: 2011-12-11  Changes: 1 file, -0/+25
  The historical behavior of letting other CPUs merrily go on remains the
  default for the time being. The new behavior can be switched on via the
  kern.stop_scheduler_on_panic tunable and sysctl.
  Stopping the CPUs has (at least) the following benefits:
  - more of the system state at panic time is preserved intact
  - threads and interrupts do not interfere with dumping of the system
    state
  Only one thread runs uninterrupted after panic if
  stop_scheduler_on_panic is set. That thread might call code that is
  also used in normal context, and that code might use locks to prevent
  concurrent execution of certain parts. Those locks might be held by the
  stopped threads and would never be released. To work around this issue,
  it was decided that instead of explicit checks for panic context, we
  would rather put those checks inside the locking primitives.
  This change has substantial portions written and re-written by attilio
  and kib at various times. Other changes are heavily based on the ideas
  and patches submitted by jhb and mdf. bde has provided many insights
  into the details and history of the current code.
  The new behavior may cause problems for systems that use a USB keyboard
  for interfacing with the system console. This is because of some
  unusual locking patterns in the ukbd code, which have to be used
  because on one hand ukbd is below syscons, but on the other hand it has
  to interface with other usb code that uses regular mutexes/Giant for
  its concurrency protection. Dumping to USB-connected disks may also be
  affected.
  PR: amd64/139614 (at least)
  In cooperation with: attilio, jhb, kib, mdf
  Discussed with: arch@, bde
  Tested by: Eugene Grosbein <eugen@grosbein.net>, gnn, Steven Hartland
    <killing@multiplay.co.uk>, glebius, Andrew Boyer
    <aboyer@averesystems.com> (various versions of the patch)
  MFC after: 3 months (or never)
  Notes: svn path=/head/; revision=228424
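  A hedged sketch of the "checks inside the locking primitives" idea; the
  wrapper function is hypothetical, while SCHEDULER_STOPPED() is the
  macro this work introduced:

    /*
     * After a panic with stop_scheduler_on_panic set, only one thread
     * survives; lock operations become no-ops so it cannot deadlock
     * on locks held by the stopped threads.
     */
    static void
    example_mtx_lock(struct mtx *m)
    {
            if (SCHEDULER_STOPPED())
                    return;
            mtx_lock(m);
    }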
* Introduce macro stubs in the mutex implementation that will always be
  defined and will allow consumers willing to provide options, file and
  line to locking requests to not worry about options redefining the
  interfaces.
  Author: Attilio Rao  Date: 2011-11-20  Changes: 1 file, -2/+2
  This is typically useful when there is the need to build another
  locking interface on top of the mutex one. The introduced functions
  that consumers can use are:
  - mtx_lock_flags_
  - mtx_unlock_flags_
  - mtx_lock_spin_flags_
  - mtx_unlock_spin_flags_
  - mtx_assert_
  - thread_lock_flags_
  Spare notes:
  - Likely we can get rid of all the 'INVARIANTS' specifications in the
    ppbus code by using the same macro as done in this patch (but this is
    left to the ppbus maintainer)
  - all the other locking interfaces may require a similar cleanup, where
    the most notable case is sx, which will allow a further cleanup of
    vm_map locking facilities
  - The patch should be fully compatible with older branches, thus an MFC
    is planned (in fact it uses all the underlying mechanisms already
    present).
  Comments review by: eadler, Ben Kaduk
  Discussed with: kib, jhb
  MFC after: 1 month
  Notes: svn path=/head/; revision=227758
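  A hedged sketch of the pattern this enables: a consumer interface
  layered on the underscore variants, which take options, file and line
  explicitly regardless of kernel options (my_lock/my_unlock are
  hypothetical names):

    /*
     * Passing __FILE__/__LINE__ through keeps WITNESS and lock
     * profiling pointing at the caller rather than at the wrapper.
     */
    #define my_lock(m)      mtx_lock_flags_((m), 0, __FILE__, __LINE__)
    #define my_unlock(m)    mtx_unlock_flags_((m), 0, __FILE__, __LINE__)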
* Constify arguments for locking KPIs where possible.
  Author: Pawel Jakub Dawidek  Date: 2011-11-16  Changes: 1 file, -11/+12
  This enables locking consumers to pass their own structures around as
  const and be able to assert locks embedded into those structures.
  Reviewed by: ed, kib, jhb
  Notes: svn path=/head/; revision=227588
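  A minimal sketch of what this allows; struct foo and its accessor are
  hypothetical:

    /* Consumer structure with an embedded mutex. */
    struct foo {
            struct mtx      foo_mtx;
            int             foo_val;
    };

    /*
     * With the assert KPIs const-qualified, an accessor can take a
     * const pointer and still assert that the embedded lock is held.
     */
    static int
    foo_get(const struct foo *f)
    {
            mtx_assert(&f->foo_mtx, MA_OWNED);
            return (f->foo_val);
    }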
* - Remove <machine/mutex.h>. Most of the headers were empty, and the
    contents of the ones that were not empty were stale and unused.
  Author: John Baldwin  Date: 2010-11-09  Changes: 1 file, -11/+11
  - Now that <machine/mutex.h> no longer exists, there is no need to
    allow it to override various helper macros in <sys/mutex.h>.
  - Rename various helper macros for low-level operations on mutexes to
    live in the _mtx_* or __mtx_* namespaces. While here, change the
    names to more closely match the real API functions they are backing.
  - Drop support for including <sys/mutex.h> in assembly source files.
  Suggested by: bde (1, 2)
  Notes: svn path=/head/; revision=215054
* Right now, WITNESS just blindly pipes all the output to the
  (TOCONS | TOLOG) mask, even when called from DDB points.
  Author: Attilio Rao  Date: 2010-05-11  Changes: 1 file, -1/+1
  That breaks several outputs, where the most notable is textdump output.
  Fix this by having configurable callbacks passed to
  witness_list_locks() and witness_display_spinlock() for printing out
  data.
  Reported by: several broken textdump outputs
  Tested by: Giovanni Trematerra <giovanni dot trematerra at gmail dot com>
  MFC after: 7 days
  X-MFC: r207922
  Notes: svn path=/head/; revision=207929
* - Fix a race in sched_switch() of sched_4bsd.
  Author: Attilio Rao  Date: 2010-01-23  Changes: 1 file, -2/+0
    In the case of the thread being on a sleepqueue or a turnstile, the
    sched_lock was acquired (without the aid of the td_lock interface)
    and the td_lock was dropped. This was going to break locking rules
    for other threads willing to access the thread (via the td_lock
    interface) and modify its flags (allowed as long as the container
    lock differs from the one used in sched_switch). In order to prevent
    this situation, while sched_lock is acquired there, the td_lock gets
    blocked. [0]
  - Merge ULE's internal function thread_block_switch() into the global
    thread_lock_block() and make the former's semantics the default for
    thread_lock_block(). This means that thread_lock_block() will not
    disable interrupts when called (and consequently
    thread_unlock_block() will not re-enable them when called). This
    should be done manually when necessary. Note, however, that ULE's
    thread_unblock_switch() is not reaped because it does reflect a
    difference in semantics in ULE (the td_lock may not necessarily
    still be blocked_lock when calling this). While asymmetric, it does
    describe a remarkable difference in semantics that is good to keep
    in mind. [0]
  Reported by: Kohji Okuno <okuno dot kohji at jp dot panasonic dot com>
  Tested by: Giovanni Trematerra <giovanni dot trematerra at gmail dot com>
  MFC: 2 weeks
  Notes: svn path=/head/; revision=202889
* Revert previous commit and add myself to the list of people who should
  know better than to commit with a cat in the area.
  Author: Poul-Henning Kamp  Date: 2009-09-08  Changes: 1 file, -6/+5
  Notes: svn path=/head/; revision=196970
* Add necessary include.
  Author: Poul-Henning Kamp  Date: 2009-09-08  Changes: 1 file, -5/+6
  Notes: svn path=/head/; revision=196969
* * Change the scope of ASSERT_ATOMIC_LOAD() from a generic check to a
    pointer-fetching specific operation check. Consequently, rename the
    operation ASSERT_ATOMIC_LOAD_PTR().
  Author: Attilio Rao  Date: 2009-08-17  Changes: 1 file, -2/+3
  * Fix the implementation of ASSERT_ATOMIC_LOAD_PTR() by checking
    alignment directly on the word boundary, for all the given specific
    architectures. That's a bit too strict for some common cases, but it
    assures safety.
  * Add a comment explaining the scope of the macro.
  * Add a new stub in the lockmgr specific implementation.
  Tested by: marcel (initial version), marius
  Reviewed by: rwatson, jhb (comment specific review)
  Approved by: re (kib)
  Notes: svn path=/head/; revision=196334
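  A hedged sketch of the word-boundary check described; the real macro
  may differ in detail:

    /*
     * The variable must be exactly pointer-sized and naturally
     * aligned, or a plain load of it cannot be assumed atomic.
     */
    #define ASSERT_ATOMIC_LOAD_PTR(var, msg)                        \
            KASSERT(sizeof(var) == sizeof(void *) &&                \
                ((uintptr_t)&(var) & (sizeof(void *) - 1)) == 0, msg)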
* Add a new macro to test that a variable could be loaded atomically.
  Author: Bjoern A. Zeeb  Date: 2009-08-14  Changes: 1 file, -0/+2
  Check that the given variable is at most uintptr_t in size and that it
  is aligned.
  Note: ASSERT_ATOMIC_LOAD() uses ALIGN() to check for adequate alignment
  -- however, the function of ALIGN() is to guarantee alignment, and
  therefore may lead to stronger alignment enforcement than necessary for
  types that are smaller than sizeof(uintptr_t).
  Add checks to the mtx, rw and sx lock init functions to detect possible
  breakage. This was used during debugging of the problem fixed with
  r196118, where a pointer was on an un-aligned address in the dpcpu
  area.
  In collaboration with: rwatson
  Reviewed by: rwatson
  Approved by: re (kib)
  Notes: svn path=/head/; revision=196226
* Remove extra cpu_spinwait() invocations.
  Author: John Baldwin  Date: 2009-05-29  Changes: 1 file, -3/+0
  This should really only be used in tight spin loops, not in these edge
  cases where we restart a much larger loop only a few times.
  Reviewed by: attilio
  Notes: svn path=/head/; revision=193037
* Tweak a few comments on adaptive spinning.
  Author: John Baldwin  Date: 2009-05-29  Changes: 1 file, -2/+5
  Notes: svn path=/head/; revision=193035
* Add the OpenSolaris dtrace lockstat provider.
  Author: Stacey Son  Date: 2009-05-26  Changes: 1 file, -9/+70
  The lockstat provider adds probes for mutexes, reader/writer and
  shared/exclusive locks to gather contention statistics and other
  locking information for dtrace scripts, the lockstat(1M) command and
  other potential consumers.
  Reviewed by: attilio jhb jb
  Approved by: gnn (mentor)
  Notes: svn path=/head/; revision=192853
* Remove an obsolete assertion.
  Author: John Baldwin  Date: 2009-05-20  Changes: 1 file, -2/+0
  We always wake up all waiters when unlocking a mutex and never set the
  lock cookie == MTX_CONTESTED.
  Notes: svn path=/head/; revision=192456
* - Wrap lock profiling state variables in #ifdef LOCK_PROFILING blocks.
  Author: Jeff Roberson  Date: 2009-03-15  Changes: 1 file, -7/+17
  Notes: svn path=/head/; revision=189846
* - When a mutex is destroyed while locked, we need to inform lock
    profiling that it has been released.
  Author: Jeff Roberson  Date: 2009-03-14  Changes: 1 file, -0/+1
  Notes: svn path=/head/; revision=189789
* Teach WITNESS about the interlocks used with lockmgr.
  Author: John Baldwin  Date: 2008-09-10  Changes: 1 file, -3/+3
  This removes a bunch of spurious witness warnings since lockmgr grew
  witness support. Before this, every time you passed an interlock to a
  lockmgr lock, WITNESS treated it as a LOR.
  Reviewed by: attilio
  Notes: svn path=/head/; revision=182914
* Various whitespace fixes.
  Author: John Baldwin  Date: 2008-09-10  Changes: 1 file, -9/+9
  Notes: svn path=/head/; revision=182909
* Add KASSERT()'s to catch attempts to recurse on spin mutexes that
  aren't marked recursable, either via mtx_lock_spin() or thread_lock().
  Author: John Baldwin  Date: 2008-02-13  Changes: 1 file, -1/+9
  MFC after: 1 week
  Notes: svn path=/head/; revision=176260
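  A hedged sketch of the kind of check added; the exact flag test and
  message text are illustrative:

    /*
     * Recursing on a spin mutex is only legal if it was created with
     * MTX_RECURSE; otherwise fail loudly at the acquisition site.
     */
    KASSERT((m->lock_object.lo_flags & LO_RECURSABLE) != 0,
        ("mtx_lock_spin: recursed on non-recursive mutex %s @ %s:%d",
        m->lock_object.lo_name, file, line));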
* Add a couple of assertions and KTR logging to thread_lock_flags() to
  match mtx_lock_spin_flags().
  Author: John Baldwin  Date: 2008-02-13  Changes: 1 file, -1/+7
  MFC after: 1 week
  Notes: svn path=/head/; revision=176257
* - Re-implement lock profiling in such a way that it no longer breaks
    the ABI when enabled.
  Author: Jeff Roberson  Date: 2007-12-15  Changes: 1 file, -20/+6
    There is no longer an embedded lock_profile_object in each lock.
    Instead, a list of lock_profile_objects is kept per-thread for each
    lock it may own. The cnt_hold statistic is now always 0 to facilitate
    this.
  - Support shared locking by tracking individual lock instances and
    statistics in the per-thread per-instance lock_profile_object.
  - Make the lock profiling hash table a per-cpu singly linked list with
    a per-cpu static lock_prof allocator. This removes the need for an
    array of spinlocks and reduces cache contention between cores.
  - Use a separate hash for spinlocks and other locks so that only a
    critical_enter() is required, and not a spinlock_enter(), to modify
    the per-cpu tables.
  - Count time spent spinning in the lock statistics.
  - Remove the LOCK_PROFILE_SHARED option as it is always supported now.
  - Specifically drop and release the scheduler locks in both schedulers
    since we track owners now.
  In collaboration with: Kip Macy
  Sponsored by: Nokia
  Notes: svn path=/head/; revision=174629
* Make ADAPTIVE_GIANT the default in the kernel and remove the option.
  Author: Attilio Rao  Date: 2007-11-28  Changes: 1 file, -8/+0
  Currently, Giant is not contended much, so it is OK to treat it like
  any other mutex.
  Please don't forget to update your own custom config kernel files.
  Approved by: cognet, marcel (maintainers of arches where the option is
    not enabled at the moment)
  Notes: svn path=/head/; revision=174005
* Simplify the adaptive spinning algorithm in rwlock and mutex.
  Author: Attilio Rao  Date: 2007-11-26  Changes: 1 file, -29/+41
  Currently, before spinning, the turnstile spinlock is acquired and the
  waiters flag is set. This is not strictly necessary, so just spin
  before acquiring the spinlock and setting the flags. This will simplify
  a lot of other functions too, as now we have the waiters flag set only
  if there are actually waiters. This should make the wakeup/sleep
  couplet faster under intensive mutex workloads.
  This also fixes a bug in rw_try_upgrade() in the adaptive case, where
  turnstile_lookup() will recurse on the ts_lock lock that will never be
  really released [1].
  [1] Reported by: jeff with Nokia help
  Tested by: pho, kris (earlier, bugged version of rwlock part)
  Discussed with: jhb [2], jeff
  MFC after: 1 week
  [2] John had a similar patch for 6.x and/or 7.x mutexes, probably.
  Notes: svn path=/head/; revision=173960
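  A hedged sketch of the reordered fast path, shown in isolation (the
  real code interleaves with the turnstile machinery):

    /*
     * Spin first, while the owner is running on another CPU; only
     * when spinning gives up is the turnstile spinlock taken and the
     * waiters flag set, so the flag now implies actual waiters.
     */
    owner = mtx_owner(m);
    while (mtx_owner(m) == owner && TD_IS_RUNNING(owner))
            cpu_spinwait();
    /* Fall back to the turnstile path and set the waiters flag here. */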
* Expand lock class with the "virtual" function lc_assert, which will
  offer a unified way for all the lock primitives to express lock
  assertions.
  Author: Attilio Rao  Date: 2007-11-18  Changes: 1 file, -0/+10
  Currently, lockmgrs and rmlocks don't have assertions, so just panic in
  that case. This will be a base for more callout improvements.
  Ok'ed by: jhb, jeff
  Notes: svn path=/head/; revision=173733
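  A hedged sketch of where the hook lives; the member order and exact
  signature are assumptions and may differ from the revision in question:

    /*
     * Each lock class carries an assertion hook, so generic code can
     * assert any lock through its class without knowing the concrete
     * lock type.
     */
    struct lock_class {
            const char      *lc_name;
            u_int           lc_flags;
            void            (*lc_assert)(struct lock_object *lock, int what);
            /* ... lc_ddb_show, lc_lock, lc_unlock, etc. elided ... */
    };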
* Generally we are interested in what thread did something, as opposed to
  what process.
  Author: Julian Elischer  Date: 2007-11-14  Changes: 1 file, -1/+1
  Since threads by default have the name of the process unless
  overwritten with more useful information, just print the thread name
  instead.
  Notes: svn path=/head/; revision=173600
* - Remove the global definition of sched_lock in mutex.h to break new
    code and third party modules which try to depend on it.
  Author: Jeff Roberson  Date: 2007-07-18  Changes: 1 file, -2/+0
  - Initialize sched_lock in sched_4bsd.c.
  - Declare sched_lock in sparc64 pmap.c and assert that we're compiling
    with SCHED_4BSD to prevent accidental crashes from running ULE. This
    is the sole remaining file outside of the scheduler that uses the
    global sched_lock.
  Approved by: re
  Notes: svn path=/head/; revision=171488
* - Add the proper lock profiling calls to _thread_lock().
  Author: Jeff Roberson  Date: 2007-07-18  Changes: 1 file, -2/+8
  Obtained from: kipmacy
  Approved by: re
  Notes: svn path=/head/; revision=171487
* Propagate volatile qualifier to make gcc4.2 happy.
  Author: Matt Jacob  Date: 2007-06-09  Changes: 1 file, -1/+1
  Notes: svn path=/head/; revision=170465
* Remove the MUTEX_WAKE_ALL option and make it the default behaviour for
  our mutexes.
  Author: Attilio Rao  Date: 2007-06-08  Changes: 1 file, -37/+0
  Currently we already force MUTEX_WAKE_ALL because of some problems with
  the !MUTEX_WAKE_ALL case (unavoidable priority inversion).
  Notes: svn path=/head/; revision=170441
* - Placing the 'volatile' on the right side of the * in the td_lock
    declaration removes the need for __DEVOLATILE().
  Author: Jeff Roberson  Date: 2007-06-06  Changes: 1 file, -3/+3
  Pointed out by: tegge
  Notes: svn path=/head/; revision=170358
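  For reference, the distinction (illustrative declarations, not the
  actual td_lock lines):

    volatile struct mtx     *p1;    /* pointer to a volatile struct mtx */
    struct mtx * volatile    p2;    /* volatile pointer to a struct mtx */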
* Fix a problem with non-preemptive kernels coming from a mis-merge of
  existing code with the new thread_lock patch.
  Author: Attilio Rao  Date: 2007-06-05  Changes: 1 file, -47/+0
  This also cleans up the unlock operation for mutexes a bit.
  Approved by: jhb, jeff (mentor)
  Notes: svn path=/head/; revision=170339
* Restore non-SMP build.
  Author: Konstantin Belousov  Date: 2007-06-05  Changes: 1 file, -1/+2
  Reviewed by: attilio
  Notes: svn path=/head/; revision=170327
* Commit 3/14 of sched_lock decomposition.
  Author: Jeff Roberson  Date: 2007-06-04  Changes: 1 file, -27/+122
  - Add a per-turnstile spinlock to solve potential priority propagation
    deadlocks that are possible with thread_lock().
  - The turnstile lock order is defined as the exact opposite of the lock
    order used with the sleep locks they represent. This allows us to
    walk in reverse order in priority_propagate, and this is the only
    place we wish to multiply acquire turnstile locks.
  - Use the turnstile_chain lock to protect assigning mutexes to
    turnstiles.
  - Change the turnstile interface to pass back turnstile pointers to the
    consumers. This allows us to reduce some locking and makes it easier
    to cancel turnstile assignment while the turnstile chain lock is
    held.
  Tested by: kris, current@
  Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
  Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts
    each)
  Notes: svn path=/head/; revision=170295
* Move lock_profile_object_{init,destroy}() into lock_{init,destroy}().
  Author: John Baldwin  Date: 2007-05-18  Changes: 1 file, -2/+0
  Notes: svn path=/head/; revision=169675
* Teach 'show lock' to properly handle a destroyed mutex.
  Author: John Baldwin  Date: 2007-05-08  Changes: 1 file, -1/+5
  Notes: svn path=/head/; revision=169393
* Move lock_profile calls out of the macros and into kern_mutex.c.
  Author: Kip Macy  Date: 2007-04-03  Changes: 1 file, -9/+17
  Add a check for mtx_recurse == 0 when releasing a sleep lock.
  Notes: svn path=/head/; revision=168329
* - Simplify the #ifdef's for adaptive mutexes and rwlocks by
    conditionally defining a macro earlier in the file.
  Author: John Baldwin  Date: 2007-03-22  Changes: 1 file, -4/+8
  - Add NO_ADAPTIVE_RWLOCKS option to disable adaptive spinning for
    rwlocks.
  Notes: svn path=/head/; revision=167801
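  The pattern described is roughly the following (a hedged sketch of the
  kern_mutex.c approach):

    /*
     * Define one internal macro up front so later code tests a single
     * condition instead of repeating the SMP/option dance.
     */
    #if defined(SMP) && !defined(NO_ADAPTIVE_MUTEXES)
    #define ADAPTIVE_MUTEXES
    #endif

    #ifdef ADAPTIVE_MUTEXES
    /* ... adaptive spinning code ... */
    #endif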
* Rename the 'mtx_object', 'rw_object', and 'sx_object' members of
  mutexes, rwlocks, and sx locks to 'lock_object'.
  Author: John Baldwin  Date: 2007-03-21  Changes: 1 file, -68/+68
  Notes: svn path=/head/; revision=167787
* Add two new function pointers, 'lc_lock' and 'lc_unlock', to lock
  classes.
  Author: John Baldwin  Date: 2007-03-09  Changes: 1 file, -0/+40
  These functions are intended to be used to drop a lock and then
  reacquire it when doing a sleep such as msleep(9). Both functions
  accept a 'struct lock_object *' as their first parameter. The
  'lc_unlock' function returns an integer that is then passed as the
  second parameter to the subsequent 'lc_lock' function. This can be used
  to communicate state. For example, sx locks and rwlocks use this to
  indicate if the lock was share/read locked vs exclusive/write locked.
  Currently, spin mutexes and lockmgr locks do not provide working
  lc_lock and lc_unlock functions.
  Notes: svn path=/head/; revision=167368
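  A hedged sketch of the drop/reacquire pattern this enables inside a
  sleep primitive (variable names are illustrative):

    /*
     * Drop the caller's interlock via its class, block, then
     * reacquire it, passing back the opaque state word lc_unlock
     * returned.
     */
    struct lock_class *class = LOCK_CLASS(lock);
    int how;

    how = class->lc_unlock(lock);
    /* ... block the thread, e.g. on a sleepqueue ... */
    class->lc_lock(lock, how);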
* Use C99-style struct member initialization for lock classes.
  Author: John Baldwin  Date: 2007-03-09  Changes: 1 file, -6/+6
  Notes: svn path=/head/; revision=167365
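  For reference, a minimal sketch of the style in question, using the
  sleep-mutex class (most members elided):

    /* C99 designated initializers name each member explicitly. */
    struct lock_class lock_class_mtx_sleep = {
            .lc_name = "sleep mutex",
            .lc_flags = LC_SLEEPLOCK | LC_RECURSABLE,
            /* ... function pointers elided ... */
    };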
* Lock stats updates need to be protected by the lock.
  Author: Kip Macy  Date: 2007-03-02  Changes: 1 file, -20/+5
  Notes: svn path=/head/; revision=167163
* Evidently I've overestimated gcc's ability to peek inside inline
  functions and optimize away unused stack values.
  Author: Kip Macy  Date: 2007-03-01  Changes: 1 file, -4/+8
  The 48 bytes that the lock_profile_object adds to the stack evidently
  have a measurable performance impact on certain workloads.
  Notes: svn path=/head/; revision=167136