| Commit message | Author | Age | Files | Lines |
This reverts commit 73da0265c29c79641dab3e6b98452bd5afca01fb.
This reverts commit 87ee63bac69dc49291f55590b8baa57cad6c7d85.
Discussed with: mjg
Reviewed by: olce, kib, markj
Sponsored by: AFRL, DARPA
Differential Revision: https://reviews.freebsd.org/D49315
Just print the pointer to the mutex instead of the name in case the
mutex is corrupted.
Reviewed by: olce, kib
Sponsored by: AFRL, DARPA
Differential Revision: https://reviews.freebsd.org/D49314
It is only (somewhat) safe to dereference lo_name if we know the mutex
has a specific lock class that is incorrect, not if it just has "some"
incorrect lock class. In particular, in the case of memory
overwritten with 0xdeadc0de, the lock class won't match either mutex
type. However, trying to dereference lo_name via a 0xdeadc0de pointer
triggers a nested panic while building the panicstr, which then prevents
a crash dump.
Reviewed by: olce, kib, markj
Sponsored by: AFRL, DARPA
Differential Revision: https://reviews.freebsd.org/D49313
Pointers are not the same shape as sizes on CHERI architectures. Cast
to void * and print with %p instead.
Obtained from: CheriBSD
Sponsored by: AFRL, DARPA
Differential Revision: https://reviews.freebsd.org/D47342
Implement for mutex(9) and rwlock(9).
Reviewed by: jtl
Differential Revision: https://reviews.freebsd.org/D45745
There are sometimes bugs which result in the unlock fast path failing,
which in turn causes an unhelpful crash report when dereferencing a
NULL turnstile. Help debugging such cases by pointing out what happened,
along with some debug information.
Sponsored by: Rubicon Communications, LLC ("Netgate")
A commit from 2012 (5d7380f8e34f0083, r228424) introduced
'td_stopsched', on the grounds that a global variable would cause all
CPUs to have a copy of it in their cache, and consequently of all other
variables sharing the same cache line.
This is really a problem only if that cache line sees relatively
frequent modifications. This was unlikely to be the case back then
because nearby variables are almost never modified as well. In any
case, today we have a new tool at our disposal to ensure that this
variable goes into a read-mostly section containing frequently-accessed
variables ('__read_frequently'). Most of the cache lines covering this
section are likely to always be in every CPU cache. This makes the
second reason stated in the commit message (ensuring the field is in the
same cache line as some lock-related fields, since these are accessed in
close proximity) moot, as well as the second order effect of requiring
an additional line to be present in the cache (the one containing the
new 'scheduler_stopped' boolean, see below).
From a purely logical point of view, whether the scheduler is stopped is
global state and certainly not a per-thread quality.
Consequently, remove 'td_stopsched', which immediately frees a byte in
'struct thread'. Currently, the latter's size (and layout) stays
unchanged, but some of the later re-orderings will probably benefit from
this removal. Available bytes at the original position for
'td_stopsched' have been made explicit with the addition of the
'_td_pad0' member.
Store the global state in the new 'scheduler_stopped' boolean, which is
annotated with '__read_frequently'.
Replace uses of SCHEDULER_STOPPED_TD() with SCHEDULER_STOPPED() and
remove the former as it is now unnecessary.
Reviewed by: markj, kib
Approved by: markj (mentor)
MFC after: 2 weeks
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D43572
This enables obtaining lock information threads are actively waiting for
while sampling. Without the change one would only see a bunch of calls
to lock_delay(), where the stacktrace often does not reveal what the
lock might be.
Note this is not the same as lock profiling, which only produces data
for cases that wait on locks.
struct thread already has a td_lockname field, but I did not use it
because it has different semantics -- it is only meaningful while the
thread is off CPU. At the same time it could not be converted to hold a
lock_object pointer because non-curthread access would no longer be
guaranteed to be safe -- by the time it reads the pointer the lock might
have been taken, released and the object containing it freed.
Sample usage with dtrace:
rm /tmp/out.kern_stacks ; dtrace -x stackframes=100 -n 'profile-997 { @[curthread->td_wantedlock != NULL ? stringof(curthread->td_wantedlock->lo_name) : stringof("\n"), stack()] = count(); }' -o /tmp/out.kern_stacks
This also facilitates addition of lock information to traces produced by
hwpmc.
Note: spinlocks are not supported at the moment.
Sponsored by: Rubicon Communications, LLC ("Netgate")
Remove /^[\s*]*__FBSDID\("\$FreeBSD\$"\);?\s*\n/
Implement lock_spin()/unlock_spin() lock class methods, moving the
assertion to _sleep() instead. Change assertions in callout(9) to
allow spin locks for both regular and C_DIRECT_EXEC cases. In the case
of C_DIRECT_EXEC callouts, spin locks are in fact the only locks allowed.
As the first use case allow taskqueue_enqueue_timeout() use on fast
task queues. It actually becomes more efficient due to avoided extra
context switches in callout(9) thanks to C_DIRECT_EXEC.
MFC after: 2 weeks
Reviewed by: hselasky
Differential Revision: https://reviews.freebsd.org/D31778
The spinning start time is missing from the calculation due to a
misplaced #endif. Move the #endif back to where it belongs.
Submitted by: Alexander Alexeev <aalexeev@isilon.com>
Reviewed by: bdrewery, mjg
MFC after: 1 week
Sponsored by: Dell EMC Isilon
Differential Revision: https://reviews.freebsd.org/D31384
Minor cleanup to skip doing them when recursing on locks, and so that
they can act on the found lock value if need be.
Notes:
svn path=/head/; revision=367978
Notes:
svn path=/head/; revision=363871
In such a case the second argument to lock_delay_arg_init was NULL, which
was immediately causing a NULL pointer dereference.
Since the structure is only used for the spin count, provide a dedicated
routine to initialize it.
Reported by: andrew
Notes:
svn path=/head/; revision=363451
r357614 added CTLFLAG_NEEDGIANT to make it easier to find nodes that are
still not MPSAFE (or already are but aren't properly marked).
Use it in preparation for a general review of all nodes.
This is a non-functional change that adds annotations to SYSCTL_NODE and
SYSCTL_PROC nodes using one of the soon-to-be-required flags.
Mark all obvious cases as MPSAFE. All entries that haven't been marked
as MPSAFE before are by default marked as NEEDGIANT.
Approved by: kib (mentor, blanket)
Commented by: kib, gallatin, melifaro
Differential Revision: https://reviews.freebsd.org/D23718
Notes:
svn path=/head/; revision=358333
Notes:
svn path=/head/; revision=356655
Use it for all primitives. This makes everything fit in 8 bytes.
Notes:
svn path=/head/; revision=356375
int is just a waste of space for this purpose.
Notes:
svn path=/head/; revision=356374
Now that it is not used after schedlock changes got merged.
Note the unlock routine temporarily still checks for it on account of just using
regular spin unlock.
This is a prelude towards a general clean up.
Notes:
svn path=/head/; revision=355789
Eliminate recursion from most thread_lock consumers. Return from
sched_add() without the thread_lock held. This eliminates unnecessary
atomics and lock word loads as well as reducing the hold time for
scheduler locks. This will eventually allow for lockless remote adds.
Discussed with: kib
Reviewed by: jhb
Tested by: pho
Differential Revision: https://reviews.freebsd.org/D22626
Notes:
svn path=/head/; revision=355779
The Linux lockdep API assumes LA_LOCKED semantic in lockdep_assert_held(),
meaning that either a shared lock or write lock is Ok. On the other hand,
the timeout code uses lc_assert() with LA_XLOCKED, and we need both to
work.
For mutexes, because they cannot be shared (this is unique among all lock
classes, and it is unlikely that we would add a new lock class anytime
soon), it is easier to simply extend mtx_assert to handle LA_LOCKED
there, even though the change itself can be viewed as a slight
abstraction violation.
Reviewed by: mjg, cem, jhb
MFC after: 1 month
Differential Revision: https://reviews.freebsd.org/D21362
Notes:
svn path=/head/; revision=351417
They only showed up after I redefined LOCKSTAT_ENABLED to 0.
doing_lockprof in mutex.c is a real (but harmless) bug. Should the
value be non-zero it will do checks for lock profiling which would
otherwise be skipped.
state in rwlock.c is a wart from the compiler, the value can't be
used if lock profiling is not enabled.
Sponsored by: The FreeBSD Foundation
Notes:
svn path=/head/; revision=340410
Replace a call to DELAY(1) with a new cpu_lock_delay() KPI. Currently
cpu_lock_delay() is defined to DELAY(1) on all platforms. However,
platforms with a DELAY() implementation that uses spin locks should
implement a custom cpu_lock_delay() that doesn't use locks.
Reviewed by: kib
MFC after: 3 days
Notes:
svn path=/head/; revision=340164
PR: 228694
Submitted by: Julian Pszczołowski <julian.pszczolowski@gmail.com>
Notes:
svn path=/head/; revision=334546
It is incomplete, has not been adopted in the other locking primitives,
and we have other means of measuring lock contention (lock_profiling,
lockstat, KTR_LOCK). Drop it to slightly de-clutter the mutex code and
free up a precious KTR class index.
Reviewed by: jhb, mjg
MFC after: 1 week
Differential Revision: https://reviews.freebsd.org/D14771
Notes:
svn path=/head/; revision=331245
The slow path is always taken when lockstat is enabled. This induces
rdtsc (or other) calls to get the cycle count even when there was no
contention.
Still go to the slow path to not mess with the fast path, but avoid
the heavy lifting unless necessary.
This reduces sys and real time during -j 80 buildkernel:
before: 3651.84s user 1105.59s system 5394% cpu 1:28.18 total
after: 3685.99s user 975.74s system 5450% cpu 1:25.53 total
disabled: 3697.96s user 411.13s system 5261% cpu 1:18.10 total
So note this is still a significant hit.
LOCK_PROFILING results are not affected.
Notes:
svn path=/head/; revision=331109
Normally after grabbing the lock it has to be verified we got the right
one to begin with. However, if we are recursing, it must not have
changed, so the check can be avoided. In particular this avoids a lock
read in the non-recursing case when it turns out the lock has changed.
While here, avoid an interrupt trip if this happens.
Tested by: pho (previous version)
Notes:
svn path=/head/; revision=330418
Notes:
svn path=/head/; revision=329666
The primitive can be used to wait for the lock to be released. Intended
usage is for locks in structures which are about to be freed.
The benefit is the avoided interrupt enable/disable trip + atomic op to
grab the lock and shorter wait if the lock is held (since there is no
worry someone will contend on the lock, re-reads can be more aggressive).
Briefly discussed with: kib
Notes:
svn path=/head/; revision=329540
Notes:
svn path=/head/; revision=327875
Notes:
svn path=/head/; revision=327395
Since this function is effectively a slow path, if we get here the lock
is most likely already taken, in which case it is cheaper not to blindly
attempt the atomic op.
While here move the hwpmc probe out of the loop to match other
primitives.
Notes:
svn path=/head/; revision=327394
Mainly focus on files that use the BSD 2-Clause license; however, the
tool I was using misidentified many licenses, so this was mostly a
manual (and error-prone) task.
The Software Package Data Exchange (SPDX) group provides a specification
to make it easier for automated tools to detect and summarize well-known
open-source licenses. We are gradually adopting the specification, noting
that the tags are considered only advisory and do not, in any way,
supersede or replace the license texts.
Notes:
svn path=/head/; revision=326271
Notes:
svn path=/head/; revision=326200
This avoids an explicit read later.
While here whack the cheaply obtainable 'tid' argument.
Notes:
svn path=/head/; revision=326107
The pair is of use only in debug or LOCKPROF kernels, but was passed (zeroed)
for many locks even in production kernels.
While here whack the tid argument from wlock hard and xlock hard.
There is no KBI change of any sort -- "external" primitives still accept
the pair.
Notes:
svn path=/head/; revision=326106
Notes:
svn path=/head/; revision=325963
Fixes build breakage.
Notes:
svn path=/head/; revision=325925
This shortens the lock hold time while not affecting correctness.
All the woken-up threads end up competing anyway, and they can lose the
race against a completely unrelated thread getting the lock.
Notes:
svn path=/head/; revision=325920
MFC after: 1 week
Notes:
svn path=/head/; revision=324836
Reported by: Michael Butler
Notes:
svn path=/head/; revision=324803
There is nothing panic-breaking to do in the unlock case, and the lock
case will fall back to the slow path, which does the check already.
MFC after: 1 week
Notes:
svn path=/head/; revision=324780
1) shorten the fast path by pushing the lockstat probe to the slow path
2) test for kernel panic only after it turns out we will have to spin,
in particular test only after we know we are not recursing
MFC after: 1 week
Notes:
svn path=/head/; revision=324778
Now that MTX_UNOWNED is 0 the test was always false.
Notes:
svn path=/head/; revision=324613
tid must be equal to curthread, and the target routine was already
reading it anyway, which is not a problem. Not passing it as a parameter
allows for slightly shorter code in callers.
MFC after: 1 week
Notes:
svn path=/head/; revision=324041
Notes:
svn path=/head/; revision=323306
Note that some of the annotated variables should probably change their
types to something smaller, preferably bit-sized.
Notes:
svn path=/head/; revision=323236