path: root/sys/kern/kern_synch.c
Commit message | Author | Age | Files | Lines
...
* Slight stylisations to take into account recent code changes. (Julian Elischer, 2002-07-24; 1 file, -7/+3)
  Notes: svn path=/head/; revision=100647
* Fix a reversed test. (Julian Elischer, 2002-07-17; 1 file, -7/+15)
  Fix some style nits. Fix a KASSERT message. Add/fix some comments.
  Submitted by: bde@freebsd.org
  Notes: svn path=/head/; revision=100262
* Add a KASSERT() to assert that td_critnest == 1 when mi_switch() is called. (John Baldwin, 2002-07-17; 1 file, -0/+2)
  Notes: svn path=/head/; revision=100210
* Allow alphas to do crashdumps: refuse to run anything in choosethread() after a panic that is not an interrupt thread or the thread that caused the panic. (Andrew Gallatin, 2002-07-17; 1 file, -4/+4)
  Also, remove the panicstr checks from msleep() and from cv_wait() in order to allow threads to go to sleep and yield the CPU to the panicking thread, or to an interrupt thread which might be doing the crashdump.
  Reviewed by: jhb (and it was mostly his idea too)
  Notes: svn path=/head/; revision=100209
* Thinking about it, I came to the conclusion that the KSE states were incorrectly formulated. (Julian Elischer, 2002-07-14; 1 file, -13/+4)
  The correct states should be:
    IDLE:   on the idle KSE list for that KSEG
    RUNQ:   linked onto the system run queue
    THREAD: attached to a thread and slaved to whatever state the thread is in
  This means that most places where we were adjusting KSE state can go away, since the state just moves around because the thread does. The only places we need to adjust the KSE state are on the transitions to and from the idle and run queues.
  Reviewed by: jhb@freebsd.org
  Notes: svn path=/head/; revision=99942
* Oops, state cannot be two different values at once. (Julian Elischer, 2002-07-14; 1 file, -1/+1)
  Use || instead of &&.
  Notes: svn path=/head/; revision=99937
* Re-enable the idle page-zeroing code. (Matthew Dillon, 2002-07-12; 1 file, -0/+7)
  Remove all IPIs from the idle page-zeroing code as well as from the general page-zeroing code, and use a lazy TLB page-invalidation scheme based on a callback made at the end of mi_switch. A number of people came up with this idea at the same time, so credit belongs to Peter, John, and Jake as well.
  Two-way SMP buildworld -j 5 tests (second run, after stabilization):
    2282.76 real  2515.17 user  704.22 sys   before peter's IPI commit
    2266.69 real  2467.50 user  633.77 sys   after peter's commit
    2232.80 real  2468.99 user  615.89 sys   after this commit
  Reviewed by: peter, jhb
  Approved by: peter
  Notes: svn path=/head/; revision=99890
* Make this respect ps_sigintr if there is a pre-existing signal or suspension request. (Julian Elischer, 2002-07-06; 1 file, -1/+0)
  Submitted by: David Xu
  Notes: svn path=/head/; revision=99488
* Fix at least one of the things wrong with signals. (Julian Elischer, 2002-07-06; 1 file, -6/+9)
  ^Z should work a lot better now.
  Submitted by: peter@freebsd.org
  Notes: svn path=/head/; revision=99480
* Try to clean up some of the mess that resulted from layers and layers of p4 merges from -current as things started diverging. (Julian Elischer, 2002-07-03; 1 file, -2/+1)
  Corroborated by: similar patches just mailed by BDE
  Notes: svn path=/head/; revision=99337
* When going back to the SLEEP state, make sure our state is correctly marked as such. (Julian Elischer, 2002-07-02; 1 file, -0/+1)
  Notes: svn path=/head/; revision=99248
* Part 1 of KSE-III. (Julian Elischer, 2002-06-29; 1 file, -80/+195)
  The ability to schedule multiple threads per process (on one CPU) by making ALL system calls optionally asynchronous.
  To come: ia64 and power-pc patches, patches for gdb, test program (in tools).
  Reviewed by: almost everyone who counts (at various times: peter, jhb, matt, alfred, mini, bernd, and a cast of thousands)
  NOTE: this is still beta code and contains lots of debugging stuff; expect slight instability in signals.
  Notes: svn path=/head/; revision=99072
* More caddr_t removal. (Alfred Perlstein, 2002-06-29; 1 file, -4/+4)
  Notes: svn path=/head/; revision=99012
* I noticed a defect in the way wakeup() scans the tailq; Tor noticed an even worse defect in wakeup_one(). This patch cleans up both. (Matthew Dillon, 2002-06-24; 1 file, -3/+8)
  Submitted by: tegge
  MFC after: 3 days
  Notes: svn path=/head/; revision=98714
* Catch up to the new ktrace API. (John Baldwin, 2002-06-07; 1 file, -7/+5)
  - ktrace trace points in msleep() and cv_wait() no longer need Giant.
  Notes: svn path=/head/; revision=97995
* CURSIG() is not a macro, so rename it to cursig(). (Julian Elischer, 2002-05-29; 1 file, -6/+6)
  Obtained from: KSE tree
  Notes: svn path=/head/; revision=97526
* Minor nit: get the p pointer in msleep() from td->td_proc (where td == curthread) rather than from curproc. (John Baldwin, 2002-05-23; 1 file, -1/+1)
  Notes: svn path=/head/; revision=97158
* Remove __P. (Alfred Perlstein, 2002-03-19; 1 file, -5/+5)
  Notes: svn path=/head/; revision=92723
* Fix a gcc-3.1+ warning. (Peter Wemm, 2002-03-19; 1 file, -0/+1)
  warning: deprecated use of label at end of compound statement
  That is, you can no longer do this:
        switch (foo) {
        ....
        default:
        }
  Notes: svn path=/head/; revision=92666
* Convert p->p_runtime and PCPU(switchtime) to bintime format. (Poul-Henning Kamp, 2002-02-22; 1 file, -17/+6)
  Notes: svn path=/head/; revision=91066
* In a threaded world, different priorities become properties of different entities. Make it so. (Julian Elischer, 2002-02-11; 1 file, -22/+26)
  Reviewed by: jhb@freebsd.org (John Baldwin)
  Notes: svn path=/head/; revision=90538
* Change the preemption code for software interrupt thread schedules and mutex releases to not require flags for the cases when preemption is not allowed. (John Baldwin, 2002-01-05; 1 file, -6/+6)
  The purpose of the MTX_NOSWITCH and SWI_NOSWITCH flags is to prevent switching to a higher-priority thread on mutex release and swi schedule, respectively, when that switch is not safe. Now that the critical section API maintains a per-thread nesting count, the kernel can easily check whether or not it should switch without relying on flags from the programmer. This fixes a few bugs in that all current callers of swi_sched() used SWI_NOSWITCH when, in fact, only the ones called from fast interrupt handlers and the swi_sched of softclock needed this flag.
  Note that to ensure that swi_sched()'s in clock and fast interrupt handlers do not switch, these handlers have to be explicitly wrapped in critical_enter/exit pairs. Presently, just wrapping the handlers is sufficient, but in the future with the fully preemptive kernel, the interrupt must be EOI'd before critical_exit() is called. (critical_exit() can switch due to a deferred preemption in a fully preemptive kernel.)
  I've tested the changes to the interrupt code on i386 and alpha. I have not tested ia64, but the interrupt code is almost identical to the alpha code, so I expect it will work fine. PowerPC and ARM do not yet have interrupt code in the tree, so they shouldn't be broken. Sparc64 is broken, but that's been ok'd by jake and tmm, who will be fixing the interrupt code for sparc64 shortly.
  Reviewed by: peter
  Tested on: i386, alpha
  Notes: svn path=/head/; revision=88900
* Modify the critical section API as follows: (John Baldwin, 2001-12-18; 1 file, -3/+0)
  - The MD functions critical_enter/exit are renamed to start with a cpu_ prefix.
  - MI wrapper functions critical_enter/exit maintain a per-thread nesting count and a per-thread critical section saved state, set when entering a critical section at nesting level 0 and restored when exiting to nesting level 0. This moves the saved state out of spin mutexes so that interlocking spin mutexes works properly.
  - Most low-level MD code that used critical_enter/exit now uses cpu_critical_enter/exit. MI code such as device drivers and spin mutexes uses the MI wrappers. Note that since the MI wrappers store the state in the current thread, they do not have any return values or arguments.
  - mtx_intr_enable() is replaced with a constant CRITICAL_FORK, which is assigned to curthread->td_savecrit during fork_exit().
  Tested on: i386, alpha
  Notes: svn path=/head/; revision=88088
* Add/correct descriptions for some sysctl variables where they were missing. (Luigi Rizzo, 2001-12-16; 1 file, -1/+2)
  The description field is unused in -stable, so the MFC there is equivalent to a comment. It can be done at any time; I am just setting a reminder in 45 days, when hopefully we are past 4.5-release.
  MFC after: 45 days
  Notes: svn path=/head/; revision=88019
* Assert that Giant is not held in mi_switch() unless the process state is SMTX or SRUN. (John Baldwin, 2001-10-23; 1 file, -0/+4)
  Notes: svn path=/head/; revision=85368
* Introduce some jitter to the timing of the samples that determine the system load average. (Ian Dowse, 2001-10-20; 1 file, -4/+15)
  Previously, the load average measurement was susceptible to synchronisation with processes that run at regular intervals, such as the system bufdaemon process. Each interval is now chosen at random within the range of 4 to 6 seconds. This large variation is chosen so that over the shorter 5-minute load average timescale there is a good dispersion of samples across the 5-second sample period (the time to perform 60 5-second samples now has a standard deviation of approx. 4.5 seconds).
  Notes: svn path=/head/; revision=85237
* Move the code that computes the system load average from vm_meter.c to kern_synch.c, in preparation for adding some jitter to the inter-sample time. (Ian Dowse, 2001-10-20; 1 file, -3/+49)
  Note that the "vm.loadavg" sysctl still lives in vm_meter.c, which isn't the right place, but it is appropriate for the current (bad) name of that sysctl.
  Suggested by: jhb (some time ago)
  Reviewed by: bde
  Notes: svn path=/head/; revision=85227
* GC some #if 0'd code. (John Baldwin, 2001-09-21; 1 file, -8/+2)
  Notes: svn path=/head/; revision=83787
* Whitespace and spelling fixes. (John Baldwin, 2001-09-21; 1 file, -2/+2)
  Notes: svn path=/head/; revision=83786
* KSE Milestone 2. (Julian Elischer, 2001-09-12; 1 file, -167/+220)
  Note: ALL MODULES MUST BE RECOMPILED.
  Make the kernel aware that there are smaller units of scheduling than the process (but only allow one thread per process at this time). This is functionally equivalent to the previous -current, except that there is a thread associated with each process.
  Sorry, John! (Your next MFC will be a doozy!)
  Reviewed by: peter@freebsd.org, dillon@freebsd.org
  X-MFC after: ha ha ha ha
  Notes: svn path=/head/; revision=83366
* Make yield() MPSAFE. (Matthew Dillon, 2001-09-01; 1 file, -1/+6)
  Synchronize syscalls.master with all MPSAFE changes to date. A new syscall generation follows, because yield() will panic if it is out of sync with syscalls.master.
  Notes: svn path=/head/; revision=82711
* Release the sched_lock before bombing out in mi_switch() via db_error(). (John Baldwin, 2001-08-21; 1 file, -1/+3)
  This makes things slightly easier if you call a function that calls mi_switch(), as it keeps the locking before and after closer.
  Notes: svn path=/head/; revision=82117
* Add a hook to mi_switch() to abort via db_error() if we attempt to perform a context switch from DDB. (John Baldwin, 2001-08-21; 1 file, -0/+12)
  Consulting from: bde
  Notes: svn path=/head/; revision=82096
* Fix a bug in the previous workaround for the tsleep/endtsleep race. (John Baldwin, 2001-08-21; 1 file, -2/+5)
  callout_stop() would fail in two cases:
    1) The timeout was currently executing, and
    2) The timeout had already executed.
  We only needed to work around the race for 1). We caught some instances of 2) via the PS_TIMEOUT flag; however, if endtsleep() fired after the process had been woken up but before it had resumed execution, PS_TIMEOUT would not be set, but callout_stop() would fail, so we would block the process until endtsleep() resumed it. Except that endtsleep() had already run and couldn't resume it. This adds a new flag, PS_TIMOFAIL, to indicate case 2) when PS_TIMEOUT isn't set.
  - Implement this race fix for condition variables as well.
  Tested by: sos
  Notes: svn path=/head/; revision=82085
* Close races with signals and other ASTs being triggered while we are in the process of exiting the kernel. (John Baldwin, 2001-08-10; 1 file, -2/+2)
  The ast() function now loops as long as the PS_ASTPENDING or PS_NEEDRESCHED flags are set. It returns with preemption disabled so that any further ASTs that arrive via an interrupt will be delayed until the low-level MD code returns to user mode.
  - Use u_int values to store the tick counts for profiling purposes so that we do not need sched_lock just to read p_sticks. This also closes a problem where the call to addupc_task() could screw up the arithmetic due to non-atomic reads of p_sticks.
  - Axe need_proftick(), aston(), astoff(), astpending(), need_resched(), clear_resched(), and resched_wanted() in favor of direct bit operations on p_sflag.
  - Fix up locking with sched_lock some. In addupc_intr(), use sched_lock to ensure pr_addr and pr_ticks are updated atomically with setting PS_OWEUPC. In ast(), we clear pr_ticks atomically with clearing PS_OWEUPC. We also do not grab the lock just to test a flag.
  - Simplify the handling of Giant in ast() slightly.
  Reviewed by: bde (mostly)
  Notes: svn path=/head/; revision=81493
* Work around a race between msleep() and endtsleep(). (John Baldwin, 2001-08-10; 1 file, -3/+23)
  It was possible for endtsleep() to be executing when msleep() resumed, and for endtsleep() to spin on sched_lock long enough for the other process to loop on msleep() and sleep again, resulting in endtsleep() waking up the "wrong" msleep.
  Obtained from: BSD/OS
  Notes: svn path=/head/; revision=81482
* Style nit: convert a couple of "if (p_wchan)" tests to "if (p_wchan != NULL)". (John Baldwin, 2001-08-10; 1 file, -3/+3)
  Notes: svn path=/head/; revision=81479
* Remove asleep(), await(), and M_ASLEEP. (John Baldwin, 2001-08-10; 1 file, -181/+1)
  - Callers of asleep() and await() have been converted to calling tsleep(). The only caller outside of M_ASLEEP was the ata driver, which called both asleep() and await() with spl raised, so there was no need for the asleep() and await() pair.
  - M_ASLEEP was unused.
  Reviewed by: jasone, peter
  Notes: svn path=/head/; revision=81397
* Use 'p' instead of the potentially more expensive 'curproc' inside of mi_switch(). (John Baldwin, 2001-08-02; 1 file, -5/+5)
  Notes: svn path=/head/; revision=81072
* Apply the cluebat to myself and undo the await() -> mawait() rename. (John Baldwin, 2001-07-31; 1 file, -31/+15)
  The asleep() and await() functions split the functionality of msleep() into two halves. Only the asleep() half (which is what puts the process on the sleep queue) actually needs the lock usually passed to msleep() held, to prevent lost wakeups. await() does not need the lock held, so the lock can be released prior to calling await() and does not need to be passed in to the await() function. Typical usage of these functions would be as follows:
        mtx_lock(&foo_mtx);
        ... do stuff ...
        asleep(&foo_cond, PRIxx, "foowt", hz);
        ...
        mtx_unlock(&foo_mtx);
        ...
        await(-1, -1);
  Inspired by: dillon on the couch at Usenix
  Notes: svn path=/head/; revision=80766
* Add a safety belt to mawait() for the (cold || panicstr) case, identical to the one in msleep(), such that we return immediately rather than blocking. (John Baldwin, 2001-07-31; 1 file, -0/+12)
  Submitted by: peter
  Prodded by: sheldonh
  Notes: svn path=/head/; revision=80761
* Back out mwakeup, etc. (Jake Burkholder, 2001-07-06; 1 file, -13/+4)
  Notes: svn path=/head/; revision=79343
* Implement mwakeup, mwakeup_one, cv_signal_drop, and cv_broadcast_drop. (Jake Burkholder, 2001-07-04; 1 file, -4/+13)
  These take an additional mutex argument, which is dropped before any processes are made runnable. This can avoid contention on the mutex if the processes would immediately acquire it, and is done in such a way that wakeups will not be lost.
  Reviewed by: jhb
  Notes: svn path=/head/; revision=79172
* Remove commented-out garbage that skipped updating schedcpu() stats for ithreads in SWAIT. (John Baldwin, 2001-07-03; 1 file, -2/+0)
  Notes: svn path=/head/; revision=79132
* Just check p_oncpu when determining whether a process is executing or not. (John Baldwin, 2001-07-03; 1 file, -4/+1)
  We already did this in the SMP case, and it is now maintained in the UP case as well; this makes the code slightly more readable. Note that curproc is always executing, thus the "p != curproc" test does not need to be performed if the p_oncpu check is made.
  Notes: svn path=/head/; revision=79131
* Axe spl's that are covered by the sched_lock (and have been for quite some time). (John Baldwin, 2001-07-03; 1 file, -30/+4)
  Notes: svn path=/head/; revision=79130
* Include the wait message and channel for msleep() in the KTR tracepoint. (John Baldwin, 2001-07-03; 1 file, -1/+2)
  Notes: svn path=/head/; revision=79128
* Remove a bogus need_resched() of the current CPU in roundrobin(). (John Baldwin, 2001-07-03; 1 file, -3/+6)
  We don't actually need to force a context switch of the current process. The act of firing the event triggers a context switch to softclock() and then switching back out again, which is equivalent to a preemption; thus no further work is needed on the local CPU.
  Notes: svn path=/head/; revision=79126
* Make the schedlock saved critical section state a per-thread property. (John Baldwin, 2001-06-30; 1 file, -0/+3)
  Notes: svn path=/head/; revision=79003
* Lock CURSIG() with the proc lock to close the signal race with psignal. (John Baldwin, 2001-06-22; 1 file, -99/+67)
  - Grab Giant around ktrace points.
  - Clean up KTR_PROC tracepoints to not display the value of sched_lock.mtx_lock, as it isn't really needed anymore and just obfuscates the messages.
  - Add a few if conditions to replace gotos.
  - Ensure that every msleep KTR event ends up with a matching msleep resume KTR event (this was broken when we didn't do a mi_switch()).
  - Only note via ktrace that we resumed from a switch once, rather than twice, in several places in msleep().
  - Remove spl's from asleep and await, as the proc lock and sched_lock provide all the needed locking.
  - In mawait(), add in a needed ktrace point for noting that we are about to switch out.
  Notes: svn path=/head/; revision=78638