path: root/sys/kern/subr_trap.c
Commit message | Author | Age | Files | Lines
...
* Minor style nits in a comment. — John Baldwin (2002-10-01, 1 file, -1/+1)
  Notes: svn path=/head/; revision=104303
* Various style fixups. — John Baldwin (2002-10-01, 1 file, -6/+10)
  Submitted by: bde (mostly)
  Notes: svn path=/head/; revision=104297
* Actually clear PS_XCPU in ast() when we handle it. — John Baldwin (2002-10-01, 1 file, -1/+1)
  Submitted by: bde
  Pointy hat to: jhb
  Notes: svn path=/head/; revision=104296
* - Add a new per-process flag PS_XCPU to indicate that at least one thread has exceeded its CPU time limit. — John Baldwin (2002-09-30, 1 file, -0/+14)
  - In mi_switch(), set PS_XCPU when the CPU time limit is exceeded.
  - Perform actual CPU time limit exceeded work in ast() when PS_XCPU is set.
  Requested by: many
  Notes: svn path=/head/; revision=104240
* First half of implementation of ksiginfo, signal queues, and such. — Juli Mallett (2002-09-30, 1 file, -1/+3)
  This gets signals operating based on a TailQ, and is good enough to run X11, GNOME, and do job control. There are some intricate parts which could be more refined to match the sigset_t versions, but those require further evaluation of directions in which our signal system can expand and contract to fit our needs.
  After this has been in the tree for a while, I will make in-kernel API changes, most notably to trapsignal(9) and sendsig(9), to use ksiginfo more robustly, such that we can actually pass information with our (queued) signals to userland. That will also result in using a struct ksiginfo pointer, rather than a signal number, in a lot of kern_sig.c to refer to an individual pending signal queue member, but right now there is no defined behaviour for such. CODAFS is unfinished in this regard because the logic is unclear in some places.
  Sponsored by: New Gold Technology
  Reviewed by: bde, tjr, jake [an older version, logic similar]
  Notes: svn path=/head/; revision=104233
* slightly clean up the thread_userret() and thread_consider_upcall() calls. — Julian Elischer (2002-09-23, 1 file, -3/+4)
  Also some slight changes for TDF_BOUND testing and small style changes. Should ONLY affect KSE programs.
  Submitted by: davidxu
  Notes: svn path=/head/; revision=103838
* Spell proprly properly: — Robert Watson (2002-08-22, 1 file, -1/+1)
  failed to set signal flags proprly for ast()
  failed to set signal flags proprly for ast()
  failed to set signal flags proprly for ast()
  failed to set signal flags proprly for ast()
  Notes: svn path=/head/; revision=102266
* Revert removal of cred_free_thread(): it is used to ensure that a thread's credentials are not improperly borrowed when the thread is not current in the kernel. — Jonathan Mini (2002-07-11, 1 file, -0/+3)
  Requested by: jhb, alfred
  Notes: svn path=/head/; revision=99753
* Don't slow every syscall and trap by doing locks and stuff if the 'stop' bits are not set. — Julian Elischer (2002-07-10, 1 file, -3/+7)
  This is a temporary thing; I think this code probably needs to be rewritten anyhow.
  Notes: svn path=/head/; revision=99714
* Part 1 of KSE-III — Julian Elischer (2002-06-29, 1 file, -5/+32)
  The ability to schedule multiple threads per process (on one cpu) by making ALL system calls optionally asynchronous.
  To come: ia64 and power-pc patches, patches for gdb, test program (in tools).
  Reviewed by: Almost everyone who counts (at various times: peter, jhb, matt, alfred, mini, bernd, and a cast of thousands)
  NOTE: this is still Beta code, and contains lots of debugging stuff. Expect slight instability in signals.
  Notes: svn path=/head/; revision=99072
* Remove unused diagnostic function cred_free_thread(). — Jonathan Mini (2002-06-24, 1 file, -3/+0)
  Approved by: alfred
  Notes: svn path=/head/; revision=98727
* We no longer need to acquire Giant in ast() for ktrpsig() in postsig() now that ktrace no longer needs Giant. — John Baldwin (2002-06-07, 1 file, -2/+0)
  Notes: svn path=/head/; revision=98000
* CURSIG() is not a macro, so rename it cursig(). — Julian Elischer (2002-05-29, 1 file, -1/+1)
  Obtained from: KSE tree
  Notes: svn path=/head/; revision=97526
* Moved signal handling and rescheduling from userret() to ast() so that they aren't in the usual path of execution for syscalls and traps. — Bruce Evans (2002-04-04, 1 file, -22/+46)
  The main complication for this is that we have to set flags to control ast() everywhere that changes the signal mask. Avoid locking in userret() in most of the remaining cases.
  Submitted by: luoqi (first part only, long ago, reorganized by me)
  Reminded by: dillon
  Notes: svn path=/head/; revision=93793
* Style fixes purposefully left out of last commit. — Jake Burkholder (2002-03-29, 1 file, -43/+43)
  I checked the kse tree and didn't see any changes that this conflicts with.
  Notes: svn path=/head/; revision=93390
* Remove abuse of intr_disable/restore in MI code by moving the loop in ast() back into the calling MD code. — Jake Burkholder (2002-03-29, 1 file, -13/+1)
  The MD code must ensure no races between checking the astpending flag and returning to usermode.
  Submitted by: peter (ia64 bits)
  Tested on: alpha (peter, jeff), i386, ia64 (peter), sparc64
  Notes: svn path=/head/; revision=93389
* Remove last two abuses of cpu_critical_{enter,exit} in the MI code. — Warner Losh (2002-03-21, 1 file, -5/+5)
  Reviewed by: jake, jhb, rwatson
  Notes: svn path=/head/; revision=92858
* Change the way we ensure td_ucred is NULL if DIAGNOSTIC is defined. — John Baldwin (2002-03-20, 1 file, -30/+3)
  Instead of caching the ucred reference, just go ahead and eat the decrement and increment of the refcount. Now that Giant is pushed down into crfree(), we no longer have to get Giant in the common case. In the case when we are actually free'ing the ucred, we would normally free it on the next kernel entry, so the cost there is not new, just in a different place. This also removes td_cache_ucred from struct thread. This is still only done #ifdef DIAGNOSTIC.
  [ missed this file in the previous commit ]
  Tested on: i386, alpha
  Notes: svn path=/head/; revision=92825
* Make this compile. — Jake Burkholder (2002-02-23, 1 file, -1/+1)
  Pointy hat to: julian
  Notes: svn path=/head/; revision=91103
* Add some DIAGNOSTIC code. — Julian Elischer (2002-02-22, 1 file, -6/+30)
  While in userland, keep the thread's ucred reference in a shadow field so that the usual place to store it is NULL. If DIAGNOSTIC is not set, the thread ucred is kept valid until the next kernel entry, at which time it is checked against the process cred and possibly corrected. Produces a BIG speedup in kernels with INVARIANTS set. (A previous commit corrected it for the non-INVARIANTS case already.)
  Reviewed by: dillon@freebsd.org
  Notes: svn path=/head/; revision=91090
* If the credential on an incoming thread is correct, don't bother reacquiring it. — Julian Elischer (2002-02-17, 1 file, -3/+4)
  In the same vein, don't bother dropping the thread cred when going to userland. We are guaranteed to need it when we come back (which we are guaranteed to do).
  Reviewed by: jhb@freebsd.org, bde@freebsd.org (slightly different version)
  Notes: svn path=/head/; revision=90748
* In a threaded world, different priorities become properties of different entities. Make it so. — Julian Elischer (2002-02-11, 1 file, -1/+1)
  Reviewed by: jhb@freebsd.org (john baldwin)
  Notes: svn path=/head/; revision=90538
* Changed the type of pcb_flags from u_char to u_int and adjusted things. — Bruce Evans (2002-01-17, 1 file, -1/+1)
  This removes the only atomic operation on a char type in the entire kernel.
  Notes: svn path=/head/; revision=89466
* Change the preemption code for software interrupt thread schedules and mutex releases to not require flags for the cases when preemption is not allowed. — John Baldwin (2002-01-05, 1 file, -1/+1)
  The purpose of the MTX_NOSWITCH and SWI_NOSWITCH flags is to prevent switching to a higher priority thread on mutex release and swi schedule, respectively, when that switch is not safe. Now that the critical section API maintains a per-thread nesting count, the kernel can easily check whether or not it should switch without relying on flags from the programmer. This fixes a few bugs in that all current callers of swi_sched() used SWI_NOSWITCH, when in fact only the ones called from fast interrupt handlers and the swi_sched of softclock needed this flag.
  Note that to ensure that swi_sched()'s in clock and fast interrupt handlers do not switch, these handlers have to be explicitly wrapped in critical_enter/exit pairs. Presently, just wrapping the handlers is sufficient, but in the future with the fully preemptive kernel, the interrupt must be EOI'd before critical_exit() is called. (critical_exit() can switch due to a deferred preemption in a fully preemptive kernel.)
  I've tested the changes to the interrupt code on i386 and alpha. I have not tested ia64, but the interrupt code is almost identical to the alpha code, so I expect it will work fine. PowerPC and ARM do not yet have interrupt code in the tree so they shouldn't be broken. Sparc64 is broken, but that's been ok'd by jake and tmm who will be fixing the interrupt code for sparc64 shortly.
  Reviewed by: peter
  Tested on: i386, alpha
  Notes: svn path=/head/; revision=88900
* Axe a stale comment. — John Baldwin (2002-01-04, 1 file, -8/+0)
  Holding sched_lock across both setrunqueue() and mi_switch() is sufficient.
  Notes: svn path=/head/; revision=88875
* - Change all callers of addupc_task() to check PS_PROFIL explicitly and remove the check from addupc_task(). It would need sched_lock while testing the flag anyways. — John Baldwin (2001-12-18, 1 file, -7/+12)
  - Always read sticks while holding sched_lock, using a temporary variable where needed.
  - Always init prticks to 0 in ast() to quiet a warning.
  Notes: svn path=/head/; revision=88119
* Modify the critical section API as follows: — John Baldwin (2001-12-18, 1 file, -4/+4)
  - The MD functions critical_enter/exit are renamed to start with a cpu_ prefix.
  - MI wrapper functions critical_enter/exit maintain a per-thread nesting count and a per-thread critical section saved state set when entering a critical section while at nesting level 0 and restored when exiting to nesting level 0. This moves the saved state out of spin mutexes so that interlocking spin mutexes works properly.
  - Most low-level MD code that used critical_enter/exit now uses cpu_critical_enter/exit. MI code such as device drivers and spin mutexes use the MI wrappers. Note that since the MI wrappers store the state in the current thread, they do not have any return values or arguments.
  - mtx_intr_enable() is replaced with a constant CRITICAL_FORK which is assigned to curthread->td_savecrit during fork_exit().
  Tested on: i386, alpha
  Notes: svn path=/head/; revision=88088
* Add a per-thread ucred reference for syscalls and synchronous traps from userland. — John Baldwin (2001-10-26, 1 file, -3/+11)
  The per-thread ucred reference is immutable and thus needs no locks to be read. However, until all the proc locking associated with writes to p_ucred is completed, it is still not safe to use the per-thread reference.
  Tested on: x86 (SMP), alpha, sparc64
  Notes: svn path=/head/; revision=85525
* Remove a bogus comment. — John Baldwin (2001-09-21, 1 file, -1/+0)
  "atomic" doesn't mean that the operation is done as a physical atomic operation. That would require the code to use the atomic API, which it does not. Instead, the operation is made pseudo-atomic (hence the quotes) by use of the lock to protect clearing all of the flags in question.
  Notes: svn path=/head/; revision=83788
* KSE Milestone 2 — Julian Elischer (2001-09-12, 1 file, -18/+27)
  Note: ALL MODULES MUST BE RECOMPILED. Make the kernel aware that there are smaller units of scheduling than the process (but only allow one thread per process at this time). This is functionally equivalent to the previous -current except that there is a thread associated with each process.
  Sorry john! (your next MFC will be a doozy!)
  Reviewed by: peter@freebsd.org, dillon@freebsd.org
  X-MFC after: ha ha ha ha
  Notes: svn path=/head/; revision=83366
* Remove the MPSAFE keyword from the parser for syscalls.master. — Matthew Dillon (2001-08-30, 1 file, -4/+3)
  Instead, introduce the [M] prefix to existing keywords. E.g., MSTD is the MP SAFE version of STD. This is preparatory for a massive Giant lock pushdown. The old MPSAFE keyword made syscalls.master too messy.
  Begin comments on MP-safe procedures with the comment:
    /*
     * MPSAFE
     */
  This comment means that the procedure may be called without Giant held (the procedure itself may still need to obtain Giant temporarily to do its thing).
  sv_prepsyscall() is now MP SAFE and assumed to be MP SAFE
  sv_transtrap() is now MP SAFE and assumed to be MP SAFE
  ktrsyscall() and ktrsysret() are now MP SAFE (Giant Pushdown)
  trapsignal() is now MP SAFE (Giant Pushdown)
  Places which used to do the if (mtx_owned(&Giant)) mtx_unlock(&Giant) test in syscall[2]() in */*/trap.c now do not. Instead they explicitly unlock Giant if they previously obtained it, and then assert that it is no longer held to catch broken system calls.
  Rebuild syscall tables.
  Notes: svn path=/head/; revision=82585
* - Close races with signals and other AST's being triggered while we are in the process of exiting the kernel. — John Baldwin (2001-08-10, 1 file, -65/+69)
  The ast() function now loops as long as the PS_ASTPENDING or PS_NEEDRESCHED flags are set. It returns with preemption disabled so that any further AST's that arrive via an interrupt will be delayed until the low-level MD code returns to user mode.
  - Use u_int's to store the tick counts for profiling purposes so that we do not need sched_lock just to read p_sticks. This also closes a problem where the call to addupc_task() could screw up the arithmetic due to non-atomic reads of p_sticks.
  - Axe need_proftick(), aston(), astoff(), astpending(), need_resched(), clear_resched(), and resched_wanted() in favor of direct bit operations on p_sflag.
  - Fix up locking with sched_lock some. In addupc_intr(), use sched_lock to ensure pr_addr and pr_ticks are updated atomically with setting PS_OWEUPC. In ast() we clear pr_ticks atomically with clearing PS_OWEUPC. We also do not grab the lock just to test a flag.
  - Simplify the handling of Giant in ast() slightly.
  Reviewed by: bde (mostly)
  Notes: svn path=/head/; revision=81493
* postsig() currently requires Giant to be held. — Matthew Dillon (2001-07-04, 1 file, -0/+2)
  Giant is held properly at the first postsig() call, but not always held at the second place, resulting in an occasional panic.
  Notes: svn path=/head/; revision=79222
* Grab Giant around postsig() since sendsig() can call into the vm to grow the stack and we already needed Giant for KTRACE. — John Baldwin (2001-07-03, 1 file, -0/+2)
  Notes: svn path=/head/; revision=79125
* Move ast() and userret() to sys/kern/subr_trap.c now that they are MI. — John Baldwin (2001-06-29, 1 file, -1162/+17)
  Notes: svn path=/head/; revision=78983
* Add a new MI pointer to the process' trapframe, p_frame, instead of using various differently named pointers buried under p_md. — John Baldwin (2001-06-29, 1 file, -3/+3)
  Reviewed by: jake (in principle)
  Notes: svn path=/head/; revision=78962
* Grab Giant around trap_pfault() for now. — John Baldwin (2001-06-29, 1 file, -0/+4)
  Notes: svn path=/head/; revision=78946
* - Grab the proc lock around CURSIG and postsig(). Don't release the proc lock until after grabbing the sched_lock to avoid CURSIG racing with psignal. — John Baldwin (2001-06-22, 1 file, -3/+4)
  - Don't grab Giant for addupc_task() as it isn't needed.
  Reported by: tegge (signal race), bde (addupc_task a while back)
  Notes: svn path=/head/; revision=78636
* Don't hold sched_lock across addupc_task(). — John Baldwin (2001-06-06, 1 file, -1/+1)
  Reported by: David Taylor <davidt@yadt.co.uk>
  Submitted by: bde
  Notes: svn path=/head/; revision=77796
* Don't acquire Giant just to call trap_fatal(); we are about to panic anyway, so we'd rather see the printfs than block if the system is hosed. — John Baldwin (2001-05-23, 1 file, -4/+0)
  Notes: svn path=/head/; revision=77097
* Convert npx interrupts into traps instead of vice versa. — Bruce Evans (2001-05-22, 1 file, -0/+22)
  This is much simpler for npx exceptions that start as traps (no assembly required...) and works better for npx exceptions that start as interrupts (there is no longer a problem for nested interrupts).
  Submitted by: original (pre-SMPng) version by luoqi
  Notes: svn path=/head/; revision=77015
* Introduce a global lock for the vm subsystem (vm_mtx). — Alfred Perlstein (2001-05-19, 1 file, -6/+7)
  vm_mtx does not recurse and is required for most low-level vm operations. Faults cannot be taken without holding Giant. Memory subsystems can now call the base page allocators safely. Almost all atomic ops were removed as they are covered under the vm mutex.
  Alpha and ia64 now need to catch up to i386's trap handlers. FFS and NFS have been tested; other filesystems will need minor changes (grabbing the vm lock when twiddling page properties).
  Reviewed (partially) by: jake, jhb
  Notes: svn path=/head/; revision=76827
* Remove unneeded includes of sys/ipl.h and machine/ipl.h. — John Baldwin (2001-05-15, 1 file, -1/+0)
  Notes: svn path=/head/; revision=76650
* Simplify the vm fault trap handling code a bit by using if-else instead of duplicating code in the then case and then using a goto to jump around the else case. — John Baldwin (2001-05-11, 1 file, -29/+15)
  Notes: svn path=/head/; revision=76494
* Overhaul of the SMP code. — John Baldwin (2001-04-27, 1 file, -2/+2)
  Several portions of the SMP kernel support have been made machine independent and various other adjustments have been made to support Alpha SMP.
  - It splits the per-process portions of hardclock() and statclock() off into hardclock_process() and statclock_process() respectively. hardclock() and statclock() call the *_process() functions for the current process so that UP systems will run as before. For SMP systems, it is simply necessary to ensure that all other processors execute the *_process() functions when the main clock functions are triggered on one CPU by an interrupt. For the alpha 4100, clock interrupts are delivered in a staggered broadcast fashion, so we simply call hardclock/statclock on the boot CPU and call the *_process() functions on the secondaries. For x86, we call statclock and hardclock as usual and then call forward_hardclock/statclock in the MD code to send an IPI to cause the APs to execute forward_hardclock/statclock, which then call the *_process() functions.
  - forward_signal() and forward_roundrobin() have been reworked to be MI and to involve less hackery. Now the cpu doing the forward sets any flags, etc. and sends a very simple IPI_AST to the other cpu(s). AST IPIs now just basically return so that they can execute ast() and don't bother with setting the astpending or needresched flags themselves. This also removes the loop in forward_signal() as sched_lock closes the race condition that the loop worked around.
  - need_resched(), resched_wanted() and clear_resched() have been changed to take a process to act on rather than assuming curproc so that they can be used to implement forward_roundrobin() as described above.
  - Various other SMP variables have been moved to a MI subr_smp.c and a new header sys/smp.h declares MI SMP variables and APIs. The IPI APIs from machine/ipl.h have moved to machine/smp.h, which is included by sys/smp.h.
  - The globaldata_register() and globaldata_find() functions as well as the SLIST of globaldata structures have become MI and moved into subr_smp.c. Also, the globaldata list is only available if SMP support is compiled in.
  Reviewed by: jake, peter
  Looked over by: eivind
  Notes: svn path=/head/; revision=76078
* - Release Giant a bit earlier on syscall exit. — John Baldwin (2001-03-07, 1 file, -20/+14)
  - Don't try to grab Giant before postsig() in userret() as it is no longer needed.
  - Don't grab Giant before psignal() in ast(), but get the proc lock instead.
  Notes: svn path=/head/; revision=73931
* - Rename the lcall system call handler from Xsyscall to Xlcall_syscall to be more like Xint0x80_syscall and less like the C function syscall(). — Jake Burkholder (2001-02-25, 1 file, -3/+3)
  - Reduce code duplication between the int0x80 and lcall handlers by shuffling the eflags into the right place, saving the sizeof the instruction in tf_err and jumping into the common int0x80 code.
  Reviewed by: peter
  Notes: svn path=/head/; revision=73001
* The p_md.md_regs member of proc is used in signal handling to reference the original trapframe of the syscall, trap, or interrupt that entered the kernel. — John Baldwin (2001-02-22, 1 file, -0/+1)
  Before SMPng, ast's were handled via a pseudo trap at the end of doreti. With the SMPng commit, ast's were broken out into a separate ast() function that was called from doreti to match the behavior of other architectures. Unfortunately, when this was done, the p_md.md_regs member of curproc was not updated in ast(), thus when signals are handled by userret() after an interrupt that returns to userland, we end up using a stale trapframe that will result in the registers from the old trapframe overwriting the real trapframe and smashing all the registers right before we return to usermode. The saved %cs:%eip from where we were in usermode are saved in the trapframe, for example.
  Notes: svn path=/head/; revision=72917
* - Change ast() to take a pointer to a trapframe like other architectures. — John Baldwin (2001-02-22, 1 file, -7/+7)
  - Don't use an atomic operation to update cnt.v_soft in ast(). This is the only place the variable is written to, and sched_lock is always held when it is written, so it is already protected and the mutex release of sched_lock asserts a memory barrier that ensures the value will be updated in a timely fashion.
  Notes: svn path=/head/; revision=72911
* - Use TRAPF_PC() on the alpha to access the PC in the trap frame. — John Baldwin (2001-02-22, 1 file, -3/+2)
  - Don't hold sched_lock around addupc_task() as this apparently breaks profiling badly due to sched_lock being held across copyin().
  Reported by: bde (2)
  Notes: svn path=/head/; revision=72900