path: root/sys/kern/kern_fork.c
Commit log (newest first); each entry lists the commit message, author, date, and files/lines changed.
* Change the process flag P_KSES to P_THREADED. (Julian Elischer, 2003-02-27; 1 file, -3/+3)
    This is just a cosmetic change, but I've been meaning to do it for about a year.
    Notes: svn path=/head/; revision=111585
* Remove the PL_SHAREMOD flag from struct plimit. (Tim J. Robbins, 2003-02-20; 1 file, -10/+3)
    The flag could have been used to share resource limits between rfork threads, but never was. Removing it makes resource limit locking much simpler: only the current process can change the contents of the structure that p_limit points to.
    Notes: svn path=/head/; revision=111163
* Back out M_* changes, per decision of the TRB. (Warner Losh, 2003-02-19; 1 file, -3/+3)
    Approved by: trb
    Notes: svn path=/head/; revision=111119
* Split struct kse into struct upcall and struct kse. (Jeff Roberson, 2003-02-17; 1 file, -2/+0)
    struct kse will soon be visible only to schedulers. This greatly simplifies much of the KSE code.
    Submitted by: davidxu
    Notes: svn path=/head/; revision=111028
* Avoid file lock leakage when the linuxthreads port or rfork is used: (Tor Egge, 2003-02-15; 1 file, -0/+7)
    - Mark the process leader as having an advisory lock.
    - Check if the process leader is marked as having an advisory lock when closing a file.
    - Check that the file is still open after the lock has been obtained.
    - Don't allow file descriptor table sharing between processes with different leaders.
    PR: 10265
    Reviewed by: alfred
    Notes: svn path=/head/; revision=110962
* Revert commit by davidxu, plus fixes applied since. (Julian Elischer, 2003-02-01; 1 file, -0/+2)
    I'm not convinced there is anything major wrong with the patch, but them's the rules. I am using my "David's mentor" hat to revert this as he's offline for a while.
    Notes: svn path=/head/; revision=110190
* Move UPCALL-related data structures out of struct kse; introduce a new data structure, kse_upcall, to manage upcalls. (David Xu, 2003-01-26; 1 file, -2/+0)
    All KSE binding and loaning code is gone. A thread that owns an upcall can collect all completed syscall contexts in its ksegrp, turn itself into UPCALL mode, and take those contexts back to userland. Any thread without an upcall structure has to export its context and exit at the user boundary.
    Any thread running in user mode owns an upcall structure. When it enters the kernel and the kse mailbox's current thread pointer is not NULL, then when the thread blocks in the kernel, a new UPCALL thread is created and the upcall structure is transferred to the new UPCALL thread. If the kse mailbox's current thread pointer is NULL, no UPCALL thread is created when a thread blocks in the kernel.
    Each upcall always has an owner thread. Userland can remove an upcall by calling kse_exit; when all upcalls in a ksegrp are removed, the group is automatically shut down. An upcall owner thread also exits when the process is exiting; when an owner thread exits, the upcall it owns is also removed.
    A KSE is a pure scheduler entity: it represents a virtual CPU. When a thread is running, it always has a KSE associated with it. The scheduler is free to assign a KSE to a thread according to thread priority; if a thread's priority changes, its KSE can be moved from one thread to another. When a ksegrp is created, N KSEs are created in the group, where N is the number of physical CPUs in the current system. This makes it possible for threads in the kernel to execute on different CPUs in parallel even if the userland UTS is only single-CPU safe. Userland calls kse_create to add more upcall structures to a ksegrp to increase concurrency in userland itself; the kernel is not restricted by the number of upcalls userland provides.
    The code hasn't been tested under SMP by the author due to lack of hardware.
    Reviewed by: julian
    Notes: svn path=/head/; revision=109877
* Remove M_TRYWAIT/M_WAITOK/M_WAIT; callers should use 0. (Alfred Perlstein, 2003-01-21; 1 file, -3/+3)
    Merge M_NOWAIT/M_DONTWAIT into a single flag, M_NOWAIT.
    Notes: svn path=/head/; revision=109623
* fdcopy() only needs a filedesc pointer. (Alfred Perlstein, 2003-01-01; 1 file, -2/+2)
    Notes: svn path=/head/; revision=108522
* Since fdshare() and fdinit() only operate on filedescs, make them take pointers to filedesc structures instead of threads. (Alfred Perlstein, 2003-01-01; 1 file, -4/+4)
    This makes it clearer that they do not do any voodoo with the thread/proc or anything other than the filedesc passed in or returned. Remove some XXX KSE comments, as this resolves the issue.
    Notes: svn path=/head/; revision=108520
* Add code to ddb to allow backtracing an arbitrary thread: show thread {address}. (Julian Elischer, 2002-12-28; 1 file, -0/+1)
    Remove the IDLE kse state and replace it with a change in the way threads share KSEs. Every KSE now has a thread, which is considered its "owner"; however, a KSE may also be lent to other threads in the same group to allow completion of in-kernel work. In this case the owner remains the same and the KSE will revert to the owner when the other work has been completed.
    All creation of upcalls etc. is now done from kse_reassign(), which in turn is called from mi_switch() or thread_exit(). This means that special code can be removed from msleep() and cv_wait().
    kse_release() no longer leaves a KSE with no thread; it converts the existing thread into the KSE's owner and sets it up for doing an upcall. It is just inhibited from being scheduled until there is some reason to do an upcall.
    Remove all trace of the kse_idle queue since it is no longer needed. "Idle" KSEs are now on the loanable queue.
    Notes: svn path=/head/; revision=108338
* Unbreak the KSE code: keep track of zombie threads using per-CPU storage during the context switch. (Julian Elischer, 2002-12-10; 1 file, -2/+8)
    Rearrange thread cleanups to avoid problems with Giant. Clean threads when freed or when recycled.
    Approved by: re (jhb)
    Notes: svn path=/head/; revision=107719
* Introduce p_label, extensible security label storage for the MAC framework in struct proc. (Robert Watson, 2002-11-20; 1 file, -0/+5)
    While the process label is actually stored in the struct ucred pointed to by p_ucred, there is a need for transient storage that may be used when asynchronous (deferred) updates need to be performed on the "real" label for locking reasons. Unlike other label storage, this label has no locking semantics, relying on policies to provide their own protection for the label contents; this means a policy leaf mutex may be used, avoiding lock order issues. It permits policies that act based on historical process behavior (such as audit policies, the MAC Framework port of LOMAC, etc.) to update process properties even when many existing locks are held, without violating the lock order. No currently committed policies make use of this label storage.
    Approved by: re
    Obtained from: TrustedBSD Project
    Sponsored by: DARPA, Network Associates Laboratories
    Notes: svn path=/head/; revision=107105
* Fix a leaked process lock reference in the event an RFTHREAD process leader wasn't exiting during a fork. (Robert Watson, 2002-11-18; 1 file, -1/+2)
    Remember to release the lock, avoiding lock order reversals and a recursion panic.
    Reported by: "Joel M. Baldwin" <qumqats@outel.org>
    Notes: svn path=/head/; revision=107061
* Do not lock the process when calling fdfree(), since we don't need the proc lock to change p_fd. (John Baldwin, 2002-10-18; 1 file, -4/+0)
    Previously this would have recursed on a non-recursive lock, the proc lock.
    Notes: svn path=/head/; revision=105410
* Add a new global mutex, 'ppeers_lock', to protect the p_peers list of processes forked with RFTHREAD. (John Baldwin, 2002-10-15; 1 file, -38/+50)
    - Use a goto to a label for common code when exiting from fork1() in case of an error.
    - Move the RFTHREAD linkage setup code later in fork, since ppeers_lock cannot be locked while holding a proc lock. Handle the race of a task leader exiting and killing its peers while a peer is forking a new child. In that case, go ahead and let the peer process proceed normally, as the parent is about to kill it. However, the task leader may have already gone to sleep to wait for the peers to die, so the new child process may not receive a SIGKILL from the task leader. Rather than try to destruct the new child process, just send it a SIGKILL directly and add it to the p_peers list. This ensures that the task leader will wait until both the peer process doing the fork() and the new child process have received their KILL signals and exited.
    Discussed with: truckman (earlier versions)
    Notes: svn path=/head/; revision=105141
* Create a new scheduler API, defined in sys/sched.h. (Jeff Roberson, 2002-10-12; 1 file, -6/+7)
    - Begin moving scheduler-specific functionality into sched_4bsd.c.
    - Replace direct manipulation of scheduler data with hooks provided by the new API.
    - Remove KSE-specific state modifications and single-runq assumptions from kern_switch.c.
    Reviewed by: -arch
    Notes: svn path=/head/; revision=104964
* Round out the facility for a 'bound' thread to loan out its KSE in specific situations. (Julian Elischer, 2002-10-09; 1 file, -2/+0)
    The owner thread must be blocked, and the borrower cannot proceed back to user space with the borrowed KSE. The borrower will return the KSE on the next context switch where the owner wants it back. This removes a lot of possible race conditions and deadlocks. It is conceivable that the borrower should inherit the priority of the owner too; that's another discussion and would be simple to do.
    Also, as part of this, the "preallocated spare thread" is attached to the thread doing a syscall rather than to the KSE. This removes the need to lock the scheduler when we want to access it, as it's now "at hand".
    DDB now shows a lot more info for threaded processes, though it may need some optimisation to squeeze it all back into 80 chars again. (possible JKH project)
    Upcalls are now "bound" threads, but "KSE lending" now means that other completing syscalls can be completed using that KSE before the upcall finally makes it back to the UTS. (Getting threads OUT OF THE KERNEL is one of the highest priorities in the KSE system.) The upcall, when it happens, will present all the completed syscalls to the KSE for selection.
    Notes: svn path=/head/; revision=104695
* Some kernel threads try to do significant work, and the default KSTACK_PAGES doesn't give them enough stack to do much before blowing away the pcb. (Scott Long, 2002-10-02; 1 file, -4/+9)
    This adds MI and MD code to allow the allocation of an alternate kstack whose size can be specified when calling kthread_create(). Passing the value 0 prevents the alternate kstack from being created. Note that the ia64 MD code is missing for now, and PowerPC was only partially written due to the pmap.c being incomplete there. Though this patch does not modify anything to make use of the alternate kstack, acpi and usb are good candidates.
    Reviewed by: jake, peter, jhb
    Notes: svn path=/head/; revision=104354
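    To illustrate the interface this entry describes, here is a minimal, hedged sketch of a kernel thread created with a larger alternate kstack. The names, the four-page figure, and the exact position of the "pages" argument are assumptions of this sketch, not taken from the log.

        #include <sys/param.h>
        #include <sys/systm.h>
        #include <sys/kernel.h>
        #include <sys/kthread.h>
        #include <sys/proc.h>

        static struct proc *deepwork_proc;      /* hypothetical worker process */

        static void
        deepwork_main(void *arg)
        {
            (void)arg;
            /* ... work that needs a deeper-than-default kernel stack ... */
            kthread_exit(0);            /* era interface: takes an exit code */
        }

        /* Called from some initialization path (e.g. a SYSINIT handler). */
        static void
        deepwork_start(void)
        {
            /*
             * Assumed calling convention: the integer before the name format
             * string requests the alternate kstack size in pages; passing 0
             * would keep the default KSTACK_PAGES-sized stack.
             */
            kthread_create(deepwork_main, NULL, &deepwork_proc, 0, 4, "deepwork");
        }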
* Back out kernel support for reliable signal queues. (Juli Mallett, 2002-10-01; 1 file, -1/+0)
    Requested by: rwatson, phk, and many others
    Notes: svn path=/head/; revision=104306
* First half of the implementation of ksiginfo, signal queues, and such. (Juli Mallett, 2002-09-30; 1 file, -0/+1)
    This gets signals operating based on a TailQ, and is good enough to run X11 and GNOME and do job control. There are some intricate parts which could be refined further to match the sigset_t versions, but those require further evaluation of the directions in which our signal system can expand and contract to fit our needs.
    After this has been in the tree for a while, I will make in-kernel API changes, most notably to trapsignal(9) and sendsig(9), to use ksiginfo more robustly, such that we can actually pass information with our (queued) signals to userland. That will also result in using a struct ksiginfo pointer, rather than a signal number, in a lot of kern_sig.c to refer to an individual pending signal queue member, but right now there is no defined behaviour for such.
    CODAFS is unfinished in this regard because the logic is unclear in some places.
    Sponsored by: New Gold Technology
    Reviewed by: bde, tjr, jake [an older version, logic similar]
    Notes: svn path=/head/; revision=104233
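    The TailQ mentioned here is the standard sys/queue.h tail queue. Purely as a hedged illustration (the type and field names below are invented, not the real ksiginfo layout), a pending-signal list built on it could look like:

        #include <sys/param.h>
        #include <sys/queue.h>

        /* Hypothetical, simplified pending-signal record. */
        struct pending_sig {
            int                       ps_signo;   /* signal number */
            TAILQ_ENTRY(pending_sig)  ps_link;    /* queue linkage */
        };

        TAILQ_HEAD(sigqueue, pending_sig);

        static void
        sigqueue_enqueue(struct sigqueue *q, struct pending_sig *ps)
        {
            TAILQ_INSERT_TAIL(q, ps, ps_link);    /* append in delivery order */
        }

        static struct pending_sig *
        sigqueue_find(struct sigqueue *q, int signo)
        {
            struct pending_sig *ps;

            TAILQ_FOREACH(ps, q, ps_link)
                if (ps->ps_signo == signo)
                    return (ps);
            return (NULL);
        }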
* Add kernel support needed for the KSE-aware libpthread: (Jonathan Mini, 2002-09-16; 1 file, -2/+0)
    - Use ucontext_t's to store KSE thread state.
    - Synthesize state for the UTS upon each upcall, rather than saving and copying a trapframe.
    - Deliver signals to KSE-aware processes via upcall.
    - Rename kse mailbox structure fields to be more BSD-like.
    - Store the UTS's stack in struct proc in a stack_t.
    Reviewed by: bde, deischen, julian
    Approved by: -arch
    Notes: svn path=/head/; revision=103410
* Allocate KSEs and KSEGRPs separately and remove them from the proc structure. (Julian Elischer, 2002-09-15; 1 file, -5/+3)
    The next step is to allow more than one to be allocated per process, which would give multi-processor threads (when the rest of the infrastructure is in place). While doing this I noticed that libkvm and sys/kern/kern_proc.c:fill_kinfo_proc are diverging more than they should; corrective action is needed soon.
    Notes: svn path=/head/; revision=103367
* Completely redo thread states. (Julian Elischer, 2002-09-11; 1 file, -0/+1)
    Reviewed by: davidxu@freebsd.org
    Notes: svn path=/head/; revision=103216
* Use UMA as a complex object allocator. (Julian Elischer, 2002-09-06; 1 file, -33/+3)
    The process allocator now caches and hands out complete process structures, *including substructures*; i.e. it gets the process structure with the first thread (and soon KSE) already allocated and attached, all in one hit. For the average non-threaded (non-KSE) program, the allocated thread and its stack remain attached to the process even when the process is unused and in the process cache. This saves having to allocate and attach them later, effectively bringing us (hopefully) close to the efficiency of pre-KSE systems, where these were a single structure.
    Reviewed by: davidxu@freebsd.org, peter@freebsd.org
    Notes: svn path=/head/; revision=103002
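    As a hedged sketch of the general pattern described here (not the actual proc zone code), a UMA zone can keep an expensive substructure attached across alloc/free cycles by attaching it in the zone's init hook and detaching it only in fini. The 'widget' types are hypothetical, and the callback prototypes follow the current uma(9) interface, which differs slightly from the 2002-era one.

        #include <sys/param.h>
        #include <sys/systm.h>
        #include <sys/errno.h>
        #include <sys/malloc.h>
        #include <vm/uma.h>

        struct widget_sub {                     /* stand-in for an expensive substructure */
            char ws_pad[128];
        };

        struct widget {
            struct widget_sub *w_sub;           /* stays attached while cached */
            int w_busy;
        };

        static uma_zone_t widget_zone;

        /* Runs only when UMA grows the zone, not on every allocation. */
        static int
        widget_init(void *mem, int size, int flags)
        {
            struct widget *w = mem;

            w->w_sub = malloc(sizeof(*w->w_sub), M_TEMP, M_NOWAIT | M_ZERO);
            return (w->w_sub == NULL ? ENOMEM : 0);
        }

        /* Runs only when UMA finally releases the item back to the VM. */
        static void
        widget_fini(void *mem, int size)
        {
            struct widget *w = mem;

            free(w->w_sub, M_TEMP);
        }

        static void
        widget_zone_setup(void)
        {
            widget_zone = uma_zcreate("widget", sizeof(struct widget),
                NULL, NULL, widget_init, widget_fini, UMA_ALIGN_PTR, 0);
        }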
* s/SGNL/SIG/ (David Xu, 2002-09-05; 1 file, -1/+1)
    s/SNGL/SINGLE/
    s/SNGLE/SINGLE/
    Fix the abbreviations for the P_STOPPED_* etc. flags; in the original code they were inconsistent and difficult to distinguish.
    Approved by: julian (mentor)
    Notes: svn path=/head/; revision=102950
* Slight cleanup of the single-threading code for KSE processes. (Julian Elischer, 2002-08-22; 1 file, -0/+9)
    Notes: svn path=/head/; revision=102292
* Move the code block added in 1.157 to a safer part of fork1(). (Matthew N. Dodd, 2002-08-07; 1 file, -9/+9)
    Submitted by: jake
    Notes: svn path=/head/; revision=101457
* Kernel modifications necessary to allow following fork()ed children. (Matthew N. Dodd, 2002-08-04; 1 file, -0/+10)
    PR: bin/25587 (in part)
    MFC after: 3 weeks
    Notes: svn path=/head/; revision=101284
* Update docs to reflect the change in the count of procs reserved for root from 1 to 10. (Mike Silbersack, 2002-07-30; 1 file, -1/+1)
    PR: kern/40515
    Submitted by: David Schultz <dschultz@uclink.Berkeley.EDU>
    MFC after: 1 day
    Notes: svn path=/head/; revision=100908
* Wire the sysctl output buffer before grabbing any locks, to prevent SYSCTL_OUT() from blocking while locks are held. (Don Lewis, 2002-07-28; 1 file, -0/+1)
    This should only be done when it would be inconvenient to make a temporary copy of the data and defer calling SYSCTL_OUT() until after the locks are released.
    Notes: svn path=/head/; revision=100831
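    As a hedged sketch of this pattern (using the sysctl_wire_old_buffer() helper available in later FreeBSD; 'foo_lock', 'foo_buf', and 'foo_len' are hypothetical), a handler wires the user buffer up front so SYSCTL_OUT() cannot fault and sleep while the lock is held:

        #include <sys/param.h>
        #include <sys/systm.h>
        #include <sys/kernel.h>
        #include <sys/lock.h>
        #include <sys/sx.h>
        #include <sys/sysctl.h>

        static struct sx foo_lock;      /* assumed initialized with sx_init() at boot */
        static char foo_buf[128];
        static size_t foo_len;

        static int
        sysctl_foo(SYSCTL_HANDLER_ARGS)
        {
            int error;

            /* Wire the old (output) buffer before taking any locks. */
            error = sysctl_wire_old_buffer(req, 0);
            if (error != 0)
                return (error);

            sx_slock(&foo_lock);
            error = SYSCTL_OUT(req, foo_buf, foo_len);  /* cannot block on a fault now */
            sx_sunlock(&foo_lock);
            return (error);
        }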
* Part of a greater patch set. (Julian Elischer, 2002-07-14; 1 file, -1/+1)
    1. No need to set td_state to TDS_RUNNING in fork_return(); it's already set in choosethread().
    2. Set a child process's state to "normal", as opposed to "new", when we allow it to be put on the run queue. This allows the child to receive signals from the parent if the parent runs first and tries to immediately signal the child.
    Submitted by: (part 2) Thomas Moestl <tmoestl@gmx.net>
    Notes: svn path=/head/; revision=99945
* Thinking about it, I came to the conclusion that the KSE states were incorrectly formulated. (Julian Elischer, 2002-07-14; 1 file, -3/+1)
    The correct states should be:
    - IDLE: on the idle KSE list for that KSEG.
    - RUNQ: linked onto the system run queue.
    - THREAD: attached to a thread and slaved to whatever state the thread is in.
    This means that most places where we were adjusting KSE state can go away, as it is just moving around because the thread is. The only places we need to adjust the KSE state are in transitions to and from the idle and run queues.
    Reviewed by: jhb@freebsd.org
    Notes: svn path=/head/; revision=99942
* Revert the removal of cred_free_thread(): it is used to ensure that a thread's credentials are not improperly borrowed when the thread is not current in the kernel. (Jonathan Mini, 2002-07-11; 1 file, -0/+3)
    Requested by: jhb, alfred
    Notes: svn path=/head/; revision=99753
* Part 1 of KSE-III. (Julian Elischer, 2002-06-29; 1 file, -21/+54)
    The ability to schedule multiple threads per process (on one CPU) by making ALL system calls optionally asynchronous.
    To come: ia64 and PowerPC patches, patches for gdb, test program (in tools).
    Reviewed by: almost everyone who counts (at various times: peter, jhb, matt, alfred, mini, bernd, and a cast of thousands)
    NOTE: this is still beta code and contains lots of debugging stuff. Expect slight instability in signals.
    Notes: svn path=/head/; revision=99072
* Remove unused diagnostic function cred_free_thread(). (Jonathan Mini, 2002-06-24; 1 file, -3/+0)
    Approved by: alfred
    Notes: svn path=/head/; revision=98727
* Proper locking for p_tracep and p_traceflag. (John Baldwin, 2002-06-07; 1 file, -7/+7)
    - Catch up to the new ktrace API.
    Notes: svn path=/head/; revision=97998
* Protect randompid and nprocs with the allproc_lock. (John Baldwin, 2002-05-02; 1 file, -101/+122)
    - Reorder fork1() to do malloc() and other blocking operations prior to acquiring the needed process locks (see the sketch after this entry).
    - The new process inherits the credentials of curthread, not the credentials of the old process.
    - Document a really weird race that will come up when KSE allows multiple kernel threads per process.
    Notes: svn path=/head/; revision=95938
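    The reordering rule above is a general one: do M_WAITOK allocations and other operations that may sleep before acquiring the locks, so no lock is ever held across a potential sleep. A hedged, generic illustration (the 'thing' type, lock, and helper are hypothetical, not the fork1() code itself):

        #include <sys/param.h>
        #include <sys/systm.h>
        #include <sys/lock.h>
        #include <sys/malloc.h>
        #include <sys/mutex.h>

        struct thing {
            int t_id;
        };

        static struct mtx thing_lock;   /* assumed initialized with mtx_init() at boot */

        static void
        thing_list_insert(struct thing *t)
        {
            (void)t;                    /* placeholder for quick, non-sleeping work */
        }

        static int
        thing_create(void)
        {
            struct thing *t;

            /* May sleep: do it while holding no locks. */
            t = malloc(sizeof(*t), M_TEMP, M_WAITOK | M_ZERO);

            mtx_lock(&thing_lock);
            thing_list_insert(t);       /* only non-blocking work under the lock */
            mtx_unlock(&thing_lock);
            return (0);
        }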
* Lock proctree_lock instead of pgrpsess_lock. (John Baldwin, 2002-04-16; 1 file, -2/+2)
    Notes: svn path=/head/; revision=94861
* Whitespace changes to wrap long lines. (John Baldwin, 2002-04-09; 1 file, -4/+8)
    Notes: svn path=/head/; revision=94303
* Change callers of mtx_init() to pass in an appropriate lock type name. (John Baldwin, 2002-04-04; 1 file, -1/+1)
    In most cases NULL is passed, but in some cases, such as network driver locks (which use the MTX_NETWORK_LOCK macro) and UMA zone locks, a name is used.
    Tested on: i386, alpha, sparc64
    Notes: svn path=/head/; revision=93818
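    A hedged sketch of the four-argument mtx_init() form this change introduces ('foo_mtx' and the strings are illustrative): most callers pass NULL for the type, letting the name double as the witness type; locks that are many instances of one logical class share a common type string instead.

        #include <sys/param.h>
        #include <sys/lock.h>
        #include <sys/mutex.h>

        static struct mtx foo_mtx;

        static void
        foo_lock_init(void)
        {
            /* Name only; witness uses the name as the lock type. */
            mtx_init(&foo_mtx, "foo state", NULL, MTX_DEF);
        }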
* Fix leakage of the p_pgrp lock. (Seigo Tanimura, 2002-04-02; 1 file, -0/+4)
    Notes: svn path=/head/; revision=93679
* Stage-2 commit of the critical*() code. (Matthew Dillon, 2002-04-01; 1 file, -0/+1)
    This re-inlines cpu_critical_enter() and cpu_critical_exit() and moves the associated critical prototypes into their own header file, <arch>/<arch>/critical.h, which is only included by the three MI source files that need it.
    Back out and re-apply improperly committed syntactical cleanups made to files that were still under active development. Back out improperly committed program structure changes that moved localized declarations to the top of two procedures. Partially re-apply one of the program structure changes to move 'mask' into an intermediate block rather than in three separate sub-blocks, to make the code more readable. Re-integrate bug fixes that Jake made to the sparc64 code.
    Note: in general, developers should not gratuitously move declarations out of sub-blocks. They are where they are for reasons of structure, grouping, readability, compiler-localizability, and to avoid developer-introduced bugs similar to several found in recent years in the VFS and VM code.
    Reviewed by: jake
    Notes: svn path=/head/; revision=93607
* Make the reference counting of 'struct pargs' SMP-safe. (Alfred Perlstein, 2002-03-27; 1 file, -2/+1)
    There are still some locations where the PROC lock should be held in order to prevent inconsistent views from outside (like the proc->p_fd fix for kern/vfs_syscalls.c:checkdirs()); these can be fixed later.
    Submitted by: Jonathan Mini <mini@haikugeek.com>
    Notes: svn path=/head/; revision=93295
* Add a new mtx_init() option, "MTX_DUPOK", which allows duplicate acquires of locks with this flag. (Jeff Roberson, 2002-03-27; 1 file, -1/+1)
    Remove the dup_list and dup_ok code from subr_witness; now we just check for the flag instead of doing string compares. Also, switch the process lock, process group lock, and UMA per-CPU locks over to this interface. The original mechanism did not work well for UMA because per-CPU lock names are unique to each zone.
    Approved by: jhb
    Notes: svn path=/head/; revision=93273
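    A hedged sketch of what MTX_DUPOK buys: two locks of the same witness class can be held at once without a "duplicate lock" complaint. The names are illustrative, and the four-argument mtx_init() form shown here postdates this particular commit.

        #include <sys/param.h>
        #include <sys/lock.h>
        #include <sys/mutex.h>

        static struct mtx item_mtx_a, item_mtx_b;

        static void
        item_locks_init(void)
        {
            /* Both locks share a class; DUPOK permits holding both together. */
            mtx_init(&item_mtx_a, "item lock", NULL, MTX_DEF | MTX_DUPOK);
            mtx_init(&item_mtx_b, "item lock", NULL, MTX_DEF | MTX_DUPOK);
        }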
* Compromise for the critical*()/cpu_critical*() recommit. (Matthew Dillon, 2002-03-27; 1 file, -3/+7)
    Clean up the interrupt disablement assumptions in kern_fork.c by adding another API call, cpu_critical_fork_exit(). Clean up the td_savecrit field by moving it from MI to MD. Temporarily move cpu_critical*() from <arch>/include/cpufunc.h to <arch>/<arch>/critical.c (stage-2 will clean this up).
    Implement interrupt deferral for i386 that allows interrupts to remain enabled inside critical sections. This also fixes an IPI interlock bug, and requires uses of icu_lock to be enclosed in a true interrupt disablement.
    This is the stage-1 commit. Stage-2 will occur after stage-1 has stabilized, and will move cpu_critical*() into its own header file(s), plus other things. This commit may break non-i386 architectures in trivial ways; this should be temporary.
    Reviewed by: core
    Approved by: core
    Notes: svn path=/head/; revision=93264
* Add a change mirroring that made to kern/subr_trap.c and others. (Benno Rice, 2002-03-21; 1 file, -9/+3)
    This makes kernel builds with DIAGNOSTIC work again.
    Apparently forgotten by: jhb
    Might want to be checked by: jhb
    Notes: svn path=/head/; revision=92852
* Remove references to vm_zone.h and switch over to the new UMA API. (Jeff Roberson, 2002-03-20; 1 file, -2/+2)
    Also, remove maxsockets. If you look carefully you'll notice that the old zone allocator never honored this anyway.
    Notes: svn path=/head/; revision=92751
* Revert last commit temporarily due to whining on the lists. (Matthew Dillon, 2002-02-26; 1 file, -8/+1)
    Notes: svn path=/head/; revision=91328
* STAGE-1 of 3 commit: allow (but do not require) interrupts to remain enabled in critical sections, and streamline critical_enter() and critical_exit(). (Matthew Dillon, 2002-02-26; 1 file, -1/+8)
    This commit allows an architecture to leave interrupts enabled inside critical sections if it so wishes. Architectures that do not wish to do this are not affected by this change.
    This commit implements the feature for the i386 architecture and provides a sysctl, debug.critical_mode, which defaults to 1 (use the feature). For now you can turn the sysctl on and off at any time in order to test the architectural changes or track down bugs.
    This commit is just the first stage. Some areas of the code, specifically the MACHINE_CRITICAL_ENTER #ifdef'd code, are strictly temporary and will be cleaned up in the STAGE-2 commit when the critical_*() functions are moved entirely into MD files.
    The following changes have been made:
    - critical_enter() and critical_exit() for i386 now simply increment and decrement curthread->td_critnest. They no longer disable hard interrupts. When critical_exit() decrements the counter to 0, it effectively calls a routine to deal with whatever interrupts were deferred during the time the code was operating in a critical section. Other architectures are unaffected.
    - fork_exit() has been conditionalized to remove MD assumptions for the new code. Old code will still use the old MD assumptions in regards to hard interrupt disablement. In STAGE-2 this will be turned into a subroutine call into MD code rather than hardcoded in MI code. The new code places the burden of entering the critical section in the trampoline code where it belongs.
    - i386: interrupts are now enabled while we are in a critical section. The interrupt vector code has been adjusted to deal with this fact. If it detects that we are in a critical section, it currently defers the interrupt by adding the appropriate bit to an interrupt mask.
    - In order to accomplish the deferral, icu_lock is required. This is i386-specific; thus icu_lock can only be obtained by mainline i386 code while interrupts are hard-disabled. This change has been made.
    - Because interrupts may or may not be hard-disabled during a context switch, cpu_switch() can no longer simply assume that PSL_I will be in a consistent state. Therefore, it now saves and restores eflags.
    - FAST INTERRUPT PROVISION. Fast interrupts are currently deferred. The intention is to eventually allow them to operate either while we are in a critical section or, if we are able to restrict the use of sched_lock, while we are not holding the sched_lock.
    - ICU and APIC vector assembly for i386 cleaned up. The ICU code has been cleaned up to match the APIC code in regards to format and macro availability. Additionally, the code has been adjusted to deal with deferred interrupts.
    - Deferred interrupts use a per-cpu boolean int_pending and masks ipending, spending, and fpending. Being per-cpu variables, it is not currently necessary to lock bus cycles modifying them. Note that the same mechanism will enable preemption to be incorporated as a true software interrupt without having to further hack up the critical nesting code.
    - Note: the old critical_enter() code in kern/kern_switch.c is currently #ifdef'd to be compatible with both the old and new methodology. In STAGE-2 it will be moved entirely to MD code.
    Performance issues: one of the purposes of this commit is to enhance critical section performance, specifically to greatly reduce bus overhead so that the critical section code can be used to protect per-cpu caches. These caches, such as Jeff's slab allocator work, can potentially operate very quickly, making the effective savings of the new critical section code's performance very significant.
    The second purpose of this commit is to allow architectures to enable certain interrupts while in a critical section. Specifically, the intention is to eventually allow certain FAST interrupts to operate rather than defer.
    The third purpose of this commit is to begin to clean up the critical_enter()/critical_exit()/cpu_critical_enter()/cpu_critical_exit() API, which currently has serious cross pollution in MI code (in fork_exit() and ast(), for example).
    The fourth purpose of this commit is to provide a framework that allows kernel-preempting software interrupts to be implemented cleanly. This is currently used for two forward interrupts in i386. Other architectures will have the choice of using this infrastructure or building the functionality directly into critical_enter()/critical_exit().
    Finally, this commit is designed to greatly improve the flexibility of various architectures to manage critical section handling, software interrupts, preemption, and other highly integrated architecture-specific details.
    Notes: svn path=/head/; revision=91315
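    As a hedged sketch of the per-cpu-cache use case this entry describes (the counter array is hypothetical, not part of the commit), a critical section keeps the current thread on its CPU so a per-CPU slot can be updated without further locking:

        #include <sys/param.h>
        #include <sys/systm.h>
        #include <sys/proc.h>
        #include <sys/pcpu.h>

        static int pcpu_counter[MAXCPU];        /* hypothetical per-CPU cache */

        static void
        pcpu_counter_bump(void)
        {
            critical_enter();                   /* no preemption from here on */
            pcpu_counter[PCPU_GET(cpuid)]++;    /* safe: we stay on this CPU */
            critical_exit();                    /* deferred work runs as needed */
        }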