path: root/sys/kern/subr_trap.c
* Move p->p_sigmask to td->td_sigmask. Signal masks will be per thread with
  a follow-on commit to kern_sig.c.  [Jeff Roberson, 2003-03-31; 1 file, -4/+5]
  - signotify() now operates on a thread since unmasked pending signals are
    stored in the thread.
  - PS_NEEDSIGCHK moves to TDF_NEEDSIGCHK.
  Notes: svn path=/head/; revision=112888
* Change trapsignal() to accept a thread and not a proc.  [Jeff Roberson, 2003-03-31; 1 file, -1/+1]
  - Change all consumers to pass in a thread. Right now this does not cause
    any functional changes but it will be important later when signals can
    be delivered to specific threads.
  Notes: svn path=/head/; revision=112883
* Fix a signal delivery bug for threaded processes.  [David Xu, 2003-03-11; 1 file, -2/+8]
  Notes: svn path=/head/; revision=112077
* Replace calls to WITNESS_SLEEP() and witness_list() with equivalent
  calls to WITNESS_WARN().  [John Baldwin, 2003-03-04; 1 file, -4/+1]
  Notes: svn path=/head/; revision=111883
* Change the process flag P_KSES to be P_THREADED.  [Julian Elischer, 2003-02-27; 1 file, -2/+2]
  This is just a cosmetic change, but I've been meaning to do it for about
  a year.
  Notes: svn path=/head/; revision=111585
* Add a new function, thread_signal_add(), that is called from postsig to
  add a signal to a mailbox's pending set.  [Jeff Roberson, 2003-02-17; 1 file, -1/+8]
  - Add a new function, thread_signal_upcall(), which causes the current
    thread to upcall so that we can deliver pending signals.
  Reviewed by: mini
  Notes: svn path=/head/; revision=111033
* Move a bunch of flags from the KSE to the thread.  [Julian Elischer, 2003-02-17; 1 file, -9/+8]
  I was in two minds as to where to put them in the first case.. I should
  have listened to the other mind.
  Submitted by: parts by davidxu@
  Reviewed by: jeff@ mini@
  Notes: svn path=/head/; revision=111032
* Move ke_sticks, ke_iticks, ke_uticks, ke_uu, ke_su, and ke_iu back into
  the proc.  [Jeff Roberson, 2003-02-17; 1 file, -2/+2]
  These counters are only examined through calcru.
  Submitted by: davidxu
  Tested on: x86, alpha, UP/SMP
  Notes: svn path=/head/; revision=111024
* Reversion of commit by Davidxu plus fixes since applied.  [Julian Elischer, 2003-02-01; 1 file, -32/+24]
  I'm not convinced there is anything major wrong with the patch but
  them's the rules.. I am using my "David's mentor" hat to revert this as
  he's offline for a while.
  Notes: svn path=/head/; revision=110190
* Use a local variable to store the number of ticks that elapsed in kernel
  mode instead of (unintentionally) using the global `ticks'.  [Tim J. Robbins, 2003-01-31; 1 file, -2/+3]
  This error completely broke profiling.
  Notes: svn path=/head/; revision=110140
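  The bug pattern is easy to show outside the kernel. A minimal,
  self-contained sketch (invented names, not the actual subr_trap.c code):

      #include <stdio.h>

      /* Stand-in for the kernel-wide `ticks' counter bumped by hardclock(). */
      static int ticks = 100000;

      /* BUG: charges the absolute uptime instead of the elapsed interval. */
      static int
      elapsed_broken(int sticks)
      {
              (void)sticks;
              return ticks;
      }

      /* FIX: compute the delta into a local variable and use that. */
      static int
      elapsed_fixed(int sticks)
      {
              int prticks = ticks - sticks;   /* kernel-mode ticks spent */

              return prticks;
      }

      int
      main(void)
      {
              int sticks = 99990;     /* tick count sampled at kernel entry */

              printf("broken: %d, fixed: %d\n",
                  elapsed_broken(sticks), elapsed_fixed(sticks));
              return (0);
      }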
* Move the UPCALL-related data structure out of the kse and introduce a
  new data structure, kse_upcall, to manage UPCALLs.  [David Xu, 2003-01-26; 1 file, -24/+31]
  All KSE binding and loaning code is gone.

  A thread that owns an upcall can collect all completed syscall contexts
  in its ksegrp, turn itself into UPCALL mode, and take those contexts
  back to userland. Any thread without an upcall structure has to export
  its context and exit at the user boundary.

  Any thread running in user mode owns an upcall structure. When it
  enters the kernel and the kse mailbox's current thread pointer is not
  NULL, then if the thread blocks in the kernel, a new UPCALL thread is
  created and the upcall structure is transferred to it. If the kse
  mailbox's current thread pointer is NULL, no UPCALL thread is created
  when a thread blocks in the kernel.

  Each upcall always has an owner thread. Userland can remove an upcall
  by calling kse_exit; when all upcalls in a ksegrp are removed, the
  group is automatically shut down. An upcall owner thread also exits
  when the process is in the exiting state, and when an owner thread
  exits, the upcall it owns is removed as well.

  KSE is now a pure scheduler entity; it represents a virtual CPU. When a
  thread is running, it always has a KSE associated with it. The
  scheduler is free to assign a KSE to a thread according to thread
  priority, and if a thread's priority changes, its KSE can be moved from
  one thread to another.

  When a ksegrp is created, N KSEs are created in the group, where N is
  the number of physical CPUs in the current system. This makes it
  possible for threads in the kernel to execute on different CPUs in
  parallel even if the userland UTS is only single-CPU safe. Userland
  calls kse_create to add more upcall structures to a ksegrp to increase
  concurrency in userland itself; the kernel is not restricted by the
  number of upcalls userland provides.

  The code hasn't been tested under SMP by the author due to lack of
  hardware.
  Reviewed by: julian
  Notes: svn path=/head/; revision=109877
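  A simplified sketch of what such a per-upcall record might look like;
  the field names below are illustrative, not the actual layout from the
  sys/proc.h of that era:

      #include <sys/queue.h>

      struct thread;
      struct ksegrp;
      struct kse_mailbox;

      /* Hypothetical shape of the per-upcall bookkeeping described above. */
      struct kse_upcall {
              TAILQ_ENTRY(kse_upcall) ku_link;    /* on the ksegrp's upcall list */
              struct ksegrp      *ku_ksegrp;      /* group this upcall belongs to */
              struct thread      *ku_owner;       /* owner thread; upcall dies with it */
              struct kse_mailbox *ku_mailbox;     /* userland mailbox to report to */
              int                 ku_flags;       /* e.g. "upcall pending" state */
      };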
* Add code to ddb to allow backtracing an arbitrary thread
  (show thread {address}).  [Julian Elischer, 2002-12-28; 1 file, -1/+0]
  Remove the IDLE kse state and replace it with a change in the way
  threads share KSEs. Every KSE now has a thread, which is considered its
  "owner"; however, a KSE may also be lent to other threads in the same
  group to allow completion of in-kernel work. In this case the owner
  remains the same and the KSE will revert to the owner when the other
  work has been completed.

  All creation of upcalls etc. is now done from kse_reassign(), which in
  turn is called from mi_switch or thread_exit(). This means that special
  code can be removed from msleep() and cv_wait().

  kse_release() no longer leaves a KSE with no thread; it converts the
  existing thread into the KSE's owner and sets it up for doing an
  upcall. It is just inhibited from being scheduled until there is some
  reason to do an upcall.

  Remove all trace of the kse_idle queue since it is no longer needed.
  "Idle" KSEs are now on the loanable queue.
  Notes: svn path=/head/; revision=108338
* To reduce the per-return overhead of userret(), call into
  mac_thread_userret() only if PS_MACPEND is set in the process AST
  mask.  [Robert Watson, 2002-11-08; 1 file, -4/+7]
  This avoids the cost of the entry point in the common case, but
  requires policies interested in the userret event to set the flag
  (protected by the scheduler lock) if they do want the event. Since all
  the policies we're working with that use mac_thread_userret() use the
  entry point only selectively, to perform operations deferred for
  locking reasons, this maintains the desired semantics.
  Approved by: re
  Requested by: bde
  Obtained from: TrustedBSD Project
  Sponsored by: DARPA, Network Associates Laboratories
  Notes: svn path=/head/; revision=106655
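  The optimization is the usual pattern of a cheap flag test guarding an
  expensive hook. A self-contained user-space sketch (struct layouts and
  the sched_lock protection are elided; PS_MACPEND and mac_thread_userret
  are the names from the commit, everything else is invented):

      #include <stdio.h>

      #define PS_MACPEND 0x01

      struct proc   { int p_sflag; };
      struct thread { struct proc *td_proc; };

      static void
      mac_thread_userret(struct thread *td)
      {
              (void)td;
              printf("deferred MAC userret work runs\n");
      }

      /* Only pay for the policy entry point when a policy asked for it. */
      static void
      userret(struct thread *td)
      {
              struct proc *p = td->td_proc;

              if (p->p_sflag & PS_MACPEND) {
                      p->p_sflag &= ~PS_MACPEND;
                      mac_thread_userret(td);
              }
      }

      int
      main(void)
      {
              struct proc p = { PS_MACPEND };
              struct thread td = { &p };

              userret(&td);   /* hook fires once */
              userret(&td);   /* common case: flag clear, no call */
              return (0);
      }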
* Back out david's last commit.  [Julian Elischer, 2002-10-26; 1 file, -2/+13]
  The suspension code needs to be called for non-KSE processes too.
  Notes: svn path=/head/; revision=105974
* Move suspension checking code from userret() into thread_userret().  [David Xu, 2002-10-26; 1 file, -13/+2]
  Notes: svn path=/head/; revision=105972
* Create a new scheduler API that is defined in sys/sched.h.  [Jeff Roberson, 2002-10-12; 1 file, -14/+4]
  - Begin moving scheduler-specific functionality into sched_4bsd.c.
  - Replace direct manipulation of scheduler data with hooks provided by
    the new API.
  - Remove KSE-specific state modifications and single-runq assumptions
    from kern_switch.c.
  Reviewed by: -arch
  Notes: svn path=/head/; revision=104964
* Move p_cpulimit to struct proc from struct plimit and protect it with
  sched_lock.  [John Baldwin, 2002-10-09; 1 file, -3/+4]
  This means that we no longer access p_limit in mi_switch() and the
  p_limit pointer can be protected by the proc lock.
  - Remove the PRS_ZOMBIE check from the CPU limit test in mi_switch().
    PRS_ZOMBIE processes don't call mi_switch(), and even if they did
    there is no longer the danger of p_limit being NULL (which is what
    the original zombie check was added for).
  - When we bump the current process's soft CPU limit in ast(), just bump
    the private p_cpulimit instead of the shared rlimit. This fixes an
    XXX for some value of fix. There is still a (probably benign) bug in
    that this code doesn't check that the new soft limit exceeds the hard
    limit.
  Inspired by: bde (2)
  Notes: svn path=/head/; revision=104719
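  The last bullet is straightforward to sketch. A self-contained model of
  bumping a process-private soft limit (struct names and the five-second
  grant are invented; the sketch also adds the hard-limit clamp the
  commit notes was still missing):

      #include <stdio.h>

      struct rlimit_s { long rlim_cur, rlim_max; };   /* soft, hard (seconds) */
      struct proc_s   { long p_cpulimit; struct rlimit_s p_rlimit_cpu; };

      /* On soft-limit overrun, extend the private copy, not the shared
       * rlimit, and never past the hard limit. */
      static void
      ast_cpulimit(struct proc_s *p)
      {
              long next = p->p_cpulimit + 5;  /* grant five more seconds */

              if (next > p->p_rlimit_cpu.rlim_max)
                      next = p->p_rlimit_cpu.rlim_max;
              p->p_cpulimit = next;
      }

      int
      main(void)
      {
              struct proc_s p = { 10, { 10, 12 } };

              ast_cpulimit(&p);
              printf("new soft cpu limit: %ld (hard %ld)\n",
                  p.p_cpulimit, p.p_rlimit_cpu.rlim_max);
              return (0);
      }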
* Access td->td_kse inside sched_lock.  [Juli Mallett, 2002-10-02; 1 file, -2/+2]
  Submitted by: julian
  Notes: svn path=/head/; revision=104383
* De-obfuscate local use of members of 'struct thread', for which we have
  local variables, and group assignment.  [Juli Mallett, 2002-10-02; 1 file, -3/+4]
  Notes: svn path=/head/; revision=104378
* Add a new MAC entry point, mac_thread_userret(td), which permits policy
  modules to perform MAC-related events when a thread returns to user
  space.  [Robert Watson, 2002-10-02; 1 file, -0/+6]
  This is required for policies that have floating process labels, as
  it's not always possible to acquire the process lock at arbitrary
  points in the stack during system call processing; process labels might
  represent traditional authentication data, process history information,
  or other data. LOMAC will use this entry point to perform the process
  label update prior to the thread returning to userspace, when plugged
  into the MAC framework.
  Obtained from: TrustedBSD Project
  Sponsored by: DARPA, Network Associates Laboratories
  Notes: svn path=/head/; revision=104338
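  One way to picture such an entry point is as an optional slot in a
  policy's operations vector that the framework walks on the way back to
  user space. A self-contained sketch (the mpo_thread_userret slot name
  and the dispatch loop here are illustrative, not the framework's actual
  registration machinery):

      #include <stddef.h>
      #include <stdio.h>

      struct thread { int td_tid; };

      /* A policy that wants the userret event supplies a function;
       * others leave the slot NULL. */
      struct mac_policy_ops {
              const char *mpo_name;
              void (*mpo_thread_userret)(struct thread *td);
      };

      static void
      lomac_thread_userret(struct thread *td)
      {
              printf("lomac: update label for tid %d before return\n",
                  td->td_tid);
      }

      static struct mac_policy_ops policies[] = {
              { "lomac", lomac_thread_userret },
              { "none",  NULL },
      };

      /* Framework-side dispatch: give each interested policy a chance. */
      static void
      mac_thread_userret(struct thread *td)
      {
              size_t i;

              for (i = 0; i < sizeof(policies) / sizeof(policies[0]); i++)
                      if (policies[i].mpo_thread_userret != NULL)
                              policies[i].mpo_thread_userret(td);
      }

      int
      main(void)
      {
              struct thread td = { 42 };

              mac_thread_userret(&td);
              return (0);
      }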
* Back out our kernel support for reliable signal queues.  [Juli Mallett, 2002-10-01; 1 file, -3/+1]
  Requested by: rwatson, phk, and many others
  Notes: svn path=/head/; revision=104306
* Minor style nits in a comment.  [John Baldwin, 2002-10-01; 1 file, -1/+1]
  Notes: svn path=/head/; revision=104303
* Various style fixups.  [John Baldwin, 2002-10-01; 1 file, -6/+10]
  Submitted by: bde (mostly)
  Notes: svn path=/head/; revision=104297
* Actually clear PS_XCPU in ast() when we handle it.  [John Baldwin, 2002-10-01; 1 file, -1/+1]
  Submitted by: bde
  Pointy hat to: jhb
  Notes: svn path=/head/; revision=104296
* Add a new per-process flag, PS_XCPU, to indicate that at least one
  thread has exceeded its CPU time limit.  [John Baldwin, 2002-09-30; 1 file, -0/+14]
  - In mi_switch(), set PS_XCPU when the CPU time limit is exceeded.
  - Perform the actual CPU-time-limit-exceeded work in ast() when PS_XCPU
    is set.
  Requested by: many
  Notes: svn path=/head/; revision=104240
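  This is the classic "note the condition where it is cheap, act on it
  where it is safe" split. A self-contained sketch (field names and
  numbers invented; locking elided):

      #include <signal.h>
      #include <stdio.h>

      #define PS_XCPU 0x01

      struct proc { int p_sflag; long p_runtime, p_cpulimit; };

      /* mi_switch() side: just note the overrun; no signal work here. */
      static void
      mi_switch_check(struct proc *p)
      {
              if (p->p_runtime > p->p_cpulimit)
                      p->p_sflag |= PS_XCPU;
      }

      /* ast() side: perform the real limit processing on the way back
       * to user mode, where sleeping and locking are permitted. */
      static void
      ast_check(struct proc *p)
      {
              if (p->p_sflag & PS_XCPU) {
                      p->p_sflag &= ~PS_XCPU;
                      printf("would post SIGXCPU (%d)\n", SIGXCPU);
              }
      }

      int
      main(void)
      {
              struct proc p = { 0, 11, 10 };

              mi_switch_check(&p);
              ast_check(&p);
              return (0);
      }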
* First half of the implementation of ksiginfo, signal queues, and
  such.  [Juli Mallett, 2002-09-30; 1 file, -1/+3]
  This gets signals operating based on a TailQ, and is good enough to run
  X11, GNOME, and do job control. There are some intricate parts which
  could be more refined to match the sigset_t versions, but those require
  further evaluation of directions in which our signal system can expand
  and contract to fit our needs.

  After this has been in the tree for a while, I will make in-kernel API
  changes, most notably to trapsignal(9) and sendsig(9), to use ksiginfo
  more robustly, such that we can actually pass information with our
  (queued) signals to userland. That will also result in using a struct
  ksiginfo pointer, rather than a signal number, in a lot of kern_sig.c,
  to refer to an individual pending signal queue member, but right now
  there is no defined behaviour for such.

  CODAFS is unfinished in this regard because the logic is unclear in
  some places.
  Sponsored by: New Gold Technology
  Reviewed by: bde, tjr, jake [an older version, logic similar]
  Notes: svn path=/head/; revision=104233
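  The core difference from a sigset_t is that a queue can hold multiple
  instances of the same signal, each with its own payload. A minimal,
  self-contained sketch (field names modeled on the description above,
  not the eventual kernel structures):

      #include <sys/queue.h>
      #include <stdio.h>
      #include <stdlib.h>

      /* One queued signal instance plus the detail it carries. */
      struct ksiginfo {
              TAILQ_ENTRY(ksiginfo) ksi_link;
              int ksi_signo;          /* signal number */
              int ksi_code;           /* si_code-style detail for userland */
      };

      TAILQ_HEAD(sigqueue, ksiginfo);

      int
      main(void)
      {
              struct sigqueue sq = TAILQ_HEAD_INITIALIZER(sq);
              struct ksiginfo *ksi;
              int i;

              /* Queue two instances of the same signal; a bitmask
               * (sigset_t) could only record "pending or not". */
              for (i = 0; i < 2; i++) {
                      ksi = malloc(sizeof(*ksi));
                      if (ksi == NULL)
                              return (1);
                      ksi->ksi_signo = 10;    /* SIGUSR1 on most systems */
                      ksi->ksi_code = i;
                      TAILQ_INSERT_TAIL(&sq, ksi, ksi_link);
              }
              while ((ksi = TAILQ_FIRST(&sq)) != NULL) {
                      TAILQ_REMOVE(&sq, ksi, ksi_link);
                      printf("deliver signo %d, code %d\n",
                          ksi->ksi_signo, ksi->ksi_code);
                      free(ksi);
              }
              return (0);
      }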
* Slightly clean up the thread_userret() and thread_consider_upcall()
  calls.  [Julian Elischer, 2002-09-23; 1 file, -3/+4]
  Also some slight changes for TDF_BOUND testing and small style changes.
  Should ONLY affect KSE programs.
  Submitted by: davidxu
  Notes: svn path=/head/; revision=103838
* Spell proprly properly:  [Robert Watson, 2002-08-22; 1 file, -1/+1]
      failed to set signal flags proprly for ast()
      failed to set signal flags proprly for ast()
      failed to set signal flags proprly for ast()
      failed to set signal flags proprly for ast()
  Notes: svn path=/head/; revision=102266
* Revert removal of cred_free_thread(): It is used to ensure that a
  thread's credentials are not improperly borrowed when the thread is not
  current in the kernel.  [Jonathan Mini, 2002-07-11; 1 file, -0/+3]
  Requested by: jhb, alfred
  Notes: svn path=/head/; revision=99753
* Don't slow every syscall and trap by doing locks and stuff if the
  'stop' bits are not set.  [Julian Elischer, 2002-07-10; 1 file, -3/+7]
  This is a temporary thing.. I think this code probably needs to be
  rewritten anyhow.
  Notes: svn path=/head/; revision=99714
* Part 1 of KSE-III.  [Julian Elischer, 2002-06-29; 1 file, -5/+32]
  The ability to schedule multiple threads per process (on one CPU) by
  making ALL system calls optionally asynchronous.

  To come: ia64 and PowerPC patches, patches for gdb, and a test program
  (in tools).
  Reviewed by: Almost everyone who counts (at various times: peter, jhb,
  matt, alfred, mini, bernd, and a cast of thousands)
  NOTE: this is still Beta code and contains lots of debugging stuff.
  Expect slight instability in signals..
  Notes: svn path=/head/; revision=99072
* Remove the unused diagnostic function cred_free_thread().  [Jonathan Mini, 2002-06-24; 1 file, -3/+0]
  Approved by: alfred
  Notes: svn path=/head/; revision=98727
* We no longer need to acquire Giant in ast() for ktrpsig() in postsig(),
  now that ktrace no longer needs Giant.  [John Baldwin, 2002-06-07; 1 file, -2/+0]
  Notes: svn path=/head/; revision=98000
* CURSIG() is not a macro, so rename it cursig().  [Julian Elischer, 2002-05-29; 1 file, -1/+1]
  Obtained from: KSE tree
  Notes: svn path=/head/; revision=97526
* Moved signal handling and rescheduling from userret() to ast() so that
  they aren't in the usual path of execution for syscalls and
  traps.  [Bruce Evans, 2002-04-04; 1 file, -22/+46]
  The main complication for this is that we have to set flags to control
  ast() everywhere that changes the signal mask. Avoid locking in
  userret() in most of the remaining cases.
  Submitted by: luoqi (first part only, long ago, reorganized by me)
  Reminded by: dillon
  Notes: svn path=/head/; revision=93793
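  The "set a flag wherever the mask changes" discipline can be sketched
  in a few lines. A self-contained model (flag and field names invented;
  the real code keys off the per-process signal state under locks):

      #include <stdio.h>

      #define SFLAG_SIGCHECK 0x01     /* "ast() must look at signals" */

      struct proc { int p_sflag; int p_sigmask, p_siglist; };

      /* Every site that changes the mask must re-arm the AST check,
       * since delivery was moved off the syscall fast path. */
      static void
      set_sigmask(struct proc *p, int mask)
      {
              p->p_sigmask = mask;
              if (p->p_siglist & ~p->p_sigmask)
                      p->p_sflag |= SFLAG_SIGCHECK;
      }

      /* ast() runs only on the way back out to user mode. */
      static void
      ast(struct proc *p)
      {
              if (p->p_sflag & SFLAG_SIGCHECK) {
                      p->p_sflag &= ~SFLAG_SIGCHECK;
                      printf("deliver signals 0x%x\n",
                          p->p_siglist & ~p->p_sigmask);
              }
      }

      int
      main(void)
      {
              struct proc p = { 0, 0x3, 0x3 };  /* all pending, all masked */

              set_sigmask(&p, 0x2);   /* unmask signal bit 0x1 */
              ast(&p);
              return (0);
      }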
* Style fixes purposefully left out of last commit.  [Jake Burkholder, 2002-03-29; 1 file, -43/+43]
  I checked the kse tree and didn't see any changes that this conflicts
  with.
  Notes: svn path=/head/; revision=93390
* Remove abuse of intr_disable/restore in MI code by moving the loop in
  ast() back into the calling MD code.  [Jake Burkholder, 2002-03-29; 1 file, -13/+1]
  The MD code must ensure no races between checking the astpending flag
  and returning to usermode.
  Submitted by: peter (ia64 bits)
  Tested on: alpha (peter, jeff), i386, ia64 (peter), sparc64
  Notes: svn path=/head/; revision=93389
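  The race the commit message warns about is worth spelling out: an
  interrupt can set the flag after the check but before the return to
  user mode, so the final check must happen with interrupts disabled. A
  user-space model of the MD loop (the intr_* bodies are stubs standing
  in for MD primitives such as cli/sti):

      #include <stdio.h>

      static int astpending = 1;      /* one AST request queued */

      static void intr_disable(void) { /* MD: e.g. cli */ }
      static void intr_enable(void)  { /* MD: e.g. sti */ }

      static void
      ast(void)
      {
              astpending = 0;
              printf("ast: handled pending work\n");
      }

      /* Re-check the flag with interrupts off right before returning;
       * any work found is done with interrupts back on, then re-check. */
      static void
      return_to_usermode(void)
      {
              for (;;) {
                      intr_disable();
                      if (!astpending)
                              break;
                      intr_enable();
                      ast();
              }
              printf("returning to user mode with no AST pending\n");
              intr_enable();          /* the real iret path re-enables */
      }

      int
      main(void)
      {
              return_to_usermode();
              return (0);
      }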
* Remove last two abuses of cpu_critical_{enter,exit} in the MI
  code.  [Warner Losh, 2002-03-21; 1 file, -5/+5]
  Reviewed by: jake, jhb, rwatson
  Notes: svn path=/head/; revision=92858
* Change the way we ensure td_ucred is NULL if DIAGNOSTIC is
  defined.  [John Baldwin, 2002-03-20; 1 file, -30/+3]
  Instead of caching the ucred reference, just go ahead and eat the
  decrement and increment of the refcount. Now that Giant is pushed down
  into crfree(), we no longer have to get Giant in the common case. In
  the case when we are actually freeing the ucred, we would normally free
  it on the next kernel entry, so the cost there is not new, just in a
  different place. This also removes td_cache_ucred from struct thread.
  This is still only done #ifdef DIAGNOSTIC.
  [ missed this file in the previous commit ]
  Tested on: i386, alpha
  Notes: svn path=/head/; revision=92825
* Make this compile.  [Jake Burkholder, 2002-02-23; 1 file, -1/+1]
  Pointy hat to: julian
  Notes: svn path=/head/; revision=91103
* Add some DIAGNOSTIC code.  [Julian Elischer, 2002-02-22; 1 file, -6/+30]
  While in userland, keep the thread's ucred reference in a shadow field
  so that the usual place to store it is NULL. If DIAGNOSTIC is not set,
  the thread ucred is kept valid until the next kernel entry, at which
  time it is checked against the process cred and possibly corrected.
  Produces a BIG speedup in kernels with INVARIANTS set. (A previous
  commit corrected it for the non-INVARIANTS case already.)
  Reviewed by: dillon@freebsd.org
  Notes: svn path=/head/; revision=91090
* If the credential on an incoming thread is correct, don't bother
  reacquiring it.  [Julian Elischer, 2002-02-17; 1 file, -3/+4]
  In the same vein, don't bother dropping the thread cred when going to
  userland. We are guaranteed to need it when we come back (which we are
  guaranteed to do).
  Reviewed by: jhb@freebsd.org, bde@freebsd.org (slightly different
  version)
  Notes: svn path=/head/; revision=90748
* In a threaded world, different priorities become properties of
  different entities. Make it so.  [Julian Elischer, 2002-02-11; 1 file, -1/+1]
  Reviewed by: jhb@freebsd.org (john baldwin)
  Notes: svn path=/head/; revision=90538
* Changed the type of pcb_flags from u_char to u_int and adjusted
  things.  [Bruce Evans, 2002-01-17; 1 file, -1/+1]
  This removes the only atomic operation on a char type in the entire
  kernel.
  Notes: svn path=/head/; revision=89466
* Change the preemption code for software interrupt thread schedules and
  mutex releases to not require flags for the cases when preemption is
  not allowed.  [John Baldwin, 2002-01-05; 1 file, -1/+1]
  The purpose of the MTX_NOSWITCH and SWI_NOSWITCH flags is to prevent
  switching to a higher-priority thread on mutex release and swi
  schedule, respectively, when that switch is not safe. Now that the
  critical section API maintains a per-thread nesting count, the kernel
  can easily check whether or not it should switch without relying on
  flags from the programmer. This fixes a few bugs in that all current
  callers of swi_sched() used SWI_NOSWITCH, when in fact only the ones
  called from fast interrupt handlers and the swi_sched of softclock
  needed this flag.

  Note that to ensure that swi_sched()'s in clock and fast interrupt
  handlers do not switch, these handlers have to be explicitly wrapped in
  critical_enter/exit pairs. Presently, just wrapping the handlers is
  sufficient, but in the future with the fully preemptive kernel, the
  interrupt must be EOI'd before critical_exit() is called.
  (critical_exit() can switch due to a deferred preemption in a fully
  preemptive kernel.)

  I've tested the changes to the interrupt code on i386 and alpha. I have
  not tested ia64, but the interrupt code is almost identical to the
  alpha code, so I expect it will work fine. PowerPC and ARM do not yet
  have interrupt code in the tree, so they shouldn't be broken. Sparc64
  is broken, but that's been ok'd by jake and tmm, who will be fixing the
  interrupt code for sparc64 shortly.
  Reviewed by: peter
  Tested on: i386, alpha
  Notes: svn path=/head/; revision=88900
* Axe a stale comment.  [John Baldwin, 2002-01-04; 1 file, -8/+0]
  Holding sched_lock across both setrunqueue() and mi_switch() is
  sufficient.
  Notes: svn path=/head/; revision=88875
* Change all callers of addupc_task() to check PS_PROFIL explicitly and
  remove the check from addupc_task().  [John Baldwin, 2001-12-18; 1 file, -7/+12]
  It would need sched_lock while testing the flag anyways.
  - Always read sticks while holding sched_lock, using a temporary
    variable where needed.
  - Always init prticks to 0 in ast() to quiet a warning.
  Notes: svn path=/head/; revision=88119
* Modify the critical section API as follows:  [John Baldwin, 2001-12-18; 1 file, -4/+4]
  - The MD functions critical_enter/exit are renamed to start with a cpu_
    prefix.
  - MI wrapper functions critical_enter/exit maintain a per-thread
    nesting count and a per-thread critical section saved state, set when
    entering a critical section while at nesting level 0 and restored
    when exiting to nesting level 0. This moves the saved state out of
    spin mutexes so that interlocking spin mutexes works properly.
  - Most low-level MD code that used critical_enter/exit now uses
    cpu_critical_enter/exit. MI code such as device drivers and spin
    mutexes use the MI wrappers. Note that since the MI wrappers store
    the state in the current thread, they do not have any return values
    or arguments.
  - mtx_intr_enable() is replaced with a constant CRITICAL_FORK which is
    assigned to curthread->td_savecrit during fork_exit().
  Tested on: i386, alpha
  Notes: svn path=/head/; revision=88088
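  The nesting-count wrapper is small enough to model completely. A
  self-contained user-space sketch (the thread argument is explicit here
  for clarity; the real wrappers operate on curthread, and the cpu_*
  stubs stand in for MD interrupt control):

      #include <stdio.h>

      struct thread { int td_critnest; int td_savecrit; };

      static int intr_enabled = 1;    /* models the CPU interrupt state */

      static int
      cpu_critical_enter(void)
      {
              int s = intr_enabled;

              intr_enabled = 0;
              return s;
      }

      static void
      cpu_critical_exit(int s)
      {
              intr_enabled = s;
      }

      /* Only the outermost enter/exit touches the hardware state. */
      static void
      critical_enter(struct thread *td)
      {
              if (td->td_critnest == 0)
                      td->td_savecrit = cpu_critical_enter();
              td->td_critnest++;
      }

      static void
      critical_exit(struct thread *td)
      {
              if (--td->td_critnest == 0)
                      cpu_critical_exit(td->td_savecrit);
      }

      int
      main(void)
      {
              struct thread td = { 0, 0 };

              critical_enter(&td);
              critical_enter(&td);    /* nests; hardware state untouched */
              critical_exit(&td);
              critical_exit(&td);     /* restores interrupt state here */
              printf("interrupts enabled again: %d\n", intr_enabled);
              return (0);
      }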
* Add a per-thread ucred reference for syscalls and synchronous traps
  from userland.  [John Baldwin, 2001-10-26; 1 file, -3/+11]
  The per-thread ucred reference is immutable and thus needs no locks to
  be read. However, until all the proc locking associated with writes to
  p_ucred is completed, it is still not safe to use the per-thread
  reference.
  Tested on: x86 (SMP), alpha, sparc64
  Notes: svn path=/head/; revision=85525
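  The idea is a refcounted snapshot taken at kernel entry: the thread's
  copy cannot change for the duration of the syscall, so reads need no
  locking. A self-contained sketch (simplified crhold/crfree; the real
  ones use locks or atomics, and the real ucred has many more fields):

      #include <stdio.h>

      struct ucred  { int cr_ref; int cr_uid; };
      struct proc   { struct ucred *p_ucred; };
      struct thread { struct proc *td_proc; struct ucred *td_ucred; };

      static struct ucred *
      crhold(struct ucred *cr)
      {
              cr->cr_ref++;
              return cr;
      }

      static void
      crfree(struct ucred *cr)
      {
              if (--cr->cr_ref == 0)
                      printf("last reference dropped; ucred freed\n");
      }

      /* Kernel entry: snapshot the proc cred for lock-free reads. */
      static void
      syscall_enter(struct thread *td)
      {
              td->td_ucred = crhold(td->td_proc->p_ucred);
      }

      static void
      syscall_exit(struct thread *td)
      {
              crfree(td->td_ucred);
              td->td_ucred = NULL;
      }

      int
      main(void)
      {
              struct ucred cr = { 1, 1001 };
              struct proc p = { &cr };
              struct thread td = { &p, NULL };

              syscall_enter(&td);
              printf("lock-free uid read: %d\n", td->td_ucred->cr_uid);
              syscall_exit(&td);
              return (0);
      }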
* Remove a bogus comment.  [John Baldwin, 2001-09-21; 1 file, -1/+0]
  "atomic" doesn't mean that the operation is done as a physical atomic
  operation. That would require the code to use the atomic API, which it
  does not. Instead, the operation is made pseudo-atomic (hence the
  quotes) by use of the lock to protect clearing all of the flags in
  question.
  Notes: svn path=/head/; revision=83788