path: root/sys/kern/kern_thread.c
* David Xu, 2003-07-17 (1 file, -62/+61):
  o Refine kse_thr_interrupt to allow it to handle different commands.
  o Remove TDF_NOSIGPOST.
  o Add a member td_waitset to the proc structure; it will be used for sigwait.
  Tested by: deischen
  Notes: svn path=/head/; revision=117704
* David Xu, 2003-07-15 (1 file, -1/+1):
  If the initial thread is still a bound thread, don't change its signal mask.
  Notes: svn path=/head/; revision=117637
* David Xu, 2003-07-15 (1 file, -1/+1):
  Rename thread_siginfo to cpu_thread_siginfo.
  Notes: svn path=/head/; revision=117607
* Mike Makonnen, 2003-07-04 (1 file, -1/+1):
  kse_thr_interrupt should target the thread, specifically.
  Requested by: davidxu
  Notes: svn path=/head/; revision=117212
* Mike Makonnen, 2003-07-03 (1 file, -1/+1):
  Signals sent specifically to a particular thread must be delivered to that
  thread, regardless of whether it has the signal masked or not.

  Previously, if the targeted thread had the signal masked, it would be put on
  the process's siglist. If another thread had the signal unmasked, or unmasked
  it before the target did, then the thread the signal was intended for would
  never receive it.

  This patch attempts to solve the problem by requiring callers of tdsignal()
  to say whether the signal is for the thread or for the process. If it is for
  the process, then normal processing occurs and any thread that has it
  unmasked can receive it. But if it is destined for a specific thread, it is
  put on that thread's pending list regardless of whether it is currently
  masked or not.

  The new behaviour still needs more work, though. If the signal is reposted
  for some reason, it is always posted back to the thread that handled it,
  because the information regarding the target of the signal has been lost by
  then.

  Reviewed by: jdp, jeff, bde (style)
  Notes: svn path=/head/; revision=117205
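  For illustration, a minimal C sketch of the dispatch policy described in the
  entry above. The target parameter and its values (SIGTARGET_TD, SIGTARGET_P)
  are placeholder names, not necessarily the identifiers used in kern_sig.c,
  and the per-thread pending set is likewise sketched with generic field names.

    #include <sys/param.h>
    #include <sys/proc.h>
    #include <sys/signalvar.h>

    /* Placeholder names for the "who is this signal for?" hint. */
    enum sigtarget { SIGTARGET_P, SIGTARGET_TD };

    static void
    sketch_tdsignal(struct thread *td, int sig, enum sigtarget target)
    {
        struct proc *p = td->td_proc;

        if (target == SIGTARGET_TD) {
            /* Destined for this specific thread: queue it on the thread's
             * own pending set even if it is currently masked. */
            SIGADDSET(td->td_siglist, sig);
        } else if (!SIGISMEMBER(td->td_sigmask, sig)) {
            /* Process-directed and unmasked here: this thread may take it. */
            SIGADDSET(td->td_siglist, sig);
        } else {
            /* Process-directed but masked: leave it pending on the process
             * for any thread that has (or later gets) it unmasked. */
            SIGADDSET(p->p_siglist, sig);
        }
    }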
* David Xu, 2003-06-30 (1 file, -1/+1):
  Fix typo.
  Notes: svn path=/head/; revision=117069
* Marcel Moolenaar, 2003-06-28 (1 file, -4/+4):
  Don't use fuword() and suword() on struct members of type int. This happens
  to work on 32-bit platforms as sizeof(long) == sizeof(int), but wreaks all
  kinds of havoc (garbage reads, corrupting writes and misaligned loads/stores)
  on 64-bit architectures.

  The fix for now is to use fuword32() and suword32() and change the type of
  the applicable int fields to int32. This is to make it explicit that we
  depend on these fields being 32-bit. We may want to revisit this later.

  Reviewed by: deischen
  Notes: svn path=/head/; revision=117000
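  A minimal sketch of the pattern the entry above moves to. The struct and
  field names are invented for illustration; fuword32() and suword32() are the
  32-bit user-memory access primitives declared in <sys/systm.h>.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/errno.h>

    /* Hypothetical userland-visible structure with an explicitly 32-bit field. */
    struct example_mailbox {
        uint32_t    em_flags;    /* was a plain int before the change */
    };

    static int
    example_set_flag(struct example_mailbox *um, uint32_t flag)
    {
        uint32_t oldflags;

        /* fuword()/suword() operate on longs, so on LP64 platforms they would
         * access 8 bytes here; the 32-bit variants match the field size. */
        oldflags = fuword32(&um->em_flags);
        if (oldflags == (uint32_t)-1)
            return (EFAULT);
        if (suword32(&um->em_flags, oldflags | flag) != 0)
            return (EFAULT);
        return (0);
    }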
* David Xu, 2003-06-28 (1 file, -56/+121):
  o Change kse_thr_interrupt to allow sending a signal to a specified thread
    or unblocking a thread in the kernel, and to allow the UTS to specify
    whether the syscall should be restarted.
  o Add the ability for the UTS to monitor signals coming into and being
    removed from the process; the flag PS_SIGEVENT is used to indicate these
    events.
  o Add a KMF_WAITSIGEVENT KSE mailbox flag; the UTS calls kse_release with
    this flag set to wait for the above signal events.
  o For SA-based threads, the kernel masks all signals in its signal mask and
    lets the UTS use kse_thr_interrupt to interrupt a thread and install a
    signal frame in userland for the thread.
  o Add a tm_syncsig field to the thread mailbox; when a hardware trap occurs,
    it is used to deliver the synchronous signal to userland, and an upcall is
    scheduled so the UTS can process the synchronous signal for the thread.
  Reviewed by: julian (mentor)
  Notes: svn path=/head/; revision=116963
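  As an illustration of the KMF_WAITSIGEVENT usage described above, a minimal
  UTS-side (userland) sketch. It assumes the km_flags and km_sigscaught fields
  of struct kse_mailbox from <sys/kse.h> and omits all error handling.

    #include <sys/types.h>
    #include <sys/kse.h>

    /* Sketch only: block this KSE until the kernel reports a signal event
     * (a signal posted to or removed from the process). */
    static void
    uts_wait_for_signal_event(struct kse_mailbox *km)
    {
        km->km_flags |= KMF_WAITSIGEVENT;
        kse_release(NULL);    /* returns when a signal event occurs */
        km->km_flags &= ~KMF_WAITSIGEVENT;
        /* The UTS can now inspect km->km_sigscaught and decide which user
         * thread should take the signal (e.g. via kse_thr_interrupt()). */
    }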
* David Xu, 2003-06-20 (1 file, -4/+10):
  cpu_set_upcall_kse needs to access userspace; release the scheduler lock
  before calling it for a bound thread. To avoid this problem, change
  thread_schedule_upcall to not put the new thread on the run queue and let
  the caller do it, so we can tweak the new thread before setting it to run.
  Reported by: pho
  Notes: svn path=/head/; revision=116607
* David Xu, 2003-06-16 (1 file, -0/+2):
  Forgot to commit code to disable creating a bound thread in the same group
  again, except for the first kse_create syscall.
  Noticed by: julian
  Notes: svn path=/head/; revision=116452
* David Xu, 2003-06-16 (1 file, -1/+3):
  Reset ncpus to 1 for a bound thread group since there is only one thread in
  such a group. Change the message text from kse_rel to kserel; it is better
  displayed in top.
  Notes: svn path=/head/; revision=116440
* David Xu, 2003-06-15 (1 file, -55/+63):
  1. Add code to support bound threads. When blocked, a bound thread never
     schedules an upcall. Signal delivery to a bound thread is the same as for
     a non-threaded process. This is intended to be used by libpthread to
     implement PTHREAD_SCOPE_SYSTEM threads.
  2. Simplify kse_release() a bit; remove the sleep loop.
  Notes: svn path=/head/; revision=116401
* David Xu, 2003-06-15 (1 file, -16/+6):
  1. Migrate TDF_UPCALLING from td_flags to td_pflags.
  2. Add a flag TDF_SA; it will be used to distinguish SA-based threads from
     bound threads.
  Notes: svn path=/head/; revision=116372
* David Xu, 2003-06-15 (1 file, -6/+6):
  Rename P_THREADED to P_SA. P_SA means a process is using scheduler
  activations.
  Notes: svn path=/head/; revision=116361
* Alan Cox, 2003-06-14 (1 file, -2/+3):
  Migrate the thread stack management functions from the machine-dependent to
  the machine-independent parts of the VM. At the same time, this introduces
  vm object locking for the non-i386 platforms.

  Two details:
  1. KSTACK_GUARD has been removed in favor of KSTACK_GUARD_PAGES. The
     different machine-dependent implementations used various combinations of
     KSTACK_GUARD and KSTACK_GUARD_PAGES. To disable the guard page, set
     KSTACK_GUARD_PAGES to 0.
  2. Remove the (unnecessary) clearing of PG_ZERO in vm_thread_new. In 5.x
     (but not 4.x), PG_ZERO can only be set if VM_ALLOC_ZERO is passed to
     vm_page_alloc() or vm_page_grab().
  Notes: svn path=/head/; revision=116355
* David Xu, 2003-06-11 (1 file, -5/+8):
  Fix error in my last commit. Correctly maintain p_maxthrwaits and unlock
  sched_lock.
  Notes: svn path=/head/; revision=116184
* David E. O'Brien, 2003-06-11 (1 file, -2/+3):
  Use __FBSDID().
  Notes: svn path=/head/; revision=116182
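  For context, the idiom the commit above switches the file to is the standard
  FreeBSD source-id macro from <sys/cdefs.h>; a minimal sketch (the id string
  itself is expanded by the repository):

    #include <sys/cdefs.h>
    __FBSDID("$FreeBSD$");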
* David Xu, 2003-06-10 (1 file, -4/+3):
  If there are signals delivered to the current thread, break out of the loop;
  userret() will be called again by ast() and thread_userret() will be called
  again by userret().
  Reported by: tegge
  Notes: svn path=/head/; revision=116138
* David Xu, 2003-06-06 (1 file, -3/+5):
  thread_signal_add is now called with ps_mtx held; unlock it before calling
  copyin.
  Notes: svn path=/head/; revision=115884
* Marcel Moolenaar, 2003-06-04 (1 file, -2/+1):
  Change the second (and last) argument of cpu_set_upcall(). Previously we
  were passing in a void* representing the PCB of the parent thread. Now we
  pass a pointer to the parent thread itself.

  The prime reason for this change is to allow cpu_set_upcall() to copy
  (parts of) the trapframe instead of having it done in MI code in each caller
  of cpu_set_upcall(). Copying the trapframe cannot always be done with a
  simple bcopy(), or may not always be optimal that way. On ia64 specifically,
  the trapframe contains information that is specific to an entry into the
  kernel and can only be used by the corresponding exit from the kernel. A
  trapframe copied verbatim from another frame is in most cases useless
  without some additional normalization.

  Note that this change removes the assignment to td->td_frame in some
  implementations of cpu_set_upcall(). The assignment is redundant. A previous
  call to cpu_thread_setup() already did the exact same assignment. An added
  benefit of removing the redundant assignment is that we can now change
  td_pcb without nasty side-effects.

  This change officially marks the ability on ia64 for 1:1 threading.

  Not tested on: amd64, powerpc
  Compile & boot tested on: alpha, sparc64
  Functionally tested on: i386, ia64
  Notes: svn path=/head/; revision=115858
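  A minimal sketch of the prototype change described above; the parameter name
  td0 for the parent thread is a guess used only for illustration.

    /* Before: the MD code only saw the parent thread's PCB. */
    void    cpu_set_upcall(struct thread *td, void *pcb);

    /* After: the parent thread itself is passed, so each MD implementation
     * can copy and normalize (parts of) the parent's trapframe as needed. */
    void    cpu_set_upcall(struct thread *td, struct thread *td0);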
* Julian Elischer, 2003-06-04 (1 file, -49/+27):
  Remove unneeded code. Don't copyin() data we are about to overwrite. Add a
  flag to tell userland that KSE is officially "DONE" with the mailbox and has
  gone away.
  Obtained from: davidxu@
  Notes: svn path=/head/; revision=115790
* Marcel Moolenaar, 2003-06-01 (1 file, -14/+0):
  Remove the ia64 hackery in threadinit() that was needed to work around the
  lameness of the kstack code. The EPC overhaul de-lame-ified the kstack code
  by removing the need for contigmalloc(). We can now allocate stacks using
  malloc(). We probably want to make the stacks swappable as well so that we
  can make it MI. But that's another story.
  Notes: svn path=/head/; revision=115600
* Poul-Henning Kamp, 2003-05-31 (1 file, -2/+0):
  Remove unused variable(s).
  Found by: FlexeLint
  Notes: svn path=/head/; revision=115549
* Marcel Moolenaar, 2003-05-16 (1 file, -1/+1):
  Revamp of the syscall path, exception and context handling. The prime
  objectives are:
  o Implement a syscall path based on the epc instruction (see
    sys/ia64/ia64/syscall.s).
  o Revisit the places where we need to save and restore registers and define
    those contexts in terms of the register sets (see
    sys/ia64/include/_regset.h).

  Secondary objectives:
  o Remove the requirement to use contigmalloc for kernel stacks.
  o Better handling of the high FP registers for SMP systems.
  o Switch to the new cpu_switch() and cpu_throw() semantics.
  o Add a good unwinder to reconstruct contexts for the rare cases we need to
    (see sys/contrib/ia64/libuwx).

  Many files are affected by this change. Functionally it boils down to:
  o The EPC syscall doesn't preserve registers it does not need to preserve
    and places the arguments differently on the stack. This affects libc and
    truss.
  o The address of the kernel page directory (kptdir) had to be unstaticized
    for use by the nested TLB fault handler. The name has been changed to
    ia64_kptdir to avoid conflicts. The renaming affects libkvm.
  o The trapframe only contains the special registers and the scratch
    registers. For syscalls using the EPC syscall path no scratch registers
    are saved. This affects all places where the trapframe is accessed, most
    notably the unaligned access handler, the signal delivery code and the
    debugger.
  o Context switching only partly saves the special registers and the
    preserved registers. This affects cpu_switch() and triggered the move to
    the new semantics, which additionally affects cpu_throw().
  o The high FP registers are either in the PCB or on some CPU. Context
    switching for them is done lazily. This affects trap().
  o The mcontext has room for all registers, but not all of them have to be
    defined in all cases. This mostly affects signal delivery code now. The
    *context syscalls are as of yet still unimplemented.

  Many details went into the removal of the requirement to use contigmalloc
  for kernel stacks. The details are mostly CPU specific and limited to
  exception_save() and exception_restore(). The few places where we create,
  destroy or switch stacks were mostly simplified by not having to construct
  physical addresses and additionally saving the virtual addresses for later
  use.

  Besides more efficient context saving and restoring, which of course yields
  a noticeable speedup, this also fixes the dreaded SMP bootup problem as a
  side-effect. The details of which are still not fully understood.

  This change includes all the necessary backward compatibility code to have
  it handle older userland binaries that use the break instruction for
  syscalls. Support for break-based syscalls has been pessimized in favor of a
  clean implementation. Due to the overall better performance of the kernel,
  this will still be noticed as an improvement if it's noticed at all.

  Approved by: re@ (jhb)
  Notes: svn path=/head/; revision=115084
* David Xu, 2003-05-01 (1 file, -3/+0):
  Fix compile problem; p_tracee is in my local repository for threaded process
  debugging and is not ready at this time.
  Notes: svn path=/head/; revision=114400
* David Xu, 2003-05-01 (1 file, -1/+5):
  Drop the Giant lock before being suspended and pick it up again after being
  resumed. thread_suspend_check() is used in exit1(), which still needs the
  Giant lock.
  Notes: svn path=/head/; revision=114398
* Peter Wemm, 2003-04-30 (1 file, -1/+1):
  AMD64 uses the new-style cpu_switch()/cpu_throw() calling conventions.
  Notes: svn path=/head/; revision=114336
* David Xu, 2003-04-30 (1 file, -2/+2):
  Increase some default values.
  Notes: svn path=/head/; revision=114268
* David Xu, 2003-04-27 (1 file, -1/+1):
  Unlock sched_lock at the right time.
  Notes: svn path=/head/; revision=114106
* Daniel Eischen, 2003-04-25 (1 file, -15/+1):
  Add an argument to get_mcontext() which specifies whether the syscall return
  values should be cleared. The system calls getcontext() and swapcontext()
  want to return 0 on success, but these contexts can be switched to at a
  later time, so the return values need to be cleared in the saved register
  sets. Other callers of get_mcontext() would normally want the context
  without clearing the return values.

  Remove the i386-specific context saving from the KSE code. get_mcontext()
  is not i386-specific any more.

  Fix a bad pointer in the alpha get_mcontext() code. The context was being
  bcopy()'d from &td->tf_frame, but tf_frame is itself a pointer, so the
  thread was being copied instead. Spotted by jake.

  Glanced at by: jake
  Reviewed by: bde (months ago)
  Notes: svn path=/head/; revision=113998
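  A minimal sketch of how callers might use the new argument. The flag name
  GET_MC_CLEAR_RET and the exact prototype are assumptions here, used only to
  illustrate the two classes of callers the entry above describes.

    #include <sys/param.h>
    #include <sys/proc.h>
    #include <sys/ucontext.h>

    /* Assumed post-commit interface (MD implementations per platform). */
    int     get_mcontext(struct thread *td, mcontext_t *mcp, int flags);

    static int
    example_getcontext(struct thread *td, ucontext_t *ucp)
    {
        /* getcontext()/swapcontext() want the saved context to report
         * success (0) when it is eventually switched back to, so the
         * syscall return values are cleared in the saved register set. */
        return (get_mcontext(td, &ucp->uc_mcontext, GET_MC_CLEAR_RET));
    }

    static int
    example_snapshot(struct thread *td, mcontext_t *mcp)
    {
        /* Other callers want the register set as-is. */
        return (get_mcontext(td, mcp, 0));
    }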
* John Baldwin, 2003-04-23 (1 file, -13/+14):
  - Protect p_numthreads with the sched_lock.
  - Protect p_singlethread with both the sched_lock and the proc lock.
  - Protect p_suspcount with the proc lock.
  Notes: svn path=/head/; revision=113920
* John Baldwin, 2003-04-22 (1 file, -5/+3):
  - Mark the kse_purge_group() and kse_purge() definitions static to match
    their prototypes.
  - Remove sched_lock locking from kse_purge() as all callers already lock the
    sched_lock before calling it.
  - Hold the proc lock slightly longer to protect P_SHOULDSTOP().
  Notes: svn path=/head/; revision=113864
* David Xu, 2003-04-21 (1 file, -3/+2):
  Fix lock order reversal problem.
  Notes: svn path=/head/; revision=113795
* David Xu, 2003-04-21 (1 file, -51/+36):
  Introduce two flags to control upcall behaviour:

  o KMF_NOUPCALL
    Ask kse_release not to return to the userland upcall entry, but instead to
    return directly to userland using the current thread's stack and the
    return address on the stack. This flag is intended to be used by the UTS
    in a critical region to wait for another UTS thread to leave the critical
    region, by calling kse_release with this flag set instead of spinning and
    burning CPU. This flag can also be used by the UTS to poll for completed
    contexts when there is nothing to do in userland and it needn't restart
    from its entry like a normal upcall.

  o KMF_NOCOMPLETED
    Ask the kernel not to bring completed thread contexts back to userland
    when doing an upcall. This flag is intended to be used together with the
    flag above when an upcall thread is in a critical region and cannot
    process completed contexts at that time.

  Tested by: deischen
  Notes: svn path=/head/; revision=113793
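  A minimal UTS-side sketch of the critical-region case described above, again
  assuming the km_flags field of struct kse_mailbox from <sys/kse.h> and
  omitting error handling.

    #include <sys/types.h>
    #include <sys/kse.h>

    /* Sketch only: wait inside a UTS critical region without spinning, without
     * being restarted at the upcall entry, and without being handed completed
     * contexts that cannot be processed right now. */
    static void
    uts_critical_wait(struct kse_mailbox *km)
    {
        km->km_flags |= KMF_NOUPCALL | KMF_NOCOMPLETED;
        kse_release(NULL);    /* returns here, on the current stack */
        km->km_flags &= ~(KMF_NOUPCALL | KMF_NOCOMPLETED);
    }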
* David Xu, 2003-04-19 (1 file, -1/+1):
  Test next upcall time correctly.
  Notes: svn path=/head/; revision=113708
* David Xu, 2003-04-19 (1 file, -1/+1):
  Use correct thread pointer.
  Notes: svn path=/head/; revision=113705
* John Baldwin, 2003-04-18 (1 file, -2/+2):
  Use the proc lock to protect p_singlethread and a P_WEXIT test. This fixes a
  couple of potential KSE panics on non-i386 arches that weren't holding the
  proc lock when calling thread_exit().
  Notes: svn path=/head/; revision=113686
* Julian Elischer, 2003-04-18 (1 file, -11/+14):
  Add a thread_unlink() and use it. It could also be used twice in kern_thr.c,
  but that's owned by jeff, so I'll let him change it when he's next there.
  Notes: svn path=/head/; revision=113641
* John Baldwin, 2003-04-17 (1 file, -0/+2):
  Protect td_sigmask with the proc lock.
  Notes: svn path=/head/; revision=113626
* Julian Elischer, 2003-04-10 (1 file, -0/+1):
  Move the _oncpu entry from the KSE to the thread. The entry in the KSE still
  exists, but its purpose will change a bit when we add the ability to lock a
  KSE to a CPU.
  Notes: svn path=/head/; revision=113339
* David Xu, 2003-04-08 (1 file, -7/+5):
  Inherit blocked thread's context for upcall thread.
  Notes: svn path=/head/; revision=113244
* Peter Wemm, 2003-04-02 (1 file, -0/+6):
  Commit a partial lazy thread switch mechanism for i386. It isn't as lazy as
  it could be and can do with some more cleanup. Currently it's under options
  LAZY_SWITCH. What this does is avoid %cr3 reloads for short context switches
  that do not involve another user process, i.e. we can take an interrupt,
  switch to a kthread and return to the user without explicitly flushing the
  TLB. However, this isn't as exciting as it could be; the interrupt overhead
  is still high and too much blocks on Giant still.

  There are some debug sysctls, for stats and for an on/off switch.

  The main problem with doing this has been "what if the process that you're
  running on exits while we're borrowing its address space?" - in this case we
  use an IPI to give it a kick when we're about to reclaim the pmap.

  It's not compiled in unless you add the LAZY_SWITCH option. I want to fix a
  few more things and get some more feedback before turning it on by default.

  This is NOT a replacement for Bosko's lazy interrupt stuff. This was more
  meant for the kthread case, while his was for interrupts. Mine helps a
  little for interrupts, but his helps a lot more.

  The stats are enabled with options SWTCH_OPTIM_STATS - this has been a
  pseudo-option for years, I just added a bunch of stuff to it.

  One non-trivial change was to select a new thread before calling
  cpu_switch() in the first place. This allows us to catch the silly case of
  doing a cpu_switch() to the current process. This happens uncomfortably
  often. This simplifies a bit of the asm code in cpu_switch (no longer have
  to call choosethread() in the middle). This has been implemented on i386 and
  (thanks to jake) sparc64. The others will come soon. This is actually
  separate from the lazy switch stuff.

  Glanced at by: jake, jhb
  Notes: svn path=/head/; revision=112993
* Jeff Roberson, 2003-04-01 (1 file, -3/+7):
  - Borrow the KSE single threading code for exec and exit. We use the check
    if (p->p_numthreads > 1) and not a flag because action is only necessary
    if there are other threads. The rest of the system has no need to identify
    thr threaded processes.
  - In kern_thread.c use thr_exit1() instead of thread_exit() if P_THREADED is
    not set.
  Notes: svn path=/head/; revision=112910
* Jeff Roberson, 2003-03-31 (1 file, -4/+11):
  - Move p->p_sigmask to td->td_sigmask. Signal masks will be per thread with
    a follow-on commit to kern_sig.c.
  - signotify() now operates on a thread since unmasked pending signals are
    stored in the thread.
  - PS_NEEDSIGCHK moves to TDF_NEEDSIGCHK.
  Notes: svn path=/head/; revision=112888
* John Baldwin, 2003-03-28 (1 file, -1/+1):
  Check for the PS_NEEDSIGCHK flag in the right flags field.
  Notes: svn path=/head/; revision=112750
* David Xu, 2003-03-19 (1 file, -24/+45):
  Adjust code for userland preemption. Userland can set a quantum in the
  kse_mailbox to schedule an upcall; this is useful for userland timeout
  routines, for example pthread_cond_timedwait(). Also extract the upcall
  scheduling code from kse_reassign and create a new function called
  thread_switchout to hold this code.
  Reviewed by: julian
  Notes: svn path=/head/; revision=112397
* David Xu, 2003-03-14 (1 file, -1/+8):
  Export the current time when returning from a syscall that never blocked.
  Notes: svn path=/head/; revision=112222
* David Xu, 2003-03-11 (1 file, -0/+2):
  Lock proc lock before changing p_flag.
  Notes: svn path=/head/; revision=112078
* David Xu, 2003-03-11 (1 file, -6/+4):
  Fix a signal delivery bug for threaded processes.
  Notes: svn path=/head/; revision=112077
* David Xu, 2003-03-11 (1 file, -28/+16):
  Fix threaded process job control bug. SMP tested.
  Reviewed by: julian
  Notes: svn path=/head/; revision=112071