path: root/lib/libc/stdlib/malloc.c
* MFC r185514 (by jasone): (Konstantin Belousov, 2009-05-03; 1 file changed, -11/+37)
  Fix a lock order reversal bug that could cause deadlock during fork(2).
  Reported and tested by: makc
  Approved by: re (kensmith)
  Notes: svn path=/stable/7/; revision=191767

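For context: the usual pattern for making a locked allocator fork-safe is a set of pthread_atfork(3) handlers that acquire every allocator lock in one fixed order before fork(2) and release them in both parent and child afterward. A minimal sketch with hypothetical lock names (not the actual ones in malloc.c):

```c
#include <pthread.h>

/* Hypothetical allocator locks; malloc.c's real names and count differ. */
static pthread_mutex_t arenas_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t chunks_lock = PTHREAD_MUTEX_INITIALIZER;

static void
malloc_prefork(void)
{
	/* Always lock in the same order, to avoid lock order reversals. */
	pthread_mutex_lock(&arenas_lock);
	pthread_mutex_lock(&chunks_lock);
}

static void
malloc_postfork(void)
{
	/* Unlock in reverse order, in both the parent and the child. */
	pthread_mutex_unlock(&chunks_lock);
	pthread_mutex_unlock(&arenas_lock);
}

static void
malloc_fork_init(void)
{
	pthread_atfork(malloc_prefork, malloc_postfork, malloc_postfork);
}
```
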
* Mostly synchronize lib/libthr and sys/kern/kern_umtx.c with the code from HEAD. (Konstantin Belousov, 2009-03-24; 1 file changed, -0/+11)
  Since libkse is still built on RELENG_7, pthread_cleanup_push/pop are left as functions, but the support code for the macro versions is present in libthr. Malloc in RELENG_7 does not require a thread exit hook, but I decided to add an empty handler for it instead of removing the callback from thr_exit(). No mergeinfo, since this change was prepared by patching libthr and then bringing in the required missing bits.
  Requested by: bms
  Reviewed by: davidxu
  Tested by: bms, Mykola Dzham <i levsha org ua>
  Approved by: re (kensmith)
  Notes: svn path=/stable/7/; revision=190393

* MFC: (Jason Evans, 2008-11-10; 1 file changed, -12/+17)
  Revert to preferring mmap(2) over sbrk(2) when mapping memory, due to potentially extreme contention in the kernel for multi-threaded applications on SMP systems.
  Approved by: re (kib)
  Notes: svn path=/stable/7/; revision=184819

* MFC allocator improvements and fixes: (Jason Evans, 2008-08-16; 1 file changed, -400/+341)
  * Enhance the chunk map to support run coalescing, and substantially reduce the number of red-black tree operations performed.
  * Remove unused code.
  * Fix arena_run_reg_dalloc() to use the entire precomputed division table.
  * Improve lock preemption performance for hyperthreaded CPUs.
  Notes: svn path=/stable/7/; revision=181788

* MFC allocator improvements and fixes: (Jason Evans, 2008-06-16; 1 file changed, -161/+208)
  * Implement more compact red-black trees, thus reducing memory usage by ~0.5-1%.
  * Add a separate tree to track dirty-page-containing chunks, thus improving worst case allocation performance.
  * Fix a deadlock in base_alloc() for the error (OOM) path.
  * Catch integer overflow for huge allocations when using sbrk(2).
  * Fix bit vector initialization for run headers. This fix has no practical impact for correct programs. Incorrect programs will potentially experience allocation failures rather than memory corruption, both of which are "undefined behavior".
  Notes: svn path=/stable/7/; revision=179836

* MFC: Merge malloc(3) improvements and fixes. The highlights are: (Jason Evans, 2008-03-07; 1 file changed, -1224/+2166)
  * Avoid re-zeroing memory in calloc() when possible.
  * Use pthread mutexes where possible instead of libc "spinlocks", and actually spin some during contention before blocking.
  * Implement dynamic load balancing of thread-->arena mapping.
  * Avoid floating point math in order to avoid increased context switch overhead for applications that otherwise would not use floating point math.
  * Restructure how sbrk() and mmap() are used to acquire memory mappings. This provides a way to force malloc to only use sbrk(), which can be useful in the context of resource limits.
  * Reduce the number of mmap() calls typically necessary when allocating a chunk.
  * Track dirty unused pages so that they can be purged if they exceed a threshold.
  * Try to realloc() large objects in place.
  * Manage page runs with trees instead of chunk maps, which allows logarithmic-time run allocation.
  Notes: svn path=/head/; revision=176922

* Turn on MALLOC_PRODUCTION, which turns off some stuff used for debugging support. (Ken Smith, 2007-10-11; 1 file changed, -1/+1)
  Reminded by: kris
  Approved by: re (implicit)
  Notes: svn path=/stable/7/; revision=172538

* Fix junk/zero filling for realloc(). (Jason Evans, 2007-06-15; 1 file changed, -36/+48)
  Junk filling was missing in one case, and zero filling was broken in a way that could cause memory corruption. Update comments.
  Notes: svn path=/head/; revision=170796

* Use size_t instead of unsigned for pagesize-related values, in order to avoid downcasting issues. (Jason Evans, 2007-03-29; 1 file changed, -4/+8)
  In particular, this change fixes posix_memalign(3) for alignments greater than 2^31 on LP64 systems.
  Make sure that NDEBUG is always set to be compatible with MALLOC_DEBUG. [1]
  Reported by: [1] Lee Hyo geol <hyogeollee@gmail.com>
  Notes: svn path=/head/; revision=168029

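To see the downcasting problem this fixes, compare the usual round-up-to-alignment computation done in 32-bit unsigned arithmetic versus size_t arithmetic on an LP64 system. Illustrative code, not taken from malloc.c:

```c
#include <stddef.h>
#include <stdio.h>

int
main(void)
{
	size_t size = (size_t)3 << 30;		/* 3 GiB request. */
	size_t alignment = (size_t)1 << 31;	/* 2^31-byte alignment. */

	/* Broken: the intermediate sum wraps modulo 2^32. */
	unsigned bad = ((unsigned)size + (unsigned)alignment - 1) &
	    ~((unsigned)alignment - 1);

	/* Correct: size_t is 64 bits on LP64, so nothing wraps. */
	size_t good = (size + alignment - 1) & ~(alignment - 1);

	printf("unsigned: %u, size_t: %zu\n", bad, good);	/* 0 vs 4294967296 */
	return (0);
}
```
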
* Remove the run promotion/demotion machinery. Replace it with red-black trees that track all non-full runs for each bin. (Jason Evans, 2007-03-28; 1 file changed, -430/+219)
  Use the red-black trees to be able to guarantee that each new allocation is placed in the lowest address available in any non-full run. This change completes the transition to allocating from low addresses in order to reduce the retention of sparsely used chunks.
  If the run in current use by a bin becomes empty, deallocate the run rather than retaining it for later use. The previous behavior had the tendency to spread empty runs across multiple chunks, thus preventing the release of chunks that were completely unused.
  Generalize base_chunk_alloc() (and rename it to base_pages_alloc()) to handle allocation sizes larger than the chunk size, so that it is possible to support chunk sizes that are smaller than an arena object.
  Reduce the minimum chunk size from 64kB to 8kB.
  Optimize tracking of addresses for deleted chunks.
  Fix a statistics bug for huge allocations.
  Notes: svn path=/head/; revision=168003

* Fix some subtle bugs for posix_memalign() having to do with integer rounding and overflow. (Jason Evans, 2007-03-24; 1 file changed, -18/+43)
  Carefully document what the various overflow tests actually detect. The bugs mostly canceled out, such that the worst possible failure cases resulted in non-fatal over-allocations.
  Notes: svn path=/head/; revision=167872

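For illustration, a hedged sketch of the kind of overflow test aligned allocation needs: rounding a size up to a power-of-two alignment can wrap past SIZE_MAX, and the wrap has to be detected explicitly. The actual tests in malloc.c are structured differently:

```c
#include <stddef.h>

/*
 * Round size up to a multiple of alignment (a power of two), detecting
 * overflow.  Returns 0 on overflow, since no valid rounded size is 0.
 * Illustrative only.
 */
static size_t
round_up(size_t size, size_t alignment)
{
	size_t rounded = (size + alignment - 1) & ~(alignment - 1);

	if (rounded < size)	/* (size + alignment - 1) wrapped past 0. */
		return (0);
	return (rounded);
}
```
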
* Fix posix_memalign() for large objects. (Jason Evans, 2007-03-23; 1 file changed, -151/+297)
  Now that runs are extents rather than binary buddies, the alignment guarantees are weaker, which requires a more complex aligned allocation algorithm, similar to that used for alignment greater than the chunk size.
  Reported by: matteo
  Notes: svn path=/head/; revision=167853

* Use extents rather than binary buddies to track free pages within chunks. (Jason Evans, 2007-03-23; 1 file changed, -323/+332)
  This allows runs to be any multiple of the page size. The primary advantage is that large objects are no longer constrained to be 2^n pages, which can dramatically decrease internal fragmentation for large objects. This also allows the sizes for runs that back small objects to be more finely tuned.
  Free runs are searched for linearly using the chunk page map (with the help of some heuristic optimizations). This changes the allocation policy from "first best fit" to "first fit". A prototype red-black tree implementation for tracking free runs that implemented "first best fit" did not cause a measurable speed or memory usage difference for realistic chunk sizes (though of course it is possible to construct benchmarks that favor one allocation policy over another).
  Refine the handling of fullness constraints for small runs to be more tunable.
  Restructure the per chunk page map to contain only two fields per entry, rather than four. Also, increase each entry from 4 to 8 bytes, since it allows for 32-bit integers, without increasing the number of chunk header pages.
  Relax the maximum chunk size constraint. This is of no practical interest; it is merely fallout from the chunk page map restructuring.
  Revamp statistics gathering and reporting to be faster, clearer and more informative. Statistics gathering is fast enough now to have little to no impact on application speed, but it still requires approximately two extra pages of memory per arena (per process). This memory overhead may be acceptable for most systems, but we still need to leave statistics gathering disabled by default in RELENG branches.
  Rename NO_MALLOC_EXTRAS to MALLOC_PRODUCTION in order to make its intent clearer (i.e. it should be defined in RELENG branches).
  Notes: svn path=/head/; revision=167828

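A sketch of what a two-field, 8-byte page map entry might look like; the field names and the "free" encoding here are assumptions for illustration, not the actual malloc.c layout:

```c
#include <stdint.h>

/*
 * Sketch of a two-field chunk page map entry, 8 bytes total so that
 * each field can be a 32-bit integer without growing the number of
 * chunk header pages.  Illustrative only.
 */
typedef struct arena_chunk_map_s {
	uint32_t	npages;	/* Number of pages in this run. */
	uint32_t	pos;	/* Page's position within the run, or a
				 * distinguished "free" marker value. */
} arena_chunk_map_t;
```
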
* Avoid using vsnprintf(3) unless MALLOC_STATS is defined, in order to avoid substantial potential bloat for static binaries that do not otherwise use any printf(3)-family functions. [1] (Jason Evans, 2007-03-20; 1 file changed, -152/+233)
  Rearrange arena_run_t so that the region bitmask can be minimally sized according to constraints related to each bin's size class. Previously, the region bitmask was the same size for all run headers, which wasted a measurable amount of memory.
  Rather than making runs for small objects as large as possible, make runs as small as possible such that header overhead stays below a certain bound. There are two exceptions that override the header overhead bound:
  1) If the bound is impossible to honor, it is relaxed on a per-size-class basis. Since there is one bit of header overhead per object (plus a constant), it is impossible to achieve a header overhead less than or equal to 1/(# of bits per object). For the current setting of maximum 0.5% header overhead, this relaxation comes into play for {2, 4, 8, 16}-byte objects, for which header overhead is (on 64-bit systems) {7.1, 4.3, 2.2, 1.2}%, respectively.
  2) There is still a cap on small run size, still set to 64kB. This comes into play for {1024, 2048}-byte objects, for which header overhead is {1.6, 3.1}%, respectively.
  In practice, this reduces the run sizes, which makes worst case low-water memory usage due to fragmentation less bad. It also reduces worst case high-water run fragmentation due to non-full runs, but this is only a constant improvement (most important to small short-lived processes).
  Reduce the default chunk size from 2MB to 1MB. Benchmarks indicate that the external fragmentation reduction makes 1MB the new sweet spot (as small as possible without adversely affecting performance).
  Reported by: [1] kientzle
  Notes: svn path=/head/; revision=167733

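A rough sketch of computing the smallest run size that keeps header overhead (fixed header plus one bitmask bit per region) under the 0.5% bound, honoring the 64kB cap. The fixed header size and loop structure are assumptions, not the actual malloc.c computation:

```c
#include <stddef.h>

#define PAGE_SIZE	4096
#define RUN_MAX_SIZE	(64 * 1024)	/* Cap on small run size. */
#define RUN_FIXED_HDR	48		/* Assumed constant header bytes. */

/*
 * Return the smallest page-multiple run size whose header overhead is
 * <= 0.5% for regions of reg_size bytes; size classes for which the
 * bound is unachievable simply hit the 64kB cap.  Illustrative only.
 */
static size_t
run_size_for(size_t reg_size)
{
	size_t run_size;

	for (run_size = PAGE_SIZE; run_size < RUN_MAX_SIZE;
	    run_size += PAGE_SIZE) {
		size_t nregs = (run_size - RUN_FIXED_HDR) / reg_size;
		size_t hdr = RUN_FIXED_HDR + (nregs + 7) / 8;

		/* overhead <= 0.5%  <=>  hdr * 200 <= run_size */
		if (hdr * 200 <= run_size)
			break;
	}
	return (run_size);
}
```
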
* Modify chunk_alloc() to prefer mmap()ed memory over sbrk()ed memory. (Jason Evans, 2007-02-22; 1 file changed, -36/+40)
  This has no impact unless USE_BRK is defined (32-bit platforms), in which case user allocations are allocated via mmap() if at all possible, in order to avoid the possibility of unreclaimable chunks in the data segment.
  Fix an obscure bug in base_alloc() that could have allowed undefined behavior if an application were to use sbrk() in conjunction with a USE_BRK-enabled malloc.
  Notes: svn path=/head/; revision=166890

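Schematically, the new preference looks like the sketch below; the real chunk_alloc() also handles chunk alignment and bookkeeping, so this is only an outline:

```c
#include <sys/mman.h>
#include <stddef.h>
#include <stdint.h>
#include <unistd.h>

/* Simplified sketch: try mmap(2) first, fall back to sbrk(2). */
static void *
chunk_alloc(size_t size)
{
	void *ret;

	ret = mmap(NULL, size, PROT_READ | PROT_WRITE,
	    MAP_PRIVATE | MAP_ANON, -1, 0);
	if (ret != MAP_FAILED)
		return (ret);
#ifdef USE_BRK
	/* Last resort on 32-bit platforms: extend the data segment. */
	ret = sbrk((intptr_t)size);
	if (ret != (void *)-1)
		return (ret);
#endif
	return (NULL);
}
```
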
* Fix a utrace(2)-related bug in calloc(3). (Jason Evans, 2007-01-31; 1 file changed, -44/+56)
  Integrate various pedantic cleanups.
  Submitted by: Andrew Doran <ad@netbsd.org>
  Notes: svn path=/head/; revision=166375

* Implement chunk allocation/deallocation hysteresis by caching one spare chunk per arena, rather than immediately deallocating all unused chunks. (Jason Evans, 2006-12-23; 1 file changed, -51/+86)
  This fixes a potential performance issue when allocating/deallocating an object of size (4kB..1MB] in a loop.
  Reported by: davidxu
  Notes: svn path=/head/; revision=165473

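The caching pattern, reduced to a sketch with hypothetical type and function names (the allocation side would check the spare first, symmetrically):

```c
#include <stddef.h>

struct chunk_s;

/* Hypothetical arena type; only the one-spare caching pattern matters. */
typedef struct arena_s {
	struct chunk_s	*spare;	/* At most one cached spare chunk. */
} arena_t;

static void
chunk_dealloc(struct chunk_s *chunk)
{
	/* Would really release the chunk here (e.g. munmap()). */
	(void)chunk;
}

static void
arena_chunk_dealloc(arena_t *arena, struct chunk_s *chunk)
{
	if (arena->spare == NULL) {
		/*
		 * Cache one chunk instead of unmapping it; an alloc/free
		 * loop on a (4kB..1MB] object then reuses this chunk
		 * rather than mapping and unmapping one per iteration.
		 */
		arena->spare = chunk;
	} else
		chunk_dealloc(chunk);
}
```
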
* Change the way base allocation is done for internal malloc data structures, in order to avoid the possibility of attempted recursive lock acquisition for chunks_mtx. (Jason Evans, 2006-09-08; 1 file changed, -56/+93)
  Reported by: Slawa Olhovchenkov <slw@zxy.spb.ru>
  Notes: svn path=/head/; revision=162163

* Enable TLS on PowerPC. (Marcel Moolenaar, 2006-09-01; 1 file changed, -1/+0)
  Notes: svn path=/head/; revision=161831

* Enable TLS on ia64. (Marcel Moolenaar, 2006-09-01; 1 file changed, -1/+0)
  Notes: svn path=/head/; revision=161803

* Correctly handle the case in calloc(num, size) where (size_t)(num * size) == 0 but both num and size are nonzero. (Colin Percival, 2006-08-13; 1 file changed, -1/+1)
  Reported by: Ilja van Sprundel
  Approved by: jasone
  Security: Integer overflow; calloc was allocating 1 byte in response to a request for a multiple of 2^32 (or 2^64) bytes instead of returning NULL.
  Notes: svn path=/head/; revision=161263

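The standard guard against this class of bug checks that the product fits before multiplying; for example, num = size = 2^32 on a 64-bit system makes num * size wrap to exactly 0. A sketch (the literal test in malloc.c differs):

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of an overflow-safe calloc(); illustrative wrapper only. */
void *
safe_calloc(size_t num, size_t size)
{
	void *p;

	/* num * size wraps iff num > SIZE_MAX / size (for size != 0). */
	if (size != 0 && num > SIZE_MAX / size)
		return (NULL);
	p = malloc(num * size);
	if (p != NULL)
		memset(p, 0, num * size);
	return (p);
}
```
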
* Define NO_TLS on PowerPC. (Marcel Moolenaar, 2006-08-09; 1 file changed, -0/+1)
  See also: PR ia64/91846
  Notes: svn path=/head/; revision=161131

* Conditionally expand the size_invs lookup table in arena_run_reg_dalloc() so that architectures with a quantum of 8 (rather than 16) work. (Jason Evans, 2006-07-27; 1 file changed, -1/+12)
  Restore arm's quantum to 8.
  Submitted by: jmg
  Notes: svn path=/head/; revision=160761

* Use 4 as QUANTUM_2POW_MIN on arm, as on every other architecture, to avoid triggering an assertion later. (Olivier Houchard, 2006-07-27; 1 file changed, -1/+1)
  Notes: svn path=/head/; revision=160751

* Fix cpp logic in arena_malloc() to adjust size when assertions are enabled, even if stats gathering is disabled. [1] (Jason Evans, 2006-07-27; 1 file changed, -23/+19)
  Remove the 'size' parameter from several functions that do not use it.
  Reported by: [1] ache
  Notes: svn path=/head/; revision=160736

* Use some math tricks in arena_run_reg_dalloc() to avoid actual division, as well as avoiding a switch statement. (Jason Evans, 2006-07-01; 1 file changed, -83/+90)
  This change has no significant impact on performance when branch prediction is successful at predicting the sizes of objects passed to free(), but in the case that the object sizes are semi-random, this change has the potential to prevent many branch prediction misses, thus improving performance substantially.
  Take advantage of alignment guarantees in ipalloc(), and pad object sizes to something less than a power of two when possible. This has the potential to substantially reduce internal fragmentation for objects allocated via posix_memalign().
  Avoid an unnecessary pow2_ceil() call in arena_ralloc().
  Submitted by: djam8193ah@hotmail.com
  Notes: svn path=/head/; revision=160066

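The underlying trick replaces division by a constant s with multiplication by a precomputed "inverse" floor(2^SHIFT / s) + 1 followed by a right shift, which is exact for small enough dividends (diff < 2^SHIFT / size suffices). A hedged sketch; the actual size_invs table, shift constant, and fallback paths in malloc.c differ:

```c
#include <assert.h>
#include <stdint.h>

#define SIZE_INV_SHIFT	21
#define SIZE_INV(s)	(((1U << SIZE_INV_SHIFT) / (s)) + 1)

/*
 * regind = diff / size, computed without hardware division.  Exact
 * whenever diff < 2^21 / size; real code only applies this to small
 * size classes and falls back to shifts or true division otherwise,
 * which also keeps the multiply within 32 bits.
 */
static unsigned
region_index(unsigned diff, unsigned size)
{
	unsigned regind;

	assert(diff < (1U << SIZE_INV_SHIFT) / size);
	regind = (unsigned)(((uint64_t)diff * SIZE_INV(size)) >>
	    SIZE_INV_SHIFT);
	assert(regind == diff / size);
	return (regind);
}
```
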
* Make the behavior of malloc(0) standards-compliant by getting rid of nil, and instead creating a small allocation for each malloc(0) call. (Jason Evans, 2006-06-30; 1 file changed, -48/+46)
  The optional SysV compatibility behavior remains unchanged.
  Add a couple of assertions.
  Fix a couple of typos in error message strings.
  Notes: svn path=/head/; revision=160055

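The two standards-permitted behaviors, reduced to a sketch; the option flag and function names here are illustrative, not malloc.c's:

```c
#include <stddef.h>
#include <stdlib.h>

static int opt_sysv = 0;	/* Assumed SysV-compatibility toggle. */

/* Sketch: malloc(0) returns a distinct minimal allocation by default,
 * or NULL when SysV compatibility is requested. */
void *
my_malloc(size_t size)
{
	if (size == 0) {
		if (opt_sysv)
			return (NULL);	/* SysV: malloc(0) yields NULL. */
		size = 1;		/* Default: unique small object. */
	}
	return (malloc(size));
}
```
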
* Add a missing case for the switch statement in arena_run_reg_dalloc(). [1] (Jason Evans, 2006-06-20; 1 file changed, -8/+20)
  Fix a leak in chunk_dealloc(). [2]
  Reported by: [1] djam8193ah@hotmail.com, [2] Ville-Pertti Keinonen <will@exomi.com>
  Notes: svn path=/head/; revision=159798

* Increase the minimum chunk size by a power of two (32kB --> 64kB, assuming 4kB pages), in order to avoid dangerous rounding error when calculating fullness limits during run promotion/demotion. (Jason Evans, 2006-05-10; 1 file changed, -2/+2)
  Convert a structure bitfield to a normal field in arena_run_t. This should have been changed along with the other fields in revision 1.120.
  Notes: svn path=/head/; revision=158383

* Change the semantics of brk_max to dynamically deal with data segment bounds. [1] (Jason Evans, 2006-04-27; 1 file changed, -71/+83)
  Modify logic for utilizing the data segment, such that it is possible to create huge allocations there.
  Shrink the data segment when deallocating a chunk, if it is at the end of the data segment.
  Rename chunk_size to csize in huge_malloc(), in order to avoid masking a static variable of the same name. [1]
  Reported by: Paul Allen <nospam@ugcs.caltech.edu>
  Notes: svn path=/head/; revision=158062

* Add an unreachable return statement, in order to avoid a compiler warning for non-standard optimization levels. (Jason Evans, 2006-04-05; 1 file changed, -0/+1)
  Reported by: Michael Zach <zach@webges.com>
  Notes: svn path=/head/; revision=157539

* Only initialize the first per-chunk page map element for free runs. (Jason Evans, 2006-04-05; 1 file changed, -31/+16)
  This makes run split/coalesce operations of complexity lg(n) rather than n.
  Notes: svn path=/head/; revision=157532

* Add init_lock, and use it to protect against allocator initialization races. (Jason Evans, 2006-04-04; 1 file changed, -8/+21)
  This isn't currently necessary for libpthread or libthr, but without it external threads libraries like the linuxthreads port are not safe to use.
  Reported by: ganbold@micom.mng.net
  Notes: svn path=/head/; revision=157498

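The pattern is lock-protected lazy initialization with an unlocked fast path, roughly as below. Names are illustrative, and the unlocked flag check mirrors the style of the era; modern code would use atomics for it:

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;
static bool malloc_initialized = false;

static void
malloc_init_hard(void)
{
	/* Arena, chunk, and option setup would happen here, once. */
}

static void
malloc_init(void)
{
	/* Unlocked fast path; only early callers ever take the lock. */
	if (malloc_initialized)
		return;
	pthread_mutex_lock(&init_lock);
	if (!malloc_initialized) {	/* Re-check under the lock. */
		malloc_init_hard();
		malloc_initialized = true;
	}
	pthread_mutex_unlock(&init_lock);
}
```
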
* Refactor per-run bitmap manipulation functions so that bitmap offsets only have to be calculated once per allocator operation. (Jason Evans, 2006-04-04; 1 file changed, -69/+131)
  Make nil const.
  Update various comments.
  Remove/avoid division where possible. For the one division operation that remains in the critical path, add a switch statement that has a case for each small size class, and do division with a constant divisor in each case. This allows the compiler to generate optimized code that does not use hardware division. [1]
  Obtained from: peter [1]
  Notes: svn path=/head/; revision=157463

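The constant-divisor switch lets the compiler strength-reduce each case into multiply/shift sequences at compile time. A sketch with illustrative size classes (not the actual set in malloc.c):

```c
/*
 * Division via a switch over constant divisors.  Each case has a
 * compile-time-constant divisor, so the compiler emits shifts or
 * multiply/shift sequences instead of a hardware divide.
 */
static unsigned
region_index(unsigned diff, unsigned size)
{
	switch (size) {
	case 16:	return (diff / 16);	/* Becomes a shift. */
	case 32:	return (diff / 32);
	case 48:	return (diff / 48);	/* Multiply + shift. */
	case 64:	return (diff / 64);
	/* ... one case per small size class ... */
	default:	return (diff / size);	/* Hardware division. */
	}
}
```
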
* Optimize runtime performance, primarily using the following techniques: (Jason Evans, 2006-03-30; 1 file changed, -285/+294)
  * Avoid choosing an arena until it's certain that an arena is needed for allocation.
  * Convert division/multiplication to bitshifting where possible.
  * Avoid accessing TLS variables in single-threaded code.
  * Reduce the amount of pointer dereferencing.
  * Move lock acquisition in critical paths to only protect the code that requires synchronization, and completely remove locking where possible.
  Notes: svn path=/head/; revision=157310

* Add malloc_usable_size(3). (Jason Evans, 2006-03-28; 1 file changed, -0/+20)
  Discussed with: arch@
  Notes: svn path=/head/; revision=157236

* Allow the 'n' option to decrease the number of arenas below the default, to as little as one arena. (Jason Evans, 2006-03-26; 1 file changed, -2/+16)
  Also, limit the number of arenas to avoid a potential invariant violation in base_alloc().
  Notes: svn path=/head/; revision=157162

* Add comments and reformat/rearrange code. (Jason Evans, 2006-03-26; 1 file changed, -208/+224)
  There are no significant functional changes in this commit.
  Notes: svn path=/head/; revision=157161

* Convert TINY_MIN_2POW from a cpp macro to tiny_min_2pow (a variable), and determine its value at run time according to other relevant values. (Jason Evans, 2006-03-24; 1 file changed, -21/+37)
  This avoids the creation of runs that are incompletely utilized, as long as pagesize isn't too large (>32kB, given the current RUN_MIN_REGS_2POW setting).
  Increase the size of several structure bitfields in arena_run_t in order to avoid integer overflow in the case that a run's header does not overlap with the space that is usable as application allocation regions. Given the tiny_min_2pow change, this fix has no additional impact unless pagesize is >32kB.
  Reported by: kris
  Notes: svn path=/head/; revision=157106

* Add USE_BRK-specific code in malloc_init_hard() to allow the first internally used chunk to start at the beginning of the heap, rather than at a chunk-aligned address. (Jason Evans, 2006-03-24; 1 file changed, -65/+110)
  This reduces mapped memory somewhat for 32-bit architectures.
  Add the arena_run_link_t type and use it wherever a run object is only used as a ring 'header'. This saves approximately 40 kB of memory per arena.
  Remove an obsolete (no longer used) code path from base_alloc(), which supported the internal allocation of objects larger than the chunk size.
  Enhance chunk_dealloc() to cache chunk addresses for all deallocated chunks. This has no impact for most programs, but has the potential to reduce VM map fragmentation for programs that use huge allocations.
  Notes: svn path=/head/; revision=157070

* Separate completely full runs from runs that are merely almost full, so that no linear searching is necessary if we resort to allocating from a run that is known to be mostly full. (Jason Evans, 2006-03-20; 1 file changed, -61/+71)
  There are pathological edge cases that could have caused severely degraded performance, and this change fixes that.
  Notes: svn path=/head/; revision=156902

* Optimize realloc() to reallocate in place if the old and new sizes are close enough to each other that reallocation would allocate a new region of the same size. (Jason Evans, 2006-03-19; 1 file changed, -105/+167)
  This improves the performance of repeated incremental reallocations by up to three orders of magnitude. [1]
  Fix arena_new() to properly constrain run size if a small chunk size was specified during runtime configuration.
  Suggested by: se [1]
  Notes: svn path=/head/; revision=156890

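The shortcut, as a sketch: if the old and new sizes fall in the same size class, the existing region already fits and can be returned unchanged. Here old_size is passed explicitly and size_class() is a toy stand-in, whereas the real allocator derives both from its own metadata:

```c
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Toy size-class function: round up to a power of two (assumed). */
static size_t
size_class(size_t size)
{
	size_t c = 16;

	while (c < size)
		c <<= 1;
	return (c);
}

void *
my_realloc(void *ptr, size_t old_size, size_t new_size)
{
	void *ret;

	/*
	 * Same size class: the existing region is already big enough,
	 * so a loop of small incremental reallocs costs almost nothing.
	 */
	if (ptr != NULL && size_class(old_size) == size_class(new_size))
		return (ptr);

	ret = malloc(new_size);
	if (ret != NULL && ptr != NULL) {
		memcpy(ret, ptr, old_size < new_size ? old_size : new_size);
		free(ptr);
	}
	return (ret);
}
```
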
* Modify allocation policy, in order to avoid excessive fragmentation for allocation patterns that involve a relatively even mixture of many different size classes. (Jason Evans, 2006-03-17; 1 file changed, -2453/+1018)
  Reduce the chunk size from 16 MB to 2 MB. Since chunks are now carved up using an address-ordered first best fit policy, VM map fragmentation is much less likely, which makes smaller chunks not as much of a risk. This reduces the virtual memory size of most applications.
  Remove redzones, since program buffer overruns are no longer as likely to corrupt malloc data structures.
  Remove the C MALLOC_OPTIONS flag, and add H and S.
  Notes: svn path=/head/; revision=156800

* Fix calculation of the number of arenas to use on multi-processor systems. (Jason Evans, 2006-02-04; 1 file changed, -1/+1)
  Notes: svn path=/head/; revision=155272

* Remove unwarranted uses of 'goto'. (Jason Evans, 2006-01-27; 1 file changed, -203/+153)
  Notes: svn path=/head/; revision=154890

* Add NO_MALLOC_EXTRAS, so that various extra features that can cause performance degradation can be disabled via something like the following in /etc/make.conf: (Jason Evans, 2006-01-27; 1 file changed, -3/+16)
  CFLAGS+=-DNO_MALLOC_EXTRAS
  Suggested by: deischen
  Notes: svn path=/head/; revision=154887

* Fix the type of a statistics counter (unsigned --> unsigned long). (Jason Evans, 2006-01-27; 1 file changed, -1/+1)
  Notes: svn path=/head/; revision=154886

* Clean up statistics gathering and printing. (Jason Evans, 2006-01-27; 1 file changed, -71/+64)
  Notes: svn path=/head/; revision=154882

* Optimize arena_bin_pop() to reduce the number of separator operations. (Jason Evans, 2006-01-26; 1 file changed, -13/+10)
  Remove the block of code that tries to use delayed regions in LIFO order, since from a policy perspective, it conflicts with LRU caching of newly coalesced regions in arena_undelay(). There are numerous policy alternatives, and it isn't readily obvious which (if any) is superior; this change at least has the virtue of being consistent with policy.
  Notes: svn path=/head/; revision=154853

* Remove a redundant variable assignment in arena_reg_frag_alloc(). (Jason Evans, 2006-01-25; 1 file changed, -1/+0)
  Notes: svn path=/head/; revision=154798