path: root/sys/vm/uma_int.h
Commit message | Author | Date | Files | Lines
* vm: clean up empty lines in .c and .h files | Mateusz Guzik | 2020-09-01 | 1 | -2/+2
* memstat_kvm_uma: fix reading of uma_zone_domain structures | Eric van Gyzen | 2020-08-28 | 1 | -4/+4
* Clarify some language. Favor primary where both master and primary were | Jeff Roberson | 2020-06-20 | 1 | -2/+2
* Clean up uma_int.h a bit. | Mark Johnston | 2020-03-07 | 1 | -25/+7
* Use per-domain locks for the bucket cache. | Jeff Roberson | 2020-02-19 | 1 | -43/+44
* Reduce lock hold time in keg_drain(). | Mark Johnston | 2020-02-11 | 1 | -1/+2
* uma: remove UMA_ZFLAG_CACHEONLY flag | Ryan Libby | 2020-02-06 | 1 | -3/+1
* uma: add UMA_ZONE_CONTIG, and a default contig_alloc | Ryan Libby | 2020-02-04 | 1 | -0/+1
* Use STAILQ instead of TAILQ for bucket lists. We only need FIFO behavior | Jeff Roberson | 2020-02-04 | 1 | -2/+2
* Implement a safe memory reclamation feature that is tightly coupled with UMA. | Jeff Roberson | 2020-01-31 | 1 | -4/+6
* uma: split slabzone into two sizes | Ryan Libby | 2020-01-14 | 1 | -10/+11
* uma: unify layout paths and improve efficiency | Ryan Libby | 2020-01-09 | 1 | -0/+3
* uma: reorganize flags | Ryan Libby | 2020-01-09 | 1 | -45/+59
* Fix uma boot pages calculations on NUMA machines that also don't have | Jeff Roberson | 2020-01-06 | 1 | -4/+7
* Fix an assertion introduced in r356348. On architectures without | Jeff Roberson | 2020-01-04 | 1 | -2/+2
* UMA NUMA flag day. UMA_ZONE_NUMA was a source of confusion. Make the names | Jeff Roberson | 2020-01-04 | 1 | -2/+3
* Sort cross-domain frees into per-domain buckets before inserting these | Jeff Roberson | 2020-01-04 | 1 | -0/+9
* Use per-domain keg locks. This provides both a lock and separate space | Jeff Roberson | 2020-01-04 | 1 | -17/+19
* Use a separate lock for the zone and keg. This provides concurrency | Jeff Roberson | 2020-01-04 | 1 | -13/+16
* Use atomics for the zone limit and sleeper count. This relies on the | Jeff Roberson | 2020-01-04 | 1 | -7/+17
* Further reduce the cacheline footprint of fast allocations by duplicating | Jeff Roberson | 2019-12-25 | 1 | -0/+37
* Optimize fast path allocations by storing bucket headers in the per-cpu | Jeff Roberson | 2019-12-25 | 1 | -9/+27
* uma dbg: flexible size for slab debug bitset too | Ryan Libby | 2019-12-14 | 1 | -5/+19
* Revert r355706 & r355710 | Ryan Libby | 2019-12-13 | 1 | -17/+5
* uma dbg: flexible size for slab debug bitset too | Ryan Libby | 2019-12-13 | 1 | -5/+17
* uma: pretty print zone flags sysctl | Ryan Libby | 2019-12-11 | 1 | -0/+26
* Use a variant slab structure for offpage zones. This saves space in | Jeff Roberson | 2019-12-08 | 1 | -26/+61
* Use a precise bit count for the slab free items in UMA. This significantly | Jeff Roberson | 2019-12-02 | 1 | -19/+13
* Handle large mallocs by going directly to kmem. Taking a detour through | Jeff Roberson | 2019-11-29 | 1 | -10/+20
* Garbage collect the mostly unused us_keg field. Use appropriately named | Jeff Roberson | 2019-11-28 | 1 | -5/+15
* Implement a sysctl tree for uma zones to assist in debugging and provide | Jeff Roberson | 2019-11-28 | 1 | -4/+7
* uma: trash memory when ctor/dtor supplied too | Ryan Libby | 2019-11-27 | 1 | -0/+1
* Extend uma_reclaim() to permit different reclamation targets. | Mark Johnston | 2019-09-01 | 1 | -3/+5
* Add two new kernel options to control memory locality on NUMA hardware. | Jeff Roberson | 2019-08-06 | 1 | -0/+2
* UMA: unsign some variables related to allocation in hash_alloc(). | Pedro F. Giffuni | 2019-02-12 | 1 | -3/+3
* Now that there is only one way to allocate a slab, remove uz_slab method. | Gleb Smirnoff | 2019-02-07 | 1 | -2/+1
* Whitespace. | Gleb Smirnoff | 2019-01-16 | 1 | -1/+1
* Fix compilation failures on different arches that have vm_machdep.c not | Gleb Smirnoff | 2019-01-15 | 1 | -0/+1
* Make uz_allocs, uz_frees and uz_fails counter(9). This removes some | Gleb Smirnoff | 2019-01-15 | 1 | -3/+3
* o Move zone limit from keg level up to zone level. This means that now | Gleb Smirnoff | 2019-01-15 | 1 | -35/+31
* For not offpage zones the slab is placed at the end of page. Keg's uk_pgoff | Gleb Smirnoff | 2018-11-28 | 1 | -2/+10
* Add accounting to per-domain UMA full bucket caches. | Mark Johnston | 2018-11-13 | 1 | -1/+6
* Add an #include required after r339686. | Mark Johnston | 2018-10-24 | 1 | -0/+1
* Use a vm_domainset iterator in keg_fetch_slab(). | Mark Johnston | 2018-10-24 | 1 | -1/+1
* Either "free" or "allocated" is misleading here, since an item | Gleb Smirnoff | 2018-08-24 | 1 | -1/+1
* Fix comment. The actual meaning of ub_cnt is the opposite. | Gleb Smirnoff | 2018-08-23 | 1 | -1/+1
* Sort uma_zone fields according to 64 byte cache line with adjacent line | Jeff Roberson | 2018-06-23 | 1 | -21/+28
* Align UMA data to 128 byte cacheline size | Justin Hibbits | 2018-06-04 | 1 | -1/+1
* uma: increase alignment to 128 bytes on amd64 | Mateusz Guzik | 2018-05-11 | 1 | -1/+1
* Fix three miscalculations in amount of boot pages: | Gleb Smirnoff | 2018-02-07 | 1 | -0/+5