<feed xmlns='http://www.w3.org/2005/Atom'>
<title>src-test2/lib/libc/stdlib/malloc.c, branch release/7.0.0_cvs</title>
<subtitle>FreeBSD source tree</subtitle>
<id>https://cgit-dev.freebsd.org/src-test2/atom?h=release%2F7.0.0_cvs</id>
<link rel='self' href='https://cgit-dev.freebsd.org/src-test2/atom?h=release%2F7.0.0_cvs'/>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src-test2/'/>
<updated>2008-02-24T05:45:17Z</updated>
<entry>
<title>This commit was manufactured by cvs2svn to create tag</title>
<updated>2008-02-24T05:45:17Z</updated>
<author>
<name>cvs2svn</name>
<email>cvs2svn@FreeBSD.org</email>
</author>
<published>2008-02-24T05:45:17Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src-test2/commit/?id=a9c219fa3cec18ef9f30edec6fa106bf0e2d423d'/>
<id>urn:sha1:a9c219fa3cec18ef9f30edec6fa106bf0e2d423d</id>
<content type='text'>
'RELENG_7_0_0_RELEASE'.

This commit was manufactured to restore the state of the 7.0-RELEASE image.
</content>
</entry>
<entry>
<title>Turn on MALLOC_PRODUCTION, which turns off some features used for debugging</title>
<updated>2007-10-11T06:35:46Z</updated>
<author>
<name>Ken Smith</name>
<email>kensmith@FreeBSD.org</email>
</author>
<published>2007-10-11T06:35:46Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src-test2/commit/?id=419d5e2694db7763cd4601f6548e151209cecb14'/>
<id>urn:sha1:419d5e2694db7763cd4601f6548e151209cecb14</id>
<content type='text'>
support.
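
As a hedged sketch of the compile-time pattern being toggled (this
block is illustrative, not quoted from malloc.c), the debugging knobs
hang off the absence of MALLOC_PRODUCTION roughly like so:

	#ifndef MALLOC_PRODUCTION
	#  define MALLOC_DEBUG	/* extra assertions and sanity checks */
	#  define MALLOC_STATS	/* statistics gathering */
	#endif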

Reminded by:	kris
Approved by:	re (implicit)
</content>
</entry>
<entry>
<title>Fix junk/zero filling for realloc().  Junk filling was missing in one case,</title>
<updated>2007-06-15T22:00:16Z</updated>
<author>
<name>Jason Evans</name>
<email>jasone@FreeBSD.org</email>
</author>
<published>2007-06-15T22:00:16Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src-test2/commit/?id=76507741ab06b468ea2597570691523613602e15'/>
<id>urn:sha1:76507741ab06b468ea2597570691523613602e15</id>
<content type='text'>
and zero filling was broken in a way that could cause memory corruption.
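
A minimal sketch of the intended behavior (illustrative code, not the
committed fix; opt_junk and opt_zero follow the malloc.c option naming
convention):

	if (size &lt; oldsize) {
		/* Shrinking: junk the bytes being trimmed off. */
		if (opt_junk)
			memset((char *)ptr + size, 0x5a, oldsize - size);
	} else if (opt_zero) {
		/* Growing: zero the newly exposed tail. */
		memset((char *)ptr + oldsize, 0, size - oldsize);
	}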

Update comments.
</content>
</entry>
<entry>
<title>Use size_t instead of unsigned for pagesize-related values, in order to</title>
<updated>2007-03-29T21:07:17Z</updated>
<author>
<name>Jason Evans</name>
<email>jasone@FreeBSD.org</email>
</author>
<published>2007-03-29T21:07:17Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src-test2/commit/?id=d33f4690bac522d8b827990a5823d265452cfaac'/>
<id>urn:sha1:d33f4690bac522d8b827990a5823d265452cfaac</id>
<content type='text'>
avoid downcasting issues.  In particular, this change fixes
posix_memalign(3) for alignments greater than 2^31 on LP64 systems.
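
A minimal illustration of the downcast hazard (variable names are
hypothetical; on LP64, size_t is 64 bits while unsigned is 32):

	size_t alignment = (size_t)1 &lt;&lt; 32;	/* 4GB alignment */
	unsigned narrow = alignment;		/* silently truncates to 0 */
	size_t wide = alignment;		/* preserves all 64 bits */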

Make sure that NDEBUG is always set to be compatible with MALLOC_DEBUG. [1]

Reported by:	[1] Lee Hyo geol &lt;hyogeollee@gmail.com&gt;
</content>
</entry>
<entry>
<title>Remove the run promotion/demotion machinery.  Replace it with red-black</title>
<updated>2007-03-28T19:55:07Z</updated>
<author>
<name>Jason Evans</name>
<email>jasone@FreeBSD.org</email>
</author>
<published>2007-03-28T19:55:07Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src-test2/commit/?id=eaf8d732126bc0a25d8dd5921b91f47357aa38fb'/>
<id>urn:sha1:eaf8d732126bc0a25d8dd5921b91f47357aa38fb</id>
<content type='text'>
trees that track all non-full runs for each bin.  Use the red-black
trees to guarantee that each new allocation is placed at the lowest
address available in any non-full run.  This change completes the
transition to allocating from low addresses in order to reduce the
retention of sparsely used chunks.
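
A hedged sketch of the lookup this enables (tree(3)-style macros;
identifiers are illustrative, not the committed ones).  Runs are keyed
by address, so the tree minimum is the lowest-address non-full run:

	arena_run_t *run = RB_MIN(arena_run_tree, &amp;bin-&gt;runs);
	if (run == NULL)
		run = arena_run_new(arena, bin);	/* hypothetical: carve a fresh run */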

If the run in current use by a bin becomes empty, deallocate the run
rather than retaining it for later use.  The previous behavior had the
tendency to spread empty runs across multiple chunks, thus preventing
the release of chunks that were completely unused.

Generalize base_chunk_alloc() (and rename it to base_pages_alloc()) to
handle allocation sizes larger than the chunk size, so that it is
possible to support chunk sizes that are smaller than an arena object.

Reduce the minimum chunk size from 64kB to 8kB.

Optimize tracking of addresses for deleted chunks.

Fix a statistics bug for huge allocations.
</content>
</entry>
<entry>
<title>Fix some subtle bugs for posix_memalign() having to do with integer</title>
<updated>2007-03-24T20:44:06Z</updated>
<author>
<name>Jason Evans</name>
<email>jasone@FreeBSD.org</email>
</author>
<published>2007-03-24T20:44:06Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src-test2/commit/?id=12fbf47cfbc67df6e41ff6ede5724b59cc617565'/>
<id>urn:sha1:12fbf47cfbc67df6e41ff6ede5724b59cc617565</id>
<content type='text'>
rounding and overflow.  Carefully document what the various overflow
tests actually detect.
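
One representative test of this kind (illustrative, not quoted from
the commit): rounding size up to a multiple of a power-of-two
alignment must not wrap around SIZE_MAX:

	size_t ceil = (size + alignment - 1) &amp; ~(alignment - 1);
	if (ceil &lt; size) {
		/* size + alignment - 1 wrapped, so the rounded size is bogus. */
		return (ENOMEM);
	}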

The bugs mostly canceled out, such that the worst possible failure
cases resulted in non-fatal over-allocations.
</content>
</entry>
<entry>
<title>Fix posix_memalign() for large objects.  Now that runs are extents rather</title>
<updated>2007-03-23T22:58:15Z</updated>
<author>
<name>Jason Evans</name>
<email>jasone@FreeBSD.org</email>
</author>
<published>2007-03-23T22:58:15Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src-test2/commit/?id=e3da012f00f4ebd14b437ccf9639ffb38ba76065'/>
<id>urn:sha1:e3da012f00f4ebd14b437ccf9639ffb38ba76065</id>
<content type='text'>
than binary buddies, the alignment guarantees are weaker, which requires
a more complex aligned allocation algorithm, similar to that used for
alignment greater than the chunk size.
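
A hedged sketch of the core step of such an algorithm (illustrative,
not the committed code): allocate an over-sized run, advance to the
first aligned address within it, and return the unused leading and
trailing pages to the arena:

	uintptr_t base = (uintptr_t)run;	/* start of the over-sized run */
	uintptr_t ret = (base + alignment - 1) &amp; ~((uintptr_t)alignment - 1);
	/* Pages in [base, ret) and beyond ret + size go back to the arena. */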

Reported by:	matteo
</content>
</entry>
<entry>
<title>Use extents rather than binary buddies to track free pages within</title>
<updated>2007-03-23T05:05:48Z</updated>
<author>
<name>Jason Evans</name>
<email>jasone@FreeBSD.org</email>
</author>
<published>2007-03-23T05:05:48Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src-test2/commit/?id=bb99793a2b6bbae919a594718be9435a04e1b55f'/>
<id>urn:sha1:bb99793a2b6bbae919a594718be9435a04e1b55f</id>
<content type='text'>
chunks.  This allows runs to be any multiple of the page size.  The
primary advantage is that large objects are no longer constrained to be
2^n pages, which can dramatically decrease internal fragmentation for
large objects.  This also allows the sizes for runs that back small
objects to be more finely tuned.

Free runs are searched for linearly using the chunk page map (with the
help of some heuristic optimizations).  This changes the allocation
policy from "first best fit" to "first fit".  A prototype red-black tree
implementation for tracking free runs that implemented "first best fit"
did not cause a measurable speed or memory usage difference for
realistic chunk sizes (though of course it is possible to construct
benchmarks that favor one allocation policy over another).
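
A minimal sketch of such a first-fit scan (helper and variable names
are illustrative, not the committed ones):

	for (i = chunk_header_npages; i &lt; chunk_npages; i++) {
		if (map_is_free(chunk, i) &amp;&amp;
		    free_run_length(chunk, i) &gt;= need_pages)
			return (run_at(chunk, i));	/* first == lowest fit */
	}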

Refine the handling of fullness constraints for small runs to be more
tunable.

Restructure the per-chunk page map to contain only two fields per
entry, rather than four.  Also, increase each entry from 4 to 8 bytes,
since this allows the use of 32-bit integers without increasing the
number of chunk header pages.
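
An illustrative entry layout consistent with that description (field
names are assumptions, not quoted from the commit):

	typedef struct {
		uint32_t npages;	/* length of the run, kept at its head */
		uint32_t pos;		/* this page's position within its run */
	} arena_chunk_map_t;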

Relax the maximum chunk size constraint.  This is of no practical
interest; it is merely fallout from the chunk page map restructuring.

Revamp statistics gathering and reporting to be faster, clearer and more
informative.  Statistics gathering is fast enough now to have little
to no impact on application speed, but it still requires approximately
two extra pages of memory per arena (per process).  This memory overhead
may be acceptable for most systems, but we still need to leave
statistics gathering disabled by default in RELENG branches.

Rename NO_MALLOC_EXTRAS to MALLOC_PRODUCTION in order to make its intent
clearer (i.e. it should be defined in RELENG branches).
</content>
</entry>
<entry>
<title>Avoid using vsnprintf(3) unless MALLOC_STATS is defined, in order to</title>
<updated>2007-03-20T03:44:10Z</updated>
<author>
<name>Jason Evans</name>
<email>jasone@FreeBSD.org</email>
</author>
<published>2007-03-20T03:44:10Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src-test2/commit/?id=c9f0c8fd74587f08532791c75ebbb097be7a897f'/>
<id>urn:sha1:c9f0c8fd74587f08532791c75ebbb097be7a897f</id>
<content type='text'>
avoid substantial potential bloat for static binaries that do not
otherwise use any printf(3)-family functions. [1]

Rearrange arena_run_t so that the region bitmask can be minimally sized
according to constraints related to each bin's size class.  Previously,
the region bitmask was the same size for all run headers, which wasted
a measurable amount of memory.

Rather than making runs for small objects as large as possible, make
runs as small as possible such that header overhead stays below a
certain bound.  There are two exceptions that override the header
overhead bound:

	1) If the bound is impossible to honor, it is relaxed on a
	   per-size-class basis.  Since there is one bit of header
	   overhead per object (plus a constant), it is impossible to
	   achieve a header overhead less than or equal to 1/(# of bits
	   per object).  For the current setting of maximum 0.5% header
	   overhead, this relaxation comes into play for {2, 4, 8,
	   16}-byte objects, for which header overhead is (on 64-bit
	   systems) {7.1, 4.3, 2.2, 1.2}%, respectively.

	2) There is still a cap on small run size, still set to 64kB.
	   This comes into play for {1024, 2048}-byte objects, for which
	   header overhead is {1.6, 3.1}%, respectively.
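
As a worked check of the figures in item 1): one bit of header per
object implies a relative overhead of at least 1/(8*size) for
size-byte objects, i.e. floors of 6.25%, 3.13%, 1.56% and 0.78% for
2-, 4-, 8- and 16-byte objects; the {7.1, 4.3, 2.2, 1.2}% figures
above sit somewhat higher because of the constant per-run header cost.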

In practice, this reduces the run sizes, which makes worst-case
low-water memory usage due to fragmentation less severe.  It also
reduces worst-case high-water run fragmentation due to non-full runs,
but this is only a constant improvement (most important to small,
short-lived processes).

Reduce the default chunk size from 2MB to 1MB.  Benchmarks indicate that
the external fragmentation reduction makes 1MB the new sweet spot (as
small as possible without adversely affecting performance).

Reported by:	[1] kientzle
</content>
</entry>
<entry>
<title>Modify chunk_alloc() to prefer mmap()ed memory over sbrk()ed memory.</title>
<updated>2007-02-22T19:10:30Z</updated>
<author>
<name>Jason Evans</name>
<email>jasone@FreeBSD.org</email>
</author>
<published>2007-02-22T19:10:30Z</published>
<link rel='alternate' type='text/html' href='https://cgit-dev.freebsd.org/src-test2/commit/?id=a326064e24cae50002d8299462c86b3facd8558a'/>
<id>urn:sha1:a326064e24cae50002d8299462c86b3facd8558a</id>
<content type='text'>
This has no impact unless USE_BRK is defined (32-bit platforms), in
which case user allocations are allocated via mmap() if at all possible,
in order to avoid the possibility of unreclaimable chunks in the data
segment.
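
A hedged sketch of the resulting preference order (helper names are
hypothetical, not the committed ones):

	void *chunk = chunk_alloc_mmap(size);	/* try mmap() first */
	#ifdef USE_BRK
	if (chunk == NULL)
		chunk = chunk_alloc_brk(size);	/* fall back to sbrk()ed memory */
	#endif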

Fix an obscure bug in base_alloc() that could have allowed undefined
behavior if an application were to use sbrk() in conjunction with a
USE_BRK-enabled malloc.
</content>
</entry>
</feed>
