Arax - 8d09c51940345c86062e8ef2427c705ae66e5926
A Runtime Framework for Decoupling Applications from Heterogeneous Accelerators
malloc.c
1// GCOV_EXCL_START
2
3/*
4 * This is a version (aka dlmalloc) of malloc/free/realloc written by
5 * Doug Lea and released to the public domain, as explained at
6 * http://creativecommons.org/publicdomain/zero/1.0/ Send questions,
7 * comments, complaints, performance data, etc to dl@cs.oswego.edu
8 *
9 * Version 2.8.6 Wed Aug 29 06:57:58 2012 Doug Lea
10 * Note: There may be an updated version of this malloc obtainable at
11 * ftp://gee.cs.oswego.edu/pub/misc/malloc.c
12 * Check before installing!
13 *
14 * Quickstart
15 *
16 * This library is all in one file to simplify the most common usage:
17 * ftp it, compile it (-O3), and link it into another program. All of
18 * the compile-time options default to reasonable values for use on
19 * most platforms. You might later want to step through various
20 * compile-time and dynamic tuning options.
21 *
22 * For convenience, an include file for code using this malloc is at:
23 * ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.6.h
24 * You don't really need this .h file unless you call functions not
25 * defined in your system include files. The .h file contains only the
26 * excerpts from this file needed for using this malloc on ANSI C/C++
27 * systems, so long as you haven't changed compile-time options about
28 * naming and tuning parameters. If you do, then you can create your
29 * own malloc.h that does include all settings by cutting at the point
30 * indicated below. Note that you may already by default be using a C
31 * library containing a malloc that is based on some version of this
32 * malloc (for example in linux). You might still want to use the one
33 * in this file to customize settings or to avoid overheads associated
34 * with library versions.
35 *
36 * Vital statistics:
37 *
38 * Supported pointer/size_t representation: 4 or 8 bytes
39 * size_t MUST be an unsigned type of the same width as
40 * pointers. (If you are using an ancient system that declares
41 * size_t as a signed type, or need it to be a different width
42 * than pointers, you can use a previous release of this malloc
43 * (e.g. 2.7.2) supporting these.)
44 *
45 * Alignment: 8 bytes (minimum)
46 * This suffices for nearly all current machines and C compilers.
47 * However, you can define MALLOC_ALIGNMENT to be wider than this
48 * if necessary (up to 128bytes), at the expense of using more space.
49 *
50 * Minimum overhead per allocated chunk: 4 or 8 bytes (if 4byte sizes)
51 * 8 or 16 bytes (if 8byte sizes)
52 * Each malloced chunk has a hidden word of overhead holding size
53 * and status information, and additional cross-check word
54 * if FOOTERS is defined.
55 *
56 * Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
57 * 8-byte ptrs: 32 bytes (including overhead)
58 *
59 * Even a request for zero bytes (i.e., malloc(0)) returns a
60 * pointer to something of the minimum allocatable size.
61 * The maximum overhead wastage (i.e., number of extra bytes
62 * allocated than were requested in malloc) is less than or equal
63 * to the minimum size, except for requests >= mmap_threshold that
64 * are serviced via mmap(), where the worst case wastage is about
65 * 32 bytes plus the remainder from a system page (the minimal
66 * mmap unit); typically 4096 or 8192 bytes.
67 *
68 * Security: static-safe; optionally more or less
69 * The "security" of malloc refers to the ability of malicious
70 * code to accentuate the effects of errors (for example, freeing
71 * space that is not currently malloc'ed or overwriting past the
72 * ends of chunks) in code that calls malloc. This malloc
73 * guarantees not to modify any memory locations below the base of
74 * heap, i.e., static variables, even in the presence of usage
75 * errors. The routines additionally detect most improper frees
76 * and reallocs. All this holds as long as the static bookkeeping
77 * for malloc itself is not corrupted by some other means. This
78 * is only one aspect of security -- these checks do not, and
79 * cannot, detect all possible programming errors.
80 *
81 * If FOOTERS is defined nonzero, then each allocated chunk
82 * carries an additional check word to verify that it was malloced
83 * from its space. These check words are the same within each
84 * execution of a program using malloc, but differ across
85 * executions, so externally crafted fake chunks cannot be
86 * freed. This improves security by rejecting frees/reallocs that
87 * could corrupt heap memory, in addition to the checks preventing
88 * writes to statics that are always on. This may further improve
89 * security at the expense of time and space overhead. (Note that
90 * FOOTERS may also be worth using with MSPACES.)
91 *
92 * By default detected errors cause the program to abort (calling
93 * "abort()"). You can override this to instead proceed past
94 * errors by defining PROCEED_ON_ERROR. In this case, a bad free
95 * has no effect, and a malloc that encounters a bad address
96 * caused by user overwrites will ignore the bad address by
97 * dropping pointers and indices to all known memory. This may
98 * be appropriate for programs that should continue if at all
99 * possible in the face of programming errors, although they may
100 * run out of memory because dropped memory is never reclaimed.
101 *
102 * If you don't like either of these options, you can define
103 * CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
104 * else. And if you are sure that your program using malloc has
105 * no errors or vulnerabilities, you can define INSECURE to 1,
106 * which might (or might not) provide a small performance improvement.
107 *
108 * It is also possible to limit the maximum total allocatable
109 * space, using malloc_set_footprint_limit. This is not
110 * designed as a security feature in itself (calls to set limits
111 * are not screened or privileged), but may be useful as one
112 * aspect of a secure implementation.
113 *
114 * Thread-safety: NOT thread-safe unless USE_LOCKS defined non-zero
115 * When USE_LOCKS is defined, each public call to malloc, free,
116 * etc is surrounded with a lock. By default, this uses a plain
117 * pthread mutex, win32 critical section, or a spin-lock if
118 * available for the platform and not disabled by setting
119 * USE_SPIN_LOCKS=0. However, if USE_RECURSIVE_LOCKS is defined,
120 * recursive versions are used instead (which are not required for
121 * base functionality but may be needed in layered extensions).
122 * Using a global lock is not especially fast, and can be a major
123 * bottleneck. It is designed only to provide minimal protection
124 * in concurrent environments, and to provide a basis for
125 * extensions. If you are using malloc in a concurrent program,
126 * consider instead using nedmalloc
127 * (http://www.nedprod.com/programs/portable/nedmalloc/) or
128 * ptmalloc (See http://www.malloc.de), which are derived from
129 * versions of this malloc.
130 *
131 * System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
132 * This malloc can use unix sbrk or any emulation (invoked using
133 * the CALL_MORECORE macro) and/or mmap/munmap or any emulation
134 * (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
135 * memory. On most unix systems, it tends to work best if both
136 * MORECORE and MMAP are enabled. On Win32, it uses emulations
137 * based on VirtualAlloc. It also uses common C library functions
138 * like memset.
139 *
140 * Compliance: I believe it is compliant with the Single Unix Specification
141 * (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
142 * others as well.
143 *
144 * Overview of algorithms
145 *
146 * This is not the fastest, most space-conserving, most portable, or
147 * most tunable malloc ever written. However it is among the fastest
148 * while also being among the most space-conserving, portable and
149 * tunable. Consistent balance across these factors results in a good
150 * general-purpose allocator for malloc-intensive programs.
151 *
152 * In most ways, this malloc is a best-fit allocator. Generally, it
153 * chooses the best-fitting existing chunk for a request, with ties
154 * broken in approximately least-recently-used order. (This strategy
155 * normally maintains low fragmentation.) However, for requests less
156 * than 256bytes, it deviates from best-fit when there is not an
157 * exactly fitting available chunk by preferring to use space adjacent
158 * to that used for the previous small request, as well as by breaking
159 * ties in approximately most-recently-used order. (These enhance
160 * locality of series of small allocations.) And for very large requests
161 * (>= 256Kb by default), it relies on system memory mapping
162 * facilities, if supported. (This helps avoid carrying around and
163 * possibly fragmenting memory used only for large chunks.)
164 *
165 * All operations (except malloc_stats and mallinfo) have execution
166 * times that are bounded by a constant factor of the number of bits in
167 * a size_t, not counting any clearing in calloc or copying in realloc,
168 * or actions surrounding MORECORE and MMAP that have times
169 * proportional to the number of non-contiguous regions returned by
170 * system allocation routines, which is often just 1. In real-time
171 * applications, you can optionally suppress segment traversals using
172 * NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
173 * system allocators return non-contiguous spaces, at the typical
174 * expense of carrying around more memory and increased fragmentation.
175 *
176 * The implementation is not very modular and seriously overuses
177 * macros. Perhaps someday all C compilers will do as good a job
178 * inlining modular code as can now be done by brute-force expansion,
179 * but now, enough of them seem not to.
180 *
181 * Some compilers issue a lot of warnings about code that is
182 * dead/unreachable only on some platforms, and also about intentional
183 * uses of negation on unsigned types. All known cases of each can be
184 * ignored.
185 *
186 * For a longer but out of date high-level description, see
187 * http://gee.cs.oswego.edu/dl/html/malloc.html
188 *
189 * MSPACES
190 * If MSPACES is defined, then in addition to malloc, free, etc.,
191 * this file also defines mspace_malloc, mspace_free, etc. These
192 * are versions of malloc routines that take an "mspace" argument
193 * obtained using create_mspace, to control all internal bookkeeping.
194 * If ONLY_MSPACES is defined, only these versions are compiled.
195 * So if you would like to use this allocator for only some allocations,
196 * and your system malloc for others, you can compile with
197 * ONLY_MSPACES and then do something like...
198 * static mspace mymspace = create_mspace(0,0); // for example
199 #define mymalloc(bytes) mspace_malloc(mymspace, bytes)
200 *
201 * (Note: If you only need one instance of an mspace, you can instead
202 * use "USE_DL_PREFIX" to relabel the global malloc.)
203 *
204 * You can similarly create thread-local allocators by storing
205 * mspaces as thread-locals. For example:
206 * static __thread mspace tlms = 0;
207 * void* tlmalloc(size_t bytes) {
208 * if (tlms == 0) tlms = create_mspace(0, 0);
209 * return mspace_malloc(tlms, bytes);
210 * }
211 * void tlfree(void* mem) { mspace_free(tlms, mem); }
212 *
213 * Unless FOOTERS is defined, each mspace is completely independent.
214 * You cannot allocate from one and free to another (although
215 * conformance is only weakly checked, so usage errors are not always
216 * caught). If FOOTERS is defined, then each chunk carries around a tag
217 * indicating its originating mspace, and frees are directed to their
218 * originating spaces. Normally, this requires use of locks.
219 *
220 * ------------------------- Compile-time options ---------------------------
221 *
222 * Be careful in setting #define values for numerical constants of type
223 * size_t. On some systems, literal values are not automatically extended
224 * to size_t precision unless they are explicitly casted. You can also
225 * use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.
226 *
227 * WIN32 default: defined if _WIN32 defined
228 * Defining WIN32 sets up defaults for MS environment and compilers.
229 * Otherwise defaults are for unix. Beware that there seem to be some
230 * cases where this malloc might not be a pure drop-in replacement for
231 * Win32 malloc: Random-looking failures from Win32 GDI APIs (e.g.,
232 * SetDIBits()) may be due to bugs in some video driver implementations
233 * when pixel buffers are malloc()ed, and the region spans more than
234 * one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
235 * default granularity, pixel buffers may straddle virtual allocation
236 * regions more often than when using the Microsoft allocator. You can
237 * avoid this by using VirtualAlloc() and VirtualFree() for all pixel
238 * buffers rather than using malloc(). If this is not possible,
239 * recompile this malloc with a larger DEFAULT_GRANULARITY. Note:
240 * in cases where MSC and gcc (cygwin) are known to differ on WIN32,
241 * conditions use _MSC_VER to distinguish them.
242 *
243 * DLMALLOC_EXPORT default: extern
244 * Defines how public APIs are declared. If you want to export via a
245 * Windows DLL, you might define this as
246 #define DLMALLOC_EXPORT extern __declspec(dllexport)
247 * If you want a POSIX ELF shared object, you might use
248 #define DLMALLOC_EXPORT extern __attribute__((visibility("default")))
249 *
250 * MALLOC_ALIGNMENT default: (size_t)(2 * sizeof(void *))
251 * Controls the minimum alignment for malloc'ed chunks. It must be a
252 * power of two and at least 8, even on machines for which smaller
253 * alignments would suffice. It may be defined as larger than this
254 * though. Note however that code and data structures are optimized for
255 * the case of 8-byte alignment.
256 *
257 * MSPACES default: 0 (false)
258 * If true, compile in support for independent allocation spaces.
259 * This is only supported if HAVE_MMAP is true.
260 *
261 * ONLY_MSPACES default: 0 (false)
262 * If true, only compile in mspace versions, not regular versions.
263 *
264 * USE_LOCKS default: 0 (false)
265 * Causes each call to each public routine to be surrounded with
266 * pthread or WIN32 mutex lock/unlock. (If set true, this can be
267 * overridden on a per-mspace basis for mspace versions.) If set to a
268 * non-zero value other than 1, locks are used, but their
269 * implementation is left out, so lock functions must be supplied manually,
270 * as described below.
271 *
272 * USE_SPIN_LOCKS default: 1 iff USE_LOCKS and spin locks available
273 * If true, uses custom spin locks for locking. This is currently
274 * supported only for gcc >= 4.1, older gccs on x86 platforms, and recent
275 * MS compilers. Otherwise, posix locks or win32 critical sections are
276 * used.
277 *
278 * USE_RECURSIVE_LOCKS default: not defined
279 * If defined nonzero, uses recursive (aka reentrant) locks, otherwise
280 * uses plain mutexes. This is not required for malloc proper, but may
281 * be needed for layered allocators such as nedmalloc.
282 *
283 * LOCK_AT_FORK default: not defined
284 * If defined nonzero, performs pthread_atfork upon initialization
285 * to initialize child lock while holding parent lock. The implementation
286 * assumes that pthread locks (not custom locks) are being used. In other
287 * cases, you may need to customize the implementation.
288 *
289 * FOOTERS default: 0
290 * If true, provide extra checking and dispatching by placing
291 * information in the footers of allocated chunks. This adds
292 * space and time overhead.
293 *
294 * INSECURE default: 0
295 * If true, omit checks for usage errors and heap space overwrites.
296 *
297 * USE_DL_PREFIX default: NOT defined
298 * Causes compiler to prefix all public routines with the string 'dl'.
299 * This can be useful when you only want to use this malloc in one part
300 * of a program, using your regular system malloc elsewhere.
301 *
302 * MALLOC_INSPECT_ALL default: NOT defined
303 * If defined, compiles malloc_inspect_all and mspace_inspect_all, that
304 * perform traversal of all heap space. Unless access to these
305 * functions is otherwise restricted, you probably do not want to
306 * include them in secure implementations.
307 *
308 * ABORT default: defined as abort()
309 * Defines how to abort on failed checks. On most systems, a failed
310 * check cannot die with an "assert" or even print an informative
311 * message, because the underlying print routines in turn call malloc,
312 * which will fail again. Generally, the best policy is to simply call
313 * abort(). It's not very useful to do more than this because many
314 * errors due to overwriting will show up as address faults (null, odd
315 * addresses etc) rather than malloc-triggered checks, so will also
316 * abort. Also, most compilers know that abort() does not return, so
317 * can better optimize code conditionally calling it.
318 *
319 * PROCEED_ON_ERROR default: defined as 0 (false)
320 * Controls whether detected bad addresses cause them to be bypassed
321 * rather than aborting. If set, detected bad arguments to free and
322 * realloc are ignored. And all bookkeeping information is zeroed out
323 * upon a detected overwrite of freed heap space, thus losing the
324 * ability to ever return it from malloc again, but enabling the
325 * application to proceed. If PROCEED_ON_ERROR is defined, the
326 * static variable malloc_corruption_error_count is compiled in
327 * and can be examined to see if errors have occurred. This option
328 * generates slower code than the default abort policy.
329 *
330 * DEBUG default: NOT defined
331 * The DEBUG setting is mainly intended for people trying to modify
332 * this code or diagnose problems when porting to new platforms.
333 * However, it may also be able to better isolate user errors than just
334 * using runtime checks. The assertions in the check routines spell
335 * out in more detail the assumptions and invariants underlying the
336 * algorithms. The checking is fairly extensive, and will slow down
337 * execution noticeably. Calling malloc_stats or mallinfo with DEBUG
338 * set will attempt to check every non-mmapped allocated and free chunk
339 * in the course of computing the summaries.
340 *
341 * ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
342 * Debugging assertion failures can be nearly impossible if your
343 * version of the assert macro causes malloc to be called, which will
344 * lead to a cascade of further failures, blowing the runtime stack.
345 * ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
346 * which will usually make debugging easier.
347 *
348 * MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
349 * The action to take before "return 0" when malloc is unable to
350 * return memory because there is none available.
351 *
352 * HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
353 * True if this system supports sbrk or an emulation of it.
354 *
355 * MORECORE default: sbrk
356 * The name of the sbrk-style system routine to call to obtain more
357 * memory. See below for guidance on writing custom MORECORE
358 * functions. The type of the argument to sbrk/MORECORE varies across
359 * systems. It cannot be size_t, because it supports negative
360 * arguments, so it is normally the signed type of the same width as
361 * size_t (sometimes declared as "intptr_t"). It doesn't much matter
362 * though. Internally, we only call it with arguments less than half
363 * the max value of a size_t, which should work across all reasonable
364 * possibilities, although sometimes generating compiler warnings.
365 *
366 * MORECORE_CONTIGUOUS default: 1 (true) if HAVE_MORECORE
367 * If true, take advantage of fact that consecutive calls to MORECORE
368 * with positive arguments always return contiguous increasing
369 * addresses. This is true of unix sbrk. It does not hurt too much to
370 * set it true anyway, since malloc copes with non-contiguities.
371 * Setting it false when definitely non-contiguous saves time
372 * and possibly wasted space it would take to discover this though.
373 *
374 * MORECORE_CANNOT_TRIM default: NOT defined
375 * True if MORECORE cannot release space back to the system when given
376 * negative arguments. This is generally necessary only if you are
377 * using a hand-crafted MORECORE function that cannot handle negative
378 * arguments.
379 *
380 * NO_SEGMENT_TRAVERSAL default: 0
381 * If non-zero, suppresses traversals of memory segments
382 * returned by either MORECORE or CALL_MMAP. This disables
383 * merging of segments that are contiguous, and selectively
384 * releasing them to the OS if unused, but bounds execution times.
385 *
386 * HAVE_MMAP default: 1 (true)
387 * True if this system supports mmap or an emulation of it. If so, and
388 * HAVE_MORECORE is not true, MMAP is used for all system
389 * allocation. If set and HAVE_MORECORE is true as well, MMAP is
390 * primarily used to directly allocate very large blocks. It is also
391 * used as a backup strategy in cases where MORECORE fails to provide
392 * space from system. Note: A single call to MUNMAP is assumed to be
393 * able to unmap memory that may have been allocated using multiple calls
394 * to MMAP, so long as they are adjacent.
395 *
396 * HAVE_MREMAP default: 1 on linux, else 0
397 * If true realloc() uses mremap() to re-allocate large blocks and
398 * extend or shrink allocation spaces.
399 *
400 * MMAP_CLEARS default: 1 except on WINCE.
401 * True if mmap clears memory so calloc doesn't need to. This is true
402 * for standard unix mmap using /dev/zero and on WIN32 except for WINCE.
403 *
404 * USE_BUILTIN_FFS default: 0 (i.e., not used)
405 * Causes malloc to use the builtin ffs() function to compute indices.
406 * Some compilers may recognize and intrinsify ffs to be faster than the
407 * supplied C version. Also, the case of x86 using gcc is special-cased
408 * to an asm instruction, so is already as fast as it can be, and so
409 * this setting has no effect. Similarly for Win32 under recent MS compilers.
410 * (On most x86s, the asm version is only slightly faster than the C version.)
411 *
412 * malloc_getpagesize default: derive from system includes, or 4096.
413 * The system page size. To the extent possible, this malloc manages
414 * memory from the system in page-size units. This may be (and
415 * usually is) a function rather than a constant. This is ignored
416 * if WIN32, where page size is determined using getSystemInfo during
417 * initialization.
418 *
419 * USE_DEV_RANDOM default: 0 (i.e., not used)
420 * Causes malloc to use /dev/random to initialize secure magic seed for
421 * stamping footers. Otherwise, the current time is used.
422 *
423 * NO_MALLINFO default: 0
424 * If defined, don't compile "mallinfo". This can be a simple way
425 * of dealing with mismatches between system declarations and
426 * those in this file.
427 *
428 * MALLINFO_FIELD_TYPE default: size_t
429 * The type of the fields in the mallinfo struct. This was originally
430 * defined as "int" in SVID etc, but is more usefully defined as
431 * size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
432 *
433 * NO_MALLOC_STATS default: 0
434 * If defined, don't compile "malloc_stats". This avoids calls to
435 * fprintf and bringing in stdio dependencies you might not want.
436 *
437 * REALLOC_ZERO_BYTES_FREES default: not defined
438 * This should be set if a call to realloc with zero bytes should
439 * be the same as a call to free. Some people think it should. Otherwise,
440 * since this malloc returns a unique pointer for malloc(0), so does
441 * realloc(p, 0).
442 *
443 * LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
444 * LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
445 * LACKS_STDLIB_H LACKS_SCHED_H LACKS_TIME_H default: NOT defined unless on WIN32
446 * Define these if your system does not have these header files.
447 * You might need to manually insert some of the declarations they provide.
448 *
449 * DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
450 * system_info.dwAllocationGranularity in WIN32,
451 * otherwise 64K.
452 * Also settable using mallopt(M_GRANULARITY, x)
453 * The unit for allocating and deallocating memory from the system. On
454 * most systems with contiguous MORECORE, there is no reason to
455 * make this more than a page. However, systems with MMAP tend to
456 * either require or encourage larger granularities. You can increase
457 * this value to prevent system allocation functions from being called so
458 * often, especially if they are slow. The value must be at least one
459 * page and must be a power of two. Setting to 0 causes initialization
460 * to either page size or win32 region size. (Note: In previous
461 * versions of malloc, the equivalent of this option was called
462 * "TOP_PAD")
463 *
464 * DEFAULT_TRIM_THRESHOLD default: 2MB
465 * Also settable using mallopt(M_TRIM_THRESHOLD, x)
466 * The maximum amount of unused top-most memory to keep before
467 * releasing via malloc_trim in free(). Automatic trimming is mainly
468 * useful in long-lived programs using contiguous MORECORE. Because
469 * trimming via sbrk can be slow on some systems, and can sometimes be
470 * wasteful (in cases where programs immediately afterward allocate
471 * more large chunks) the value should be high enough so that your
472 * overall system performance would improve by releasing this much
473 * memory. As a rough guide, you might set to a value close to the
474 * average size of a process (program) running on your system.
475 * Releasing this much memory would allow such a process to run in
476 * memory. Generally, it is worth tuning trim thresholds when a
477 * program undergoes phases where several large chunks are allocated
478 * and released in ways that can reuse each other's storage, perhaps
479 * mixed with phases where there are no such chunks at all. The trim
480 * value must be greater than page size to have any useful effect. To
481 * disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
482 * some people use of mallocing a huge space and then freeing it at
483 * program startup, in an attempt to reserve system memory, doesn't
484 * have the intended effect under automatic trimming, since that memory
485 * will immediately be returned to the system.
486 *
487 * DEFAULT_MMAP_THRESHOLD default: 256K
488 * Also settable using mallopt(M_MMAP_THRESHOLD, x)
489 * The request size threshold for using MMAP to directly service a
490 * request. Requests of at least this size that cannot be allocated
491 * using already-existing space will be serviced via mmap. (If enough
492 * normal freed space already exists it is used instead.) Using mmap
493 * segregates relatively large chunks of memory so that they can be
494 * individually obtained and released from the host system. A request
495 * serviced through mmap is never reused by any other request (at least
496 * not directly; the system may just so happen to remap successive
497 * requests to the same locations). Segregating space in this way has
498 * the benefits that: Mmapped space can always be individually released
499 * back to the system, which helps keep the system level memory demands
500 * of a long-lived program low. Also, mapped memory doesn't become
501 * `locked' between other chunks, as can happen with normally allocated
502 * chunks, which means that even trimming via malloc_trim would not
503 * release them. However, it has the disadvantage that the space
504 * cannot be reclaimed, consolidated, and then used to service later
505 * requests, as happens with normal chunks. The advantages of mmap
506 * nearly always outweigh disadvantages for "large" chunks, but the
507 * value of "large" may vary across systems. The default is an
508 * empirically derived value that works well in most systems. You can
509 * disable mmap by setting to MAX_SIZE_T.
510 *
511 * MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
512 * The number of consolidated frees between checks to release
513 * unused segments when freeing. When using non-contiguous segments,
514 * especially with multiple mspaces, checking only for topmost space
515 * doesn't always suffice to trigger trimming. To compensate for this,
516 * free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
517 * current number of segments, if greater) try to release unused
518 * segments to the OS when freeing chunks that result in
519 * consolidation. The best value for this parameter is a compromise
520 * between slowing down frees with relatively costly checks that
521 * rarely trigger versus holding on to unused memory. To effectively
522 * disable, set to MAX_SIZE_T. This may lead to a very slight speed
523 * improvement at the expense of carrying around more memory.
524 */
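/*
 * Build sketch (illustrative, not part of the original file): one
 * plausible way to embed this allocator next to the system malloc is to
 * compile it with the dl-prefixed entry points and locking enabled:
 *
 *   cc -O3 -DUSE_DL_PREFIX -DUSE_LOCKS=1 -c malloc.c
 *
 * Code then calls dlmalloc()/dlfree() explicitly while the rest of the
 * program keeps using the system malloc. The exact option set is
 * platform dependent; see the compile-time options above.
 */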
525
526/* Version identifier to allow people to support multiple versions */
527#ifndef DLMALLOC_VERSION
528#define DLMALLOC_VERSION 20806
529#endif /* DLMALLOC_VERSION */
530
531#ifndef DLMALLOC_EXPORT
532#define DLMALLOC_EXPORT extern
533#endif
534
535#ifndef WIN32
536#ifdef _WIN32
537#define WIN32 1
538#endif /* _WIN32 */
539#ifdef _WIN32_WCE
540#define LACKS_FCNTL_H
541#define WIN32 1
542#endif /* _WIN32_WCE */
543#endif /* WIN32 */
544#ifdef WIN32
545#define WIN32_LEAN_AND_MEAN
546#include <windows.h>
547#include <tchar.h>
548#define HAVE_MMAP 0
549#define HAVE_MORECORE 0
550#define LACKS_UNISTD_H
551#define LACKS_SYS_PARAM_H
552#define LACKS_SYS_MMAN_H
553#define LACKS_STRING_H
554#define LACKS_STRINGS_H
555#define LACKS_SYS_TYPES_H
556#define LACKS_ERRNO_H
557#define LACKS_SCHED_H
558#ifndef MALLOC_FAILURE_ACTION
559#define MALLOC_FAILURE_ACTION
560#endif /* MALLOC_FAILURE_ACTION */
561#ifndef MMAP_CLEARS
562#ifdef _WIN32_WCE /* WINCE reportedly does not clear */
563#define MMAP_CLEARS 0
564#else
565#define MMAP_CLEARS 1
566#endif /* _WIN32_WCE */
567#endif /*MMAP_CLEARS */
568#endif /* WIN32 */
569
570#if defined(DARWIN) || defined(_DARWIN)
571/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
572#ifndef HAVE_MORECORE
573#define HAVE_MORECORE 0
574#define HAVE_MMAP 0
575/* OSX allocators provide 16 byte alignment */
576#ifndef MALLOC_ALIGNMENT
577#define MALLOC_ALIGNMENT ((size_t) 16U)
578#endif
579#endif /* HAVE_MORECORE */
580#endif /* DARWIN */
581
582#ifndef LACKS_SYS_TYPES_H
583#include <sys/types.h> /* For size_t */
584#endif /* LACKS_SYS_TYPES_H */
585
586/* The maximum possible size_t value has all bits set */
587#define MAX_SIZE_T (~(size_t) 0)
588
589#ifndef USE_LOCKS /* ensure true if spin or recursive locks set */
590#define USE_LOCKS \
591 ((defined(USE_SPIN_LOCKS) && USE_SPIN_LOCKS != 0) || \
592 (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0))
593#endif /* USE_LOCKS */
594
595#if USE_LOCKS /* Spin locks for gcc >= 4.1, older gcc on x86, MSC >= 1310 */
596#if ((defined(__GNUC__) && \
597 ((__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1)) || \
598 defined(__i386__) || defined(__x86_64__))) || \
599 (defined(_MSC_VER) && _MSC_VER >= 1310))
600#ifndef USE_SPIN_LOCKS
601#define USE_SPIN_LOCKS 1
602#endif /* USE_SPIN_LOCKS */
603#elif USE_SPIN_LOCKS
604#error "USE_SPIN_LOCKS defined without implementation"
605#endif /* ... locks available... */
606#elif !defined(USE_SPIN_LOCKS)
607#define USE_SPIN_LOCKS 0
608#endif /* USE_LOCKS */
609
610#ifndef ONLY_MSPACES
611#define ONLY_MSPACES 0
612#endif /* ONLY_MSPACES */
613#ifndef MSPACES
614#if ONLY_MSPACES
615#define MSPACES 1
616#else /* ONLY_MSPACES */
617#define MSPACES 0
618#endif /* ONLY_MSPACES */
619#endif /* MSPACES */
620#ifndef MALLOC_ALIGNMENT
621#define MALLOC_ALIGNMENT ((size_t) (2 * sizeof(void *)))
622#endif /* MALLOC_ALIGNMENT */
623#ifndef FOOTERS
624#define FOOTERS 0
625#endif /* FOOTERS */
626#ifndef ABORT
627#define ABORT abort()
628#endif /* ABORT */
629#ifndef ABORT_ON_ASSERT_FAILURE
630#define ABORT_ON_ASSERT_FAILURE 1
631#endif /* ABORT_ON_ASSERT_FAILURE */
632#ifndef PROCEED_ON_ERROR
633#define PROCEED_ON_ERROR 0
634#endif /* PROCEED_ON_ERROR */
635
636#ifndef INSECURE
637#define INSECURE 0
638#endif /* INSECURE */
639#ifndef MALLOC_INSPECT_ALL
640#define MALLOC_INSPECT_ALL 0
641#endif /* MALLOC_INSPECT_ALL */
642#ifndef HAVE_MMAP
643#define HAVE_MMAP 0
644#endif /* HAVE_MMAP */
645#ifndef MMAP_CLEARS
646#define MMAP_CLEARS 1
647#endif /* MMAP_CLEARS */
648#ifndef HAVE_MREMAP
649#ifdef linux
650#define HAVE_MREMAP 1
651#define _GNU_SOURCE /* Turns on mremap() definition */
652#else /* linux */
653#define HAVE_MREMAP 0
654#endif /* linux */
655#endif /* HAVE_MREMAP */
656#ifndef MALLOC_FAILURE_ACTION
657#define MALLOC_FAILURE_ACTION errno = ENOMEM;
658#endif /* MALLOC_FAILURE_ACTION */
659#ifndef HAVE_MORECORE
660#if ONLY_MSPACES
661#define HAVE_MORECORE 0
662#else /* ONLY_MSPACES */
663#define HAVE_MORECORE 1
664#endif /* ONLY_MSPACES */
665#endif /* HAVE_MORECORE */
666#if !HAVE_MORECORE
667#define MORECORE_CONTIGUOUS 0
668#else /* !HAVE_MORECORE */
669#define MORECORE_DEFAULT sbrk
670#ifndef MORECORE_CONTIGUOUS
671#define MORECORE_CONTIGUOUS 1
672#endif /* MORECORE_CONTIGUOUS */
673#endif /* HAVE_MORECORE */
674#ifndef DEFAULT_GRANULARITY
675#if (MORECORE_CONTIGUOUS || defined(WIN32))
676#define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
677#else /* MORECORE_CONTIGUOUS */
678#define DEFAULT_GRANULARITY ((size_t) 64U * (size_t) 1024U)
679#endif /* MORECORE_CONTIGUOUS */
680#endif /* DEFAULT_GRANULARITY */
681#ifndef DEFAULT_TRIM_THRESHOLD
682#ifndef MORECORE_CANNOT_TRIM
683#define DEFAULT_TRIM_THRESHOLD ((size_t) 2U * (size_t) 1024U * (size_t) 1024U)
684#else /* MORECORE_CANNOT_TRIM */
685#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
686#endif /* MORECORE_CANNOT_TRIM */
687#endif /* DEFAULT_TRIM_THRESHOLD */
688#ifndef DEFAULT_MMAP_THRESHOLD
689#if HAVE_MMAP
690#define DEFAULT_MMAP_THRESHOLD ((size_t) 256U * (size_t) 1024U)
691#else /* HAVE_MMAP */
692#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
693#endif /* HAVE_MMAP */
694#endif /* DEFAULT_MMAP_THRESHOLD */
695#ifndef MAX_RELEASE_CHECK_RATE
696#if HAVE_MMAP
697#define MAX_RELEASE_CHECK_RATE 4095
698#else
699#define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
700#endif /* HAVE_MMAP */
701#endif /* MAX_RELEASE_CHECK_RATE */
702#ifndef USE_BUILTIN_FFS
703#define USE_BUILTIN_FFS 0
704#endif /* USE_BUILTIN_FFS */
705#ifndef USE_DEV_RANDOM
706#define USE_DEV_RANDOM 0
707#endif /* USE_DEV_RANDOM */
708#ifndef NO_MALLINFO
709#define NO_MALLINFO 0
710#endif /* NO_MALLINFO */
711#ifndef MALLINFO_FIELD_TYPE
712#define MALLINFO_FIELD_TYPE size_t
713#endif /* MALLINFO_FIELD_TYPE */
714#ifndef NO_MALLOC_STATS
715#define NO_MALLOC_STATS 0
716#endif /* NO_MALLOC_STATS */
717#ifndef NO_SEGMENT_TRAVERSAL
718#define NO_SEGMENT_TRAVERSAL 0
719#endif /* NO_SEGMENT_TRAVERSAL */
720
721/*
722 * mallopt tuning options. SVID/XPG defines four standard parameter
723 * numbers for mallopt, normally defined in malloc.h. None of these
724 * are used in this malloc, so setting them has no effect. But this
725 * malloc does support the following options.
726 */
727
728#ifndef HAVE_USR_INCLUDE_MALLOC_H
729#define M_TRIM_THRESHOLD (-1)
730#endif
731#define M_GRANULARITY (-2)
732#ifndef HAVE_USR_INCLUDE_MALLOC_H
733#define M_MMAP_THRESHOLD (-3)
734#endif
735/* ------------------------ Mallinfo declarations ------------------------ */
736
737#if !NO_MALLINFO
738
739/*
740 * This version of malloc supports the standard SVID/XPG mallinfo
741 * routine that returns a struct containing usage properties and
742 * statistics. It should work on any system that has a
743 * /usr/include/malloc.h defining struct mallinfo. The main
744 * declaration needed is the mallinfo struct that is returned (by-copy)
745 * by mallinfo(). The mallinfo struct contains a bunch of fields that
746 * are not even meaningful in this version of malloc. These fields
747 * are instead filled by mallinfo() with other numbers that might be of
748 * interest.
749 *
750 * HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
751 * /usr/include/malloc.h file that includes a declaration of struct
752 * mallinfo. If so, it is included; else a compliant version is
753 * declared below. These must be precisely the same for mallinfo() to
754 * work. The original SVID version of this struct, defined on most
755 * systems with mallinfo, declares all fields as ints. But some others
756 * define them as unsigned long. If your system defines the fields using a
757 * type of different width than listed here, you MUST #include your
758 * system version and #define HAVE_USR_INCLUDE_MALLOC_H.
759 */
760
761/* #define HAVE_USR_INCLUDE_MALLOC_H */
762
763#ifdef HAVE_USR_INCLUDE_MALLOC_H
764#include "/usr/include/malloc.h"
765#else /* HAVE_USR_INCLUDE_MALLOC_H */
766#ifndef STRUCT_MALLINFO_DECLARED
767/* HP-UX (and others?) redefines mallinfo unless _STRUCT_MALLINFO is defined */
768#define _STRUCT_MALLINFO
769#define STRUCT_MALLINFO_DECLARED 1
770struct mallinfo
771{
772 MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */
773 MALLINFO_FIELD_TYPE ordblks; /* number of free chunks */
774 MALLINFO_FIELD_TYPE smblks; /* always 0 */
775 MALLINFO_FIELD_TYPE hblks; /* always 0 */
776 MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */
777 MALLINFO_FIELD_TYPE usmblks; /* maximum total allocated space */
778 MALLINFO_FIELD_TYPE fsmblks; /* always 0 */
779 MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
780 MALLINFO_FIELD_TYPE fordblks; /* total free space */
781 MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
782};
783#endif /* STRUCT_MALLINFO_DECLARED */
784#endif /* HAVE_USR_INCLUDE_MALLOC_H */
785#endif /* NO_MALLINFO */
786
787/*
788 * Try to persuade compilers to inline. The most critical functions for
789 * inlining are defined as macros, so these aren't used for them.
790 */
791
792#ifndef FORCEINLINE
793#if defined(__GNUC__)
794#define FORCEINLINE __inline __attribute__((always_inline))
795#elif defined(_MSC_VER)
796#define FORCEINLINE __forceinline
797#endif
798#endif
799#ifndef NOINLINE
800#if defined(__GNUC__)
801#define NOINLINE __attribute__((noinline))
802#elif defined(_MSC_VER)
803#define NOINLINE __declspec(noinline)
804#else
805#define NOINLINE
806#endif
807#endif
808
809#ifdef __cplusplus
810extern "C" {
811#ifndef FORCEINLINE
812#define FORCEINLINE inline
813#endif
814#endif /* __cplusplus */
815#ifndef FORCEINLINE
816#define FORCEINLINE
817#endif
818
819#if !ONLY_MSPACES
820
821/* ------------------- Declarations of public routines ------------------- */
822
823#ifndef USE_DL_PREFIX
824#define dlcalloc calloc
825#define dlfree free
826#define dlmalloc malloc
827#define dlmemalign memalign
828#define dlposix_memalign posix_memalign
829#define dlrealloc realloc
830#define dlrealloc_in_place realloc_in_place
831#define dlvalloc valloc
832#define dlpvalloc pvalloc
833#define dlmallinfo mallinfo
834#define dlmallopt mallopt
835#define dlmalloc_trim malloc_trim
836#define dlmalloc_stats malloc_stats
837#define dlmalloc_usable_size malloc_usable_size
838#define dlmalloc_footprint malloc_footprint
839#define dlmalloc_max_footprint malloc_max_footprint
840#define dlmalloc_footprint_limit malloc_footprint_limit
841#define dlmalloc_set_footprint_limit malloc_set_footprint_limit
842#define dlmalloc_inspect_all malloc_inspect_all
843#define dlindependent_calloc independent_calloc
844#define dlindependent_comalloc independent_comalloc
845#define dlbulk_free bulk_free
846#endif /* USE_DL_PREFIX */
847
848/*
849 * malloc(size_t n)
850 * Returns a pointer to a newly allocated chunk of at least n bytes, or
851 * null if no space is available, in which case errno is set to ENOMEM
852 * on ANSI C systems.
853 *
854 * If n is zero, malloc returns a minimum-sized chunk. (The minimum
855 * size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
856 * systems.) Note that size_t is an unsigned type, so calls with
857 * arguments that would be negative if signed are interpreted as
858 * requests for huge amounts of space, which will often fail. The
859 * maximum supported value of n differs across systems, but is in all
860 * cases less than the maximum representable value of a size_t.
861 */
862DLMALLOC_EXPORT void* dlmalloc(size_t);
863
864/*
865 * free(void* p)
866 * Releases the chunk of memory pointed to by p, that had been previously
867 * allocated using malloc or a related routine such as realloc.
868 * It has no effect if p is null. If p was not malloced or already
869 * freed, free(p) will by default cause the current program to abort.
870 */
871DLMALLOC_EXPORT void dlfree(void *);
872
873/*
874 * calloc(size_t n_elements, size_t element_size);
875 * Returns a pointer to n_elements * element_size bytes, with all locations
876 * set to zero.
877 */
878DLMALLOC_EXPORT void* dlcalloc(size_t, size_t);
879
880/*
881 * realloc(void* p, size_t n)
882 * Returns a pointer to a chunk of size n that contains the same data
883 * as does chunk p up to the minimum of (n, p's size) bytes, or null
884 * if no space is available.
885 *
886 * The returned pointer may or may not be the same as p. The algorithm
887 * prefers extending p in most cases when possible, otherwise it
888 * employs the equivalent of a malloc-copy-free sequence.
889 *
890 * If p is null, realloc is equivalent to malloc.
891 *
892 * If space is not available, realloc returns null, errno is set (if on
893 * ANSI) and p is NOT freed.
894 *
895 * if n is for fewer bytes than already held by p, the newly unused
896 * space is lopped off and freed if possible. realloc with a size
897 * argument of zero (re)allocates a minimum-sized chunk.
898 *
899 * The old unix realloc convention of allowing the last-free'd chunk
900 * to be used as an argument to realloc is not supported.
901 */
902DLMALLOC_EXPORT void* dlrealloc(void *, size_t);
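/*
 * Usage sketch (illustrative, not from the original sources): because a
 * failed realloc returns null and does NOT free p, keep the old pointer
 * until the new one is known to be valid. grow() is a hypothetical helper.
 *
 *   void* grow(void* p, size_t n) {
 *     void* q = realloc(p, n);
 *     if (q == 0) {     // p is still valid here and still owned by the caller
 *       free(p);
 *       return 0;
 *     }
 *     return q;
 *   }
 */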
903
904/*
905 * realloc_in_place(void* p, size_t n)
906 * Resizes the space allocated for p to size n, only if this can be
907 * done without moving p (i.e., only if there is adjacent space
908 * available if n is greater than p's current allocated size, or n is
909 * less than or equal to p's size). This may be used instead of plain
910 * realloc if an alternative allocation strategy is needed upon failure
911 * to expand space; for example, reallocation of a buffer that must be
912 * memory-aligned or cleared. You can use realloc_in_place to trigger
913 * these alternatives only when needed.
914 *
915 * Returns p if successful; otherwise null.
916 */
917DLMALLOC_EXPORT void* dlrealloc_in_place(void *, size_t);
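/*
 * Usage sketch (illustrative): try to extend in place first and only
 * fall back to an aligned allocate/copy/free sequence when that fails,
 * the alternative-strategy pattern described above. grow_aligned is a
 * hypothetical helper, not part of this file.
 *
 *   void* grow_aligned(void* p, size_t oldn, size_t newn, size_t align) {
 *     if (realloc_in_place(p, newn) != 0)
 *       return p;                        // extended without moving
 *     void* q = memalign(align, newn);   // fallback keeps the alignment
 *     if (q != 0) {
 *       memcpy(q, p, oldn);              // assumes newn >= oldn and <string.h>
 *       free(p);
 *     }
 *     return q;
 *   }
 */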
918
919/*
920 * memalign(size_t alignment, size_t n);
921 * Returns a pointer to a newly allocated chunk of n bytes, aligned
922 * in accord with the alignment argument.
923 *
924 * The alignment argument should be a power of two. If the argument is
925 * not a power of two, the nearest greater power is used.
926 * 8-byte alignment is guaranteed by normal malloc calls, so don't
927 * bother calling memalign with an argument of 8 or less.
928 *
929 * Overreliance on memalign is a sure way to fragment space.
930 */
931DLMALLOC_EXPORT void* dlmemalign(size_t, size_t);
932
933/*
934 * int posix_memalign(void** pp, size_t alignment, size_t n);
935 * Allocates a chunk of n bytes, aligned in accord with the alignment
936 * argument. Differs from memalign only in that it (1) assigns the
937 * allocated memory to *pp rather than returning it, (2) fails and
938 * returns EINVAL if the alignment is not a power of two, and (3) fails and
939 * returns ENOMEM if memory cannot be allocated.
940 */
941DLMALLOC_EXPORT int dlposix_memalign(void **, size_t, size_t);
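/*
 * Usage sketch (illustrative): unlike memalign, failures are reported
 * through the return value rather than a null result.
 *
 *   void* p = 0;
 *   int rc = posix_memalign(&p, 64, 1024);  // 1KB, 64-byte aligned
 *   if (rc == 0)
 *     free(p);
 *   // rc == EINVAL: alignment was not a power of two
 *   // rc == ENOMEM: no space available
 */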
942
943/*
944 * valloc(size_t n);
945 * Equivalent to memalign(pagesize, n), where pagesize is the page
946 * size of the system. If the pagesize is unknown, 4096 is used.
947 */
948DLMALLOC_EXPORT void* dlvalloc(size_t);
949
950/*
951 * mallopt(int parameter_number, int parameter_value)
952 * Sets tunable parameters. The format is to provide a
953 * (parameter-number, parameter-value) pair. mallopt then sets the
954 * corresponding parameter to the argument value if it can (i.e., so
955 * long as the value is meaningful), and returns 1 if successful else
956 * 0. To work around the fact that mallopt is specified to use int,
957 * not size_t parameters, the value -1 is specially treated as the
958 * maximum unsigned size_t value.
959 *
960 * SVID/XPG/ANSI defines four standard param numbers for mallopt,
961 * normally defined in malloc.h. None of these are used in this malloc,
962 * so setting them has no effect. But this malloc also supports other
963 * options in mallopt. See below for details. Briefly, supported
964 * parameters are as follows (listed defaults are for "typical"
965 * configurations).
966 *
967 * Symbol param # default allowed param values
968 * M_TRIM_THRESHOLD -1 2*1024*1024 any (-1 disables)
969 * M_GRANULARITY -2 page size any power of 2 >= page size
970 * M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support)
971 */
972DLMALLOC_EXPORT int dlmallopt(int, int);
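/*
 * Usage sketch (illustrative; the values below are examples, not
 * recommendations):
 *
 *   mallopt(M_GRANULARITY, 128 * 1024);     // 128K system allocation units
 *   mallopt(M_MMAP_THRESHOLD, 1024 * 1024); // mmap only requests >= 1MB
 *   mallopt(M_TRIM_THRESHOLD, -1);          // -1 disables automatic trimming
 */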
973
974/*
975 * malloc_footprint();
976 * Returns the number of bytes obtained from the system. The total
977 * number of bytes allocated by malloc, realloc etc., is less than this
978 * value. Unlike mallinfo, this function returns only a precomputed
979 * result, so can be called frequently to monitor memory consumption.
980 * Even if locks are otherwise defined, this function does not use them,
981 * so results might not be up to date.
982 */
983DLMALLOC_EXPORT size_t dlmalloc_footprint(void);
984
985/*
986 * malloc_max_footprint();
987 * Returns the maximum number of bytes obtained from the system. This
988 * value will be greater than current footprint if deallocated space
989 * has been reclaimed by the system. The peak number of bytes allocated
990 * by malloc, realloc etc., is less than this value. Unlike mallinfo,
991 * this function returns only a precomputed result, so can be called
992 * frequently to monitor memory consumption. Even if locks are
993 * otherwise defined, this function does not use them, so results might
994 * not be up to date.
995 */
996DLMALLOC_EXPORT size_t dlmalloc_max_footprint(void);
997
998/*
999 * malloc_footprint_limit();
1000 * Returns the number of bytes that the heap is allowed to obtain from
1001 * the system, returning the last value returned by
1002 * malloc_set_footprint_limit, or the maximum size_t value if
1003 * never set. The returned value reflects a permission. There is no
1004 * guarantee that this number of bytes can actually be obtained from
1005 * the system.
1006 */
1007DLMALLOC_EXPORT size_t dlmalloc_footprint_limit(void);
1008
1009/*
1010 * malloc_set_footprint_limit();
1011 * Sets the maximum number of bytes to obtain from the system, causing
1012 * failure returns from malloc and related functions upon attempts to
1013 * exceed this value. The argument value may be subject to page
1014 * rounding to an enforceable limit; this actual value is returned.
1015 * Using an argument of the maximum possible size_t effectively
1016 * disables checks. If the argument is less than or equal to the
1017 * current malloc_footprint, then all future allocations that require
1018 * additional system memory will fail. However, invocation cannot
1019 * retroactively deallocate existing used memory.
1020 */
1021DLMALLOC_EXPORT size_t dlmalloc_set_footprint_limit(size_t bytes);
1022
1023#if MALLOC_INSPECT_ALL
1024
1025/*
1026 * malloc_inspect_all(void(*handler)(void *start,
1027 * void *end,
1028 * size_t used_bytes,
1029 * void* callback_arg),
1030 * void* arg);
1031 * Traverses the heap and calls the given handler for each managed
1032 * region, skipping all bytes that are (or may be) used for bookkeeping
1033 * purposes. Traversal does not include chunks that have been
1034 * directly memory mapped. Each reported region begins at the start
1035 * address, and continues up to but not including the end address. The
1036 * first used_bytes of the region contain allocated data. If
1037 * used_bytes is zero, the region is unallocated. The handler is
1038 * invoked with the given callback argument. If locks are defined, they
1039 * are held during the entire traversal. It is a bad idea to invoke
1040 * other malloc functions from within the handler.
1041 *
1042 * For example, to count the number of in-use chunks with size greater
1043 * than 1000, you could write:
1044 * static int count = 0;
1045 * void count_chunks(void* start, void* end, size_t used, void* arg) {
1046 * if (used >= 1000) ++count;
1047 * }
1048 * then:
1049 * malloc_inspect_all(count_chunks, NULL);
1050 *
1051 * malloc_inspect_all is compiled only if MALLOC_INSPECT_ALL is defined.
1052 */
1053DLMALLOC_EXPORT void dlmalloc_inspect_all(void ( *handler )(void *, void *, size_t, void *),
1054 void *arg);
1055
1056#endif /* MALLOC_INSPECT_ALL */
1057
1058#if !NO_MALLINFO
1059
1060/*
1061 * mallinfo()
1062 * Returns (by copy) a struct containing various summary statistics:
1063 *
1064 * arena: current total non-mmapped bytes allocated from system
1065 * ordblks: the number of free chunks
1066 * smblks: always zero.
1067 * hblks: current number of mmapped regions
1068 * hblkhd: total bytes held in mmapped regions
1069 * usmblks: the maximum total allocated space. This will be greater
1070 * than current total if trimming has occurred.
1071 * fsmblks: always zero
1072 * uordblks: current total allocated space (normal or mmapped)
1073 * fordblks: total free space
1074 * keepcost: the maximum number of bytes that could ideally be released
1075 * back to system via malloc_trim. ("ideally" means that
1076 * it ignores page restrictions etc.)
1077 *
1078 * Because these fields are ints, but internal bookkeeping may
1079 * be kept as longs, the reported values may wrap around zero and
1080 * thus be inaccurate.
1081 */
1082DLMALLOC_EXPORT struct mallinfo dlmallinfo(void);
1083#endif /* NO_MALLINFO */
1084
1085/*
1086 * independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
1087 *
1088 * independent_calloc is similar to calloc, but instead of returning a
1089 * single cleared space, it returns an array of pointers to n_elements
1090 * independent elements that can hold contents of size elem_size, each
1091 * of which starts out cleared, and can be independently freed,
1092 * realloc'ed etc. The elements are guaranteed to be adjacently
1093 * allocated (this is not guaranteed to occur with multiple callocs or
1094 * mallocs), which may also improve cache locality in some
1095 * applications.
1096 *
1097 * The "chunks" argument is optional (i.e., may be null, which is
1098 * probably the most typical usage). If it is null, the returned array
1099 * is itself dynamically allocated and should also be freed when it is
1100 * no longer needed. Otherwise, the chunks array must be of at least
1101 * n_elements in length. It is filled in with the pointers to the
1102 * chunks.
1103 *
1104 * In either case, independent_calloc returns this pointer array, or
1105 * null if the allocation failed. If n_elements is zero and "chunks"
1106 * is null, it returns a chunk representing an array with zero elements
1107 * (which should be freed if not wanted).
1108 *
1109 * Each element must be freed when it is no longer needed. This can be
1110 * done all at once using bulk_free.
1111 *
1112 * independent_calloc simplifies and speeds up implementations of many
1113 * kinds of pools. It may also be useful when constructing large data
1114 * structures that initially have a fixed number of fixed-sized nodes,
1115 * but the number is not known at compile time, and some of the nodes
1116 * may later need to be freed. For example:
1117 *
1118 * struct Node { int item; struct Node* next; };
1119 *
1120 * struct Node* build_list() {
1121 * struct Node** pool;
1122 * int n = read_number_of_nodes_needed();
1123 * if (n <= 0) return 0;
1124 * pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0));
1125 * if (pool == 0) die();
1126 * // organize into a linked list...
1127 * struct Node* first = pool[0];
1128 * for (int i = 0; i < n-1; ++i)
1129 * pool[i]->next = pool[i+1];
1130 * free(pool); // Can now free the array (or not, if it is needed later)
1131 * return first;
1132 * }
1133 */
1134DLMALLOC_EXPORT void** dlindependent_calloc(size_t, size_t, void **);
1135
1136/*
1137 * independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
1138 *
1139 * independent_comalloc allocates, all at once, a set of n_elements
1140 * chunks with sizes indicated in the "sizes" array. It returns
1141 * an array of pointers to these elements, each of which can be
1142 * independently freed, realloc'ed etc. The elements are guaranteed to
1143 * be adjacently allocated (this is not guaranteed to occur with
1144 * multiple callocs or mallocs), which may also improve cache locality
1145 * in some applications.
1146 *
1147 * The "chunks" argument is optional (i.e., may be null). If it is null
1148 * the returned array is itself dynamically allocated and should also
1149 * be freed when it is no longer needed. Otherwise, the chunks array
1150 * must be of at least n_elements in length. It is filled in with the
1151 * pointers to the chunks.
1152 *
1153 * In either case, independent_comalloc returns this pointer array, or
1154 * null if the allocation failed. If n_elements is zero and chunks is
1155 * null, it returns a chunk representing an array with zero elements
1156 * (which should be freed if not wanted).
1157 *
1158 * Each element must be freed when it is no longer needed. This can be
1159 * done all at once using bulk_free.
1160 *
1161 * independent_comalloc differs from independent_calloc in that each
1162 * element may have a different size, and also that it does not
1163 * automatically clear elements.
1164 *
1165 * independent_comalloc can be used to speed up allocation in cases
1166 * where several structs or objects must always be allocated at the
1167 * same time. For example:
1168 *
1169 * struct Head { ... };
1170 * struct Foot { ... };
1171 *
1172 * void send_message(char* msg) {
1173 * int msglen = strlen(msg);
1174 * size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
1175 * void* chunks[3];
1176 * if (independent_comalloc(3, sizes, chunks) == 0)
1177 * die();
1178 * struct Head* head = (struct Head*)(chunks[0]);
1179 * char* body = (char*)(chunks[1]);
1180 * struct Foot* foot = (struct Foot*)(chunks[2]);
1181 * // ...
1182 * }
1183 *
1184 * In general though, independent_comalloc is worth using only for
1185 * larger values of n_elements. For small values, you probably won't
1186 * detect enough difference from series of malloc calls to bother.
1187 *
1188 * Overuse of independent_comalloc can increase overall memory usage,
1189 * since it cannot reuse existing noncontiguous small chunks that
1190 * might be available for some of the elements.
1191 */
1192DLMALLOC_EXPORT void** dlindependent_comalloc(size_t, size_t *, void **);
1193
1194/*
1195 * bulk_free(void* array[], size_t n_elements)
1196 * Frees and clears (sets to null) each non-null pointer in the given
1197 * array. This is likely to be faster than freeing them one-by-one.
1198 * If footers are used, pointers that have been allocated in different
1199 * mspaces are not freed or cleared, and the count of all such pointers
1200 * is returned. For large arrays of pointers with poor locality, it
1201 * may be worthwhile to sort this array before calling bulk_free.
1202 */
1203DLMALLOC_EXPORT size_t dlbulk_free(void **, size_t n_elements);
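/*
 * Usage sketch (illustrative, not from the original sources): freeing a
 * pool built with independent_calloc in a single call.
 *
 *   size_t n = 100;
 *   void** pool = independent_calloc(n, 64, 0); // 100 cleared 64-byte elements
 *   if (pool != 0) {
 *     // ... use pool[0] .. pool[n-1] ...
 *     bulk_free(pool, n);  // frees and nulls every element in the array
 *     free(pool);          // the pointer array itself was also malloced
 *   }
 */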
1204
1205/*
1206 * pvalloc(size_t n);
1207 * Equivalent to valloc(minimum-page-that-holds(n)), that is,
1208 * round up n to nearest pagesize.
1209 */
1210DLMALLOC_EXPORT void* dlpvalloc(size_t);
1211
1212/*
1213 * malloc_trim(size_t pad);
1214 *
1215 * If possible, gives memory back to the system (via negative arguments
1216 * to sbrk) if there is unused memory at the `high' end of the malloc
1217 * pool or in unused MMAP segments. You can call this after freeing
1218 * large blocks of memory to potentially reduce the system-level memory
1219 * requirements of a program. However, it cannot guarantee to reduce
1220 * memory. Under some allocation patterns, some large free blocks of
1221 * memory will be locked between two used chunks, so they cannot be
1222 * given back to the system.
1223 *
1224 * The `pad' argument to malloc_trim represents the amount of free
1225 * trailing space to leave untrimmed. If this argument is zero, only
1226 * the minimum amount of memory to maintain internal data structures
1227 * will be left. Non-zero arguments can be supplied to maintain enough
1228 * trailing space to service future expected allocations without having
1229 * to re-obtain memory from the system.
1230 *
1231 * Malloc_trim returns 1 if it actually released any memory, else 0.
1232 */
1233DLMALLOC_EXPORT int dlmalloc_trim(size_t);
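/*
 * Usage sketch (illustrative): after freeing a large batch of memory,
 * ask for unused top-most space back while keeping about 1MB of slack
 * for upcoming allocations.
 *
 *   // ... free() many large blocks ...
 *   int released = malloc_trim(1024 * 1024);
 *   // released == 1 if any memory was actually returned to the system
 */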
1234
1235/*
1236 * malloc_stats();
1237 * Prints on stderr the amount of space obtained from the system (both
1238 * via sbrk and mmap), the maximum amount (which may be more than
1239 * current if malloc_trim and/or munmap got called), and the current
1240 * number of bytes allocated via malloc (or realloc, etc) but not yet
1241 * freed. Note that this is the number of bytes allocated, not the
1242 * number requested. It will be larger than the number requested
1243 * because of alignment and bookkeeping overhead. Because it includes
1244 * alignment wastage as being in use, this figure may be greater than
1245 * zero even when no user-level chunks are allocated.
1246 *
1247 * The reported current and maximum system memory can be inaccurate if
1248 * a program makes other calls to system memory allocation functions
1249 * (normally sbrk) outside of malloc.
1250 *
1251 * malloc_stats prints only the most commonly interesting statistics.
1252 * More information can be obtained by calling mallinfo.
1253 */
1254DLMALLOC_EXPORT void dlmalloc_stats(void);
1255
1256/*
1257 * malloc_usable_size(void* p);
1258 *
1259 * Returns the number of bytes you can actually use in
1260 * an allocated chunk, which may be more than you requested (although
1261 * often not) due to alignment and minimum size constraints.
1262 * You can use this many bytes without worrying about
1263 * overwriting other allocated objects. This is not a particularly great
1264 * programming practice. malloc_usable_size can be more useful in
1265 * debugging and assertions, for example:
1266 *
1267 * p = malloc(n);
1268 * assert(malloc_usable_size(p) >= 256);
1269 */
1270size_t dlmalloc_usable_size(void *);
1271
1272#endif /* ONLY_MSPACES */
1273
1274#if MSPACES
1275
1276/*
1277 * mspace is an opaque type representing an independent
1278 * region of space that supports mspace_malloc, etc.
1279 */
1280typedef void *mspace;
1281
1282/*
1283 * create_mspace creates and returns a new independent space with the
1284 * given initial capacity, or, if 0, the default granularity size. It
1285 * returns null if there is no system memory available to create the
1286 * space. If argument locked is non-zero, the space uses a separate
1287 * lock to control access. The capacity of the space will grow
1288 * dynamically as needed to service mspace_malloc requests. You can
1289 * control the sizes of incremental increases of this space by
1290 * compiling with a different DEFAULT_GRANULARITY or dynamically
1291 * setting with mallopt(M_GRANULARITY, value).
1292 */
1293DLMALLOC_EXPORT mspace create_mspace(size_t capacity, int locked);
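/*
 * Editor's note -- illustrative sketch, not from the original sources: a
 * private arena created with create_mspace, used through the mspace_* calls
 * declared further below, and torn down with destroy_mspace.
 *
 *   mspace arena = create_mspace(0, 0);   // default capacity, no locking
 *   if (arena != 0) {
 *     void* buf = mspace_malloc(arena, 256);
 *     // ... use buf ...
 *     mspace_free(arena, buf);
 *     destroy_mspace(arena);       // returns the arena's memory to the system
 *   }
 */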
1294
1295/*
1296 * destroy_mspace destroys the given space, and attempts to return all
1297 * of its memory back to the system, returning the total number of
1298 * bytes freed. After destruction, the results of access to all memory
1299 * used by the space become undefined.
1300 */
1301DLMALLOC_EXPORT size_t destroy_mspace(mspace msp);
1302
1303/*
1304 * create_mspace_with_base uses the memory supplied as the initial base
1305 * of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1306 * space is used for bookkeeping, so the capacity must be at least this
1307 * large. (Otherwise 0 is returned.) When this initial space is
1308 * exhausted, additional memory will be obtained from the system.
1309 * Destroying this space will deallocate all additionally allocated
1310 * space (if possible) but not the initial base.
1311 */
1312DLMALLOC_EXPORT mspace create_mspace_with_base(void *base, size_t capacity, int locked);
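/*
 * Editor's note -- illustrative sketch, not from the original sources:
 * carving an mspace out of a caller-supplied buffer. The buffer must also
 * hold the bookkeeping overhead (less than 128*sizeof(size_t) bytes)
 * mentioned above, so it should be comfortably larger than that.
 *
 *   static char backing[1 << 20];   // hypothetical 1 MB region
 *   mspace msp = create_mspace_with_base(backing, sizeof(backing), 0);
 *   if (msp != 0) {
 *     void* p = mspace_malloc(msp, 512);
 *     // ... use p ...
 *     destroy_mspace(msp);   // frees extra segments, but not 'backing' itself
 *   }
 */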
1313
1314/*
1315 * mspace_track_large_chunks controls whether requests for large chunks
1316 * are allocated in their own untracked mmapped regions, separate from
1317 * others in this mspace. By default large chunks are not tracked,
1318 * which reduces fragmentation. However, such chunks are not
1319 * necessarily released to the system upon destroy_mspace. Enabling
1320 * tracking by setting to true may increase fragmentation, but avoids
1321 * leakage when relying on destroy_mspace to release all memory
1322 * allocated using this space. The function returns the previous
1323 * setting.
1324 */
1325DLMALLOC_EXPORT int mspace_track_large_chunks(mspace msp, int enable);
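/*
 * Editor's note -- illustrative sketch, not from the original sources:
 * enabling large-chunk tracking so that destroy_mspace can reclaim
 * everything, at the possible cost of extra fragmentation.
 *
 *   mspace msp = create_mspace(0, 1);            // default capacity, locked
 *   if (msp != 0) {
 *     (void) mspace_track_large_chunks(msp, 1);
 *     void* big = mspace_malloc(msp, 4 * 1024 * 1024);
 *     // ... use big ...
 *     destroy_mspace(msp);     // now also releases the untracked large chunk
 *   }
 */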
1326
1327
1328/*
1329 * mspace_malloc behaves as malloc, but operates within
1330 * the given space.
1331 */
1332DLMALLOC_EXPORT void* mspace_malloc(mspace msp, size_t bytes);
1333
1334/*
1335 * mspace_free behaves as free, but operates within
1336 * the given space.
1337 *
1338 * If compiled with FOOTERS==1, mspace_free is not actually needed.
1339 * free may be called instead of mspace_free because freed chunks from
1340 * any space are handled by their originating spaces.
1341 */
1342DLMALLOC_EXPORT void mspace_free(mspace msp, void *mem);
1343
1344/*
1345 * mspace_realloc behaves as realloc, but operates within
1346 * the given space.
1347 *
1348 * If compiled with FOOTERS==1, mspace_realloc is not actually
1349 * needed. realloc may be called instead of mspace_realloc because
1350 * realloced chunks from any space are handled by their originating
1351 * spaces.
1352 */
1353DLMALLOC_EXPORT void* mspace_realloc(mspace msp, void *mem, size_t newsize);
1354
1355/*
1356 * mspace_calloc behaves as calloc, but operates within
1357 * the given space.
1358 */
1359DLMALLOC_EXPORT void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1360
1361/*
1362 * mspace_memalign behaves as memalign, but operates within
1363 * the given space.
1364 */
1365DLMALLOC_EXPORT void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1366
1367/*
1368 * mspace_independent_calloc behaves as independent_calloc, but
1369 * operates within the given space.
1370 */
1371DLMALLOC_EXPORT void** mspace_independent_calloc(mspace msp, size_t n_elements,
1372 size_t elem_size, void *chunks[]);
1373
1374/*
1375 * mspace_independent_comalloc behaves as independent_comalloc, but
1376 * operates within the given space.
1377 */
1378DLMALLOC_EXPORT void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1379 size_t sizes[], void *chunks[]);
1380
1381/*
1382 * mspace_footprint() returns the number of bytes obtained from the
1383 * system for this space.
1384 */
1385DLMALLOC_EXPORT size_t mspace_footprint(mspace msp);
1386
1387/*
1388 * mspace_max_footprint() returns the peak number of bytes obtained from the
1389 * system for this space.
1390 */
1391DLMALLOC_EXPORT size_t mspace_max_footprint(mspace msp);
1392
1393
1394#if !NO_MALLINFO
1395
1396/*
1397 * mspace_mallinfo behaves as mallinfo, but reports properties of
1398 * the given space.
1399 */
1400DLMALLOC_EXPORT struct mallinfo mspace_mallinfo(mspace msp);
1401#endif /* NO_MALLINFO */
1402
1403/*
1404 * mspace_usable_size(const void* p) behaves the same as malloc_usable_size.
1405 */
1406DLMALLOC_EXPORT size_t mspace_usable_size(const void *mem);
1407
1408/*
1409 * mspace_malloc_stats behaves as malloc_stats, but reports
1410 * properties of the given space.
1411 */
1412DLMALLOC_EXPORT void mspace_malloc_stats(mspace msp);
1413
1414/*
1415 * mspace_trim behaves as malloc_trim, but
1416 * operates within the given space.
1417 */
1418DLMALLOC_EXPORT int mspace_trim(mspace msp, size_t pad);
1419
1420/*
1421 * An alias for mallopt.
1422 */
1423DLMALLOC_EXPORT int mspace_mallopt(int, int);
1424
1425#endif /* MSPACES */
1426
1427#ifdef __cplusplus
1428}/* end of extern "C" */
1429#endif /* __cplusplus */
1430
1431/*
1432 * ========================================================================
1433 * To make a fully customizable malloc.h header file, cut everything
1434 * above this line, put into file malloc.h, edit to suit, and #include it
1435 * on the next line, as well as in programs that use this malloc.
1436 * ========================================================================
1437 */
1438
1439/* #include "malloc.h" */
1440
1441/*------------------------------ internal #includes ---------------------- */
1442
1443#ifdef _MSC_VER
1444#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1445#endif /* _MSC_VER */
1446#if !NO_MALLOC_STATS
1447#include <stdio.h> /* for printing in malloc_stats */
1448#endif /* NO_MALLOC_STATS */
1449#ifndef LACKS_ERRNO_H
1450#include <errno.h> /* for MALLOC_FAILURE_ACTION */
1451#endif /* LACKS_ERRNO_H */
1452#ifdef DEBUG
1453#if ABORT_ON_ASSERT_FAILURE
1454#undef assert
1455#define assert(x) if (!(x)) ABORT
1456#else /* ABORT_ON_ASSERT_FAILURE */
1457#include <assert.h>
1458#endif /* ABORT_ON_ASSERT_FAILURE */
1459#else /* DEBUG */
1460#ifndef assert
1461#define assert(x)
1462#endif
1463#define DEBUG 0
1464#endif /* DEBUG */
1465#if !defined(WIN32) && !defined(LACKS_TIME_H)
1466#include <time.h> /* for magic initialization */
1467#endif /* WIN32 */
1468#ifndef LACKS_STDLIB_H
1469#include <stdlib.h> /* for abort() */
1470#endif /* LACKS_STDLIB_H */
1471#ifndef LACKS_STRING_H
1472#include <string.h> /* for memset etc */
1473#endif /* LACKS_STRING_H */
1474#if USE_BUILTIN_FFS
1475#ifndef LACKS_STRINGS_H
1476#include <strings.h> /* for ffs */
1477#endif /* LACKS_STRINGS_H */
1478#endif /* USE_BUILTIN_FFS */
1479#if HAVE_MMAP
1480#ifndef LACKS_SYS_MMAN_H
1481/* On some versions of linux, mremap decl in mman.h needs __USE_GNU set */
1482#if (defined(linux) && !defined(__USE_GNU))
1483#define __USE_GNU 1
1484#include <sys/mman.h> /* for mmap */
1485#undef __USE_GNU
1486#else
1487#include <sys/mman.h> /* for mmap */
1488#endif /* linux */
1489#endif /* LACKS_SYS_MMAN_H */
1490#ifndef LACKS_FCNTL_H
1491#include <fcntl.h>
1492#endif /* LACKS_FCNTL_H */
1493#endif /* HAVE_MMAP */
1494#ifndef LACKS_UNISTD_H
1495#include <unistd.h> /* for sbrk, sysconf */
1496#else /* LACKS_UNISTD_H */
1497#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1498extern void* sbrk(ptrdiff_t);
1499#endif /* FreeBSD etc */
1500#endif /* LACKS_UNISTD_H */
1501
1502/* Declarations for locking */
1503#if USE_LOCKS
1504#ifndef WIN32
1505#if defined(__SVR4) && defined(__sun) /* solaris */
1506#include <thread.h>
1507#elif !defined(LACKS_SCHED_H)
1508#include <sched.h>
1509#endif /* solaris or LACKS_SCHED_H */
1510#if (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0) || !USE_SPIN_LOCKS
1511#include <pthread.h>
1512#endif /* USE_RECURSIVE_LOCKS ... */
1513#elif defined(_MSC_VER)
1514#ifndef _M_AMD64
1515/* These are already defined on AMD64 builds */
1516#ifdef __cplusplus
1517extern "C" {
1518#endif /* __cplusplus */
1519LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
1520LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
1521#ifdef __cplusplus
1522}
1523#endif /* __cplusplus */
1524#endif /* _M_AMD64 */
1525#pragma intrinsic (_InterlockedCompareExchange)
1526#pragma intrinsic (_InterlockedExchange)
1527#define interlockedcompareexchange _InterlockedCompareExchange
1528#define interlockedexchange _InterlockedExchange
1529#elif defined(WIN32) && defined(__GNUC__)
1530#define interlockedcompareexchange(a, b, c) __sync_val_compare_and_swap(a, c, b)
1531#define interlockedexchange __sync_lock_test_and_set
1532#endif /* Win32 */
1533#else /* USE_LOCKS */
1534#endif /* USE_LOCKS */
1535
1536#ifndef LOCK_AT_FORK
1537#define LOCK_AT_FORK 0
1538#endif
1539
1540/* Declarations for bit scanning on win32 */
1541#if defined(_MSC_VER) && _MSC_VER >= 1300
1542#ifndef BitScanForward /* Try to avoid pulling in WinNT.h */
1543#ifdef __cplusplus
1544extern "C" {
1545#endif /* __cplusplus */
1546unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
1547unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
1548#ifdef __cplusplus
1549}
1550#endif /* __cplusplus */
1551
1552#define BitScanForward _BitScanForward
1553#define BitScanReverse _BitScanReverse
1554#pragma intrinsic(_BitScanForward)
1555#pragma intrinsic(_BitScanReverse)
1556#endif /* BitScanForward */
1557#endif /* defined(_MSC_VER) && _MSC_VER>=1300 */
1558
1559#ifndef WIN32
1560#ifndef malloc_getpagesize
1561#ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
1562#ifndef _SC_PAGE_SIZE
1563#define _SC_PAGE_SIZE _SC_PAGESIZE
1564#endif
1565#endif
1566#ifdef _SC_PAGE_SIZE
1567#define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1568#else
1569#if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1570extern size_t getpagesize();
1571#define malloc_getpagesize getpagesize()
1572#else
1573#ifdef WIN32 /* use supplied emulation of getpagesize */
1574#define malloc_getpagesize getpagesize()
1575#else
1576#ifndef LACKS_SYS_PARAM_H
1577#include <sys/param.h>
1578#endif
1579#ifdef EXEC_PAGESIZE
1580#define malloc_getpagesize EXEC_PAGESIZE
1581#else
1582#ifdef NBPG
1583#ifndef CLSIZE
1584#define malloc_getpagesize NBPG
1585#else
1586#define malloc_getpagesize (NBPG * CLSIZE)
1587#endif
1588#else
1589#ifdef NBPC
1590#define malloc_getpagesize NBPC
1591#else
1592#ifdef PAGESIZE
1593#define malloc_getpagesize PAGESIZE
1594#else /* just guess */
1595#define malloc_getpagesize ((size_t) 4096U)
1596#endif
1597#endif
1598#endif /* ifdef NBPG */
1599#endif /* ifdef EXEC_PAGESIZE */
1600#endif /* ifdef WIN32 */
1601#endif /* if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE) */
1602#endif /* ifdef _SC_PAGE_SIZE */
1603#endif /* ifndef malloc_getpagesize */
1604#endif /* ifndef WIN32 */
1605
1606/* ------------------- size_t and alignment properties -------------------- */
1607
1608/* The byte and bit size of a size_t */
1609#define SIZE_T_SIZE (sizeof(size_t))
1610#define SIZE_T_BITSIZE (sizeof(size_t) << 3)
1611
1612/* Some constants coerced to size_t */
1613/* Annoying but necessary to avoid errors on some platforms */
1614#define SIZE_T_ZERO ((size_t) 0)
1615#define SIZE_T_ONE ((size_t) 1)
1616#define SIZE_T_TWO ((size_t) 2)
1617#define SIZE_T_FOUR ((size_t) 4)
1618#define TWO_SIZE_T_SIZES (SIZE_T_SIZE << 1)
1619#define FOUR_SIZE_T_SIZES (SIZE_T_SIZE << 2)
1620#define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES + TWO_SIZE_T_SIZES)
1621#define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
1622
1623/* The bit mask value corresponding to MALLOC_ALIGNMENT */
1624#define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
1625
1626/* True if address a has acceptable alignment */
1627#define is_aligned(A) (((size_t) ((A)) & (CHUNK_ALIGN_MASK)) == 0)
1628
1629/* the number of bytes to offset an address to align it */
1630#define align_offset(A) \
1631 ((((size_t) (A) &CHUNK_ALIGN_MASK) == 0) ? 0 : \
1632 ((MALLOC_ALIGNMENT - ((size_t) (A) &CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
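/*
 * Editor's note -- worked example, assuming the default MALLOC_ALIGNMENT of
 * 2*sizeof(void*) == 16 on a 64-bit build, so CHUNK_ALIGN_MASK == 0xF: for
 * an address A whose low byte is 0x28, (A & CHUNK_ALIGN_MASK) is 0x8, and
 * align_offset(A) evaluates to (16 - 0x8) & 0xF == 8, the number of bytes
 * needed to reach the next 16-byte boundary; for an already aligned address
 * it is 0.
 */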
1633
1634/* -------------------------- MMAP preliminaries ------------------------- */
1635
1636/*
1637 * If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1638 * checks to fail so compiler optimizer can delete code rather than
1639 * using so many "#if"s.
1640 */
1641
1642
1643/* MORECORE and MMAP must return MFAIL on failure */
1644#define MFAIL ((void *) (MAX_SIZE_T))
1645#define CMFAIL ((char *) (MFAIL)) /* defined for convenience */
1646
1647#if HAVE_MMAP
1648
1649#ifndef WIN32
1650#define MUNMAP_DEFAULT(a, s) munmap((a), (s))
1651#define MMAP_PROT (PROT_READ | PROT_WRITE)
1652#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1653#define MAP_ANONYMOUS MAP_ANON
1654#endif /* MAP_ANON */
1655#ifdef MAP_ANONYMOUS
1656#define MMAP_FLAGS (MAP_PRIVATE | MAP_ANONYMOUS)
1657#define MMAP_DEFAULT(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1658#else /* MAP_ANONYMOUS */
1659
1660/*
1661 * Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1662 * is unlikely to be needed, but is supplied just in case.
1663 */
1664#define MMAP_FLAGS (MAP_PRIVATE)
1665static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1666#define MMAP_DEFAULT(s) \
1667 ((dev_zero_fd < 0) ? \
1668 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1669 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1670 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1671#endif /* MAP_ANONYMOUS */
1672
1673#define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)
1674
1675#else /* WIN32 */
1676
1677/* Win32 MMAP via VirtualAlloc */
1678static FORCEINLINE void* win32mmap(size_t size)
1679{
1680 void *ptr = VirtualAlloc(0, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
1681
1682 return (ptr != 0) ? ptr : MFAIL;
1683}
1684
1685/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1686static FORCEINLINE void* win32direct_mmap(size_t size)
1687{
1688 void *ptr = VirtualAlloc(0, size, MEM_RESERVE | MEM_COMMIT | MEM_TOP_DOWN,
1689 PAGE_READWRITE);
1690
1691 return (ptr != 0) ? ptr : MFAIL;
1692}
1693
1694/* This function supports releasing coalesced segments */
1695static FORCEINLINE int win32munmap(void *ptr, size_t size)
1696{
1697 MEMORY_BASIC_INFORMATION minfo;
1698 char *cptr = (char *) ptr;
1699
1700 while (size) {
1701 if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1702 return -1;
1703
1704 if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1705 minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1706 return -1;
1707
1708 if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1709 return -1;
1710
1711 cptr += minfo.RegionSize;
1712 size -= minfo.RegionSize;
1713 }
1714 return 0;
1715}
1716
1717#define MMAP_DEFAULT(s) win32mmap(s)
1718#define MUNMAP_DEFAULT(a, s) win32munmap((a), (s))
1719#define DIRECT_MMAP_DEFAULT(s) win32direct_mmap(s)
1720#endif /* WIN32 */
1721#endif /* HAVE_MMAP */
1722
1723#if HAVE_MREMAP
1724#ifndef WIN32
1725#define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1726#endif /* WIN32 */
1727#endif /* HAVE_MREMAP */
1728
1732#if HAVE_MORECORE
1733#ifdef MORECORE
1734#define CALL_MORECORE(S) MORECORE(S)
1735#else /* MORECORE */
1736#define CALL_MORECORE(S) MORECORE_DEFAULT(S)
1737#endif /* MORECORE */
1738#else /* HAVE_MORECORE */
1739#define CALL_MORECORE(S) MFAIL
1740#endif /* HAVE_MORECORE */
1741
1742#include "../alloc.h"
1743
1744#define MMAP(s) arax_mmap(s)
1745#define DIRECT_MMAP(s) MMAP(s)
1746
1747#define MUNMAP(a, s) arax_ummap(a, s)
1748
1752#if HAVE_MMAP
1753#define USE_MMAP_BIT (SIZE_T_ONE)
1754
1755#ifdef MMAP
1756#define CALL_MMAP(s) MMAP(s)
1757#else /* MMAP */
1758#define CALL_MMAP(s) MMAP_DEFAULT(s)
1759#endif /* MMAP */
1760#ifdef MUNMAP
1761#define CALL_MUNMAP(a, s) arax_ummap((a), (s))
1762#else /* MUNMAP */
1763#define CALL_MUNMAP(a, s) MUNMAP_DEFAULT((a), (s))
1764#endif /* MUNMAP */
1765#ifdef DIRECT_MMAP
1766#define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1767#else /* DIRECT_MMAP */
1768#define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
1769#endif /* DIRECT_MMAP */
1770#else /* HAVE_MMAP */
1771#define USE_MMAP_BIT (SIZE_T_ZERO)
1772
1773#define MMAP(s) MFAIL
1774#define MUNMAP(a, s) (-1)
1775#define DIRECT_MMAP(s) MFAIL
1776#define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1777#define CALL_MMAP(s) MMAP(s)
1778#define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1779#endif /* HAVE_MMAP */
1780
1784#if HAVE_MMAP && HAVE_MREMAP
1785#ifdef MREMAP
1786#define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
1787#else /* MREMAP */
1788#define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
1789#endif /* MREMAP */
1790#else /* HAVE_MMAP && HAVE_MREMAP */
1791#define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1792#endif /* HAVE_MMAP && HAVE_MREMAP */
1793
1794/* mstate bit set if contiguous morecore disabled or failed */
1795#define USE_NONCONTIGUOUS_BIT (4U)
1796
1797/* segment bit set in create_mspace_with_base */
1798#define EXTERN_BIT (8U)
1799
1800
1801/* --------------------------- Lock preliminaries ------------------------ */
1802
1803/*
1804 * When locks are defined, there is one global lock, plus
1805 * one per-mspace lock.
1806 *
1807 * The global lock ensures that mparams.magic and other unique
1808 * mparams values are initialized only once. It also protects
1809 * sequences of calls to MORECORE. In many cases sys_alloc requires
1810 * two calls that should not be interleaved with calls by other
1811 * threads. This does not protect against direct calls to MORECORE
1812 * by other threads not using this lock, so there is still code to
1813 * cope as best we can with interference.
1814 *
1815 * Per-mspace locks surround calls to malloc, free, etc.
1816 * By default, locks are simple non-reentrant mutexes.
1817 *
1818 * Because lock-protected regions generally have bounded times, it is
1819 * OK to use the supplied simple spinlocks. Spinlocks are likely to
1820 * improve performance for lightly contended applications, but worsen
1821 * performance under heavy contention.
1822 *
1823 * If USE_LOCKS is > 1, the definitions of lock routines here are
1824 * bypassed, in which case you will need to define the type MLOCK_T,
1825 * and at least INITIAL_LOCK, DESTROY_LOCK, ACQUIRE_LOCK, RELEASE_LOCK
1826 * and TRY_LOCK. You must also declare a
1827 * static MLOCK_T malloc_global_mutex = { initialization values };.
1828 *
1829 */
1830
1831#if !USE_LOCKS
1832#define USE_LOCK_BIT (0U)
1833#define INITIAL_LOCK(l) (0)
1834#define DESTROY_LOCK(l) (0)
1835#define ACQUIRE_MALLOC_GLOBAL_LOCK()
1836#define RELEASE_MALLOC_GLOBAL_LOCK()
1837
1838#else
1839#if USE_LOCKS > 1
1840/* ----------------------- User-defined locks ------------------------ */
1841/* Define your own lock implementation here */
1842/* #define INITIAL_LOCK(lk) ... */
1843/* #define DESTROY_LOCK(lk) ... */
1844/* #define ACQUIRE_LOCK(lk) ... */
1845/* #define RELEASE_LOCK(lk) ... */
1846/* #define TRY_LOCK(lk) ... */
1847/* static MLOCK_T malloc_global_mutex = ... */
1848
1849#elif USE_SPIN_LOCKS
1850
1851/* First, define CAS_LOCK and CLEAR_LOCK on ints */
1852/* Note CAS_LOCK defined to return 0 on success */
1853
1854#if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
1855#define CAS_LOCK(sl) __sync_lock_test_and_set(sl, 1)
1856#define CLEAR_LOCK(sl) __sync_lock_release(sl)
1857
1858#elif (defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)))
1859/* Custom spin locks for older gcc on x86 */
1860static FORCEINLINE int x86_cas_lock(int *sl)
1861{
1862 int ret;
1863 int val = 1;
1864 int cmp = 0;
1865
1866 __asm__ __volatile__ ("lock; cmpxchgl %1, %2"
1867 : "=a" (ret)
1868 : "r" (val), "m" (*(sl)), "0" (cmp)
1869 : "memory", "cc");
1870 return ret;
1871}
1872
1873static FORCEINLINE void x86_clear_lock(int *sl)
1874{
1875 assert(*sl != 0);
1876 int prev = 0;
1877 int ret;
1878
1879 __asm__ __volatile__ ("lock; xchgl %0, %1"
1880 : "=r" (ret)
1881 : "m" (*(sl)), "0" (prev)
1882 : "memory");
1883}
1884
1885#define CAS_LOCK(sl) x86_cas_lock(sl)
1886#define CLEAR_LOCK(sl) x86_clear_lock(sl)
1887
1888#else /* Win32 MSC */
1889#define CAS_LOCK(sl) interlockedexchange(sl, (LONG) 1)
1890#define CLEAR_LOCK(sl) interlockedexchange(sl, (LONG) 0)
1891
1892#endif /* ... gcc spin locks ... */
1893
1894/* How to yield for a spin lock */
1895#define SPINS_PER_YIELD 63
1896#if defined(_MSC_VER)
1897#define SLEEP_EX_DURATION 50 /* delay for yield/sleep */
1898#define SPIN_LOCK_YIELD SleepEx(SLEEP_EX_DURATION, FALSE)
1899#elif defined(__SVR4) && defined(__sun) /* solaris */
1900#define SPIN_LOCK_YIELD thr_yield();
1901#elif !defined(LACKS_SCHED_H)
1902#define SPIN_LOCK_YIELD sched_yield();
1903#else
1904#define SPIN_LOCK_YIELD
1905#endif /* ... yield ... */
1906
1907#if !defined(USE_RECURSIVE_LOCKS) || USE_RECURSIVE_LOCKS == 0
1908/* Plain spin locks use single word (embedded in malloc_states) */
1909static int spin_acquire_lock(int *sl)
1910{
1911 int spins = 0;
1912
1913 while (*(volatile int *) sl != 0 || CAS_LOCK(sl)) {
1914 if ((++spins & SPINS_PER_YIELD) == 0) {
1915 SPIN_LOCK_YIELD;
1916 }
1917 }
1918 return 0;
1919}
1920
1921#define MLOCK_T int
1922#define TRY_LOCK(sl) !CAS_LOCK(sl)
1923#define RELEASE_LOCK(sl) CLEAR_LOCK(sl)
1924#define ACQUIRE_LOCK(sl) (CAS_LOCK(sl) ? spin_acquire_lock(sl) : 0)
1925#define INITIAL_LOCK(sl) (*sl = 0)
1926#define DESTROY_LOCK(sl) (0)
1927static MLOCK_T malloc_global_mutex = 0;
1928
1929#else /* USE_RECURSIVE_LOCKS */
1930/* types for lock owners */
1931#ifdef WIN32
1932#define THREAD_ID_T DWORD
1933#define CURRENT_THREAD GetCurrentThreadId()
1934#define EQ_OWNER(X, Y) ((X) == (Y))
1935#else
1936
1937/*
1938 * Note: the following assume that pthread_t is a type that can be
1939 * initialized to (casted) zero. If this is not the case, you will need to
1940 * somehow redefine these or not use spin locks.
1941 */
1942#define THREAD_ID_T pthread_t
1943#define CURRENT_THREAD pthread_self()
1944#define EQ_OWNER(X, Y) pthread_equal(X, Y)
1945#endif
1946
1947struct malloc_recursive_lock
1948{
1949 int sl;
1950 unsigned int c;
1951 THREAD_ID_T threadid;
1952};
1953
1954#define MLOCK_T struct malloc_recursive_lock
1955static MLOCK_T malloc_global_mutex = { 0, 0, (THREAD_ID_T) 0 };
1956
1957static FORCEINLINE void recursive_release_lock(MLOCK_T *lk)
1958{
1959 assert(lk->sl != 0);
1960 if (--lk->c == 0) {
1961 CLEAR_LOCK(&lk->sl);
1962 }
1963}
1964
1965static FORCEINLINE int recursive_acquire_lock(MLOCK_T *lk)
1966{
1967 THREAD_ID_T mythreadid = CURRENT_THREAD;
1968 int spins = 0;
1969
1970 for (;;) {
1971 if (*((volatile int *) (&lk->sl)) == 0) {
1972 if (!CAS_LOCK(&lk->sl)) {
1973 lk->threadid = mythreadid;
1974 lk->c = 1;
1975 return 0;
1976 }
1977 } else if (EQ_OWNER(lk->threadid, mythreadid)) {
1978 ++lk->c;
1979 return 0;
1980 }
1981 if ((++spins & SPINS_PER_YIELD) == 0) {
1982 SPIN_LOCK_YIELD;
1983 }
1984 }
1985}
1986
1987static FORCEINLINE int recursive_try_lock(MLOCK_T *lk)
1988{
1989 THREAD_ID_T mythreadid = CURRENT_THREAD;
1990
1991 if (*((volatile int *) (&lk->sl)) == 0) {
1992 if (!CAS_LOCK(&lk->sl)) {
1993 lk->threadid = mythreadid;
1994 lk->c = 1;
1995 return 1;
1996 }
1997 } else if (EQ_OWNER(lk->threadid, mythreadid)) {
1998 ++lk->c;
1999 return 1;
2000 }
2001 return 0;
2002}
2003
2004#define RELEASE_LOCK(lk) recursive_release_lock(lk)
2005#define TRY_LOCK(lk) recursive_try_lock(lk)
2006#define ACQUIRE_LOCK(lk) recursive_acquire_lock(lk)
2007#define INITIAL_LOCK(lk) ((lk)->threadid = (THREAD_ID_T) 0, (lk)->sl = 0, (lk)->c = 0)
2008#define DESTROY_LOCK(lk) (0)
2009#endif /* USE_RECURSIVE_LOCKS */
2010
2011#elif defined(WIN32) /* Win32 critical sections */
2012#define MLOCK_T CRITICAL_SECTION
2013#define ACQUIRE_LOCK(lk) (EnterCriticalSection(lk), 0)
2014#define RELEASE_LOCK(lk) LeaveCriticalSection(lk)
2015#define TRY_LOCK(lk) TryEnterCriticalSection(lk)
2016#define INITIAL_LOCK(lk) (!InitializeCriticalSectionAndSpinCount((lk), 0x80000000 | 4000))
2017#define DESTROY_LOCK(lk) (DeleteCriticalSection(lk), 0)
2018#define NEED_GLOBAL_LOCK_INIT
2019
2020static MLOCK_T malloc_global_mutex;
2021static volatile LONG malloc_global_mutex_status;
2022
2023/* Use spin loop to initialize global lock */
2024static void init_malloc_global_mutex()
2025{
2026 for (;;) {
2027 long stat = malloc_global_mutex_status;
2028 if (stat > 0)
2029 return;
2030
2031 /* transition to < 0 while initializing, then to > 0 */
2032 if (stat == 0 &&
2033 interlockedcompareexchange(&malloc_global_mutex_status, (LONG) -1, (LONG) 0) == 0)
2034 {
2035 InitializeCriticalSection(&malloc_global_mutex);
2036 interlockedexchange(&malloc_global_mutex_status, (LONG) 1);
2037 return;
2038 }
2039 SleepEx(0, FALSE);
2040 }
2041}
2042
2043#else /* pthreads-based locks */
2044#define MLOCK_T pthread_mutex_t
2045#define ACQUIRE_LOCK(lk) pthread_mutex_lock(lk)
2046#define RELEASE_LOCK(lk) pthread_mutex_unlock(lk)
2047#define TRY_LOCK(lk) (!pthread_mutex_trylock(lk))
2048#define INITIAL_LOCK(lk) pthread_init_lock(lk)
2049#define DESTROY_LOCK(lk) pthread_mutex_destroy(lk)
2050
2051#if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0 && defined(linux) && !defined(PTHREAD_MUTEX_RECURSIVE)
2052/* Cope with old-style linux recursive lock initialization by adding */
2053/* skipped internal declaration from pthread.h */
2054extern int pthread_mutexattr_setkind_np __P((pthread_mutexattr_t *__attr,
2055 int __kind));
2056#define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP
2057#define pthread_mutexattr_settype(x, y) pthread_mutexattr_setkind_np(x, y)
2058#endif /* USE_RECURSIVE_LOCKS ... */
2059
2060static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
2061
2062static int pthread_init_lock(MLOCK_T *lk)
2063{
2064 pthread_mutexattr_t attr;
2065
2066 if (pthread_mutexattr_init(&attr)) return 1;
2067
2068 #if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0
2069 if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
2070
2071 #endif
2072 if (pthread_mutex_init(lk, &attr)) return 1;
2073
2074 if (pthread_mutexattr_destroy(&attr)) return 1;
2075
2076 return 0;
2077}
2078
2079#endif /* ... lock types ... */
2080
2081/* Common code for all lock types */
2082#define USE_LOCK_BIT (2U)
2083
2084#ifndef ACQUIRE_MALLOC_GLOBAL_LOCK
2085#define ACQUIRE_MALLOC_GLOBAL_LOCK() ACQUIRE_LOCK(&malloc_global_mutex);
2086#endif
2087
2088#ifndef RELEASE_MALLOC_GLOBAL_LOCK
2089#define RELEASE_MALLOC_GLOBAL_LOCK() RELEASE_LOCK(&malloc_global_mutex);
2090#endif
2091
2092#endif /* USE_LOCKS */
2093
2094/* ----------------------- Chunk representations ------------------------ */
2095
2096/*
2097 * (The following includes lightly edited explanations by Colin Plumb.)
2098 *
2099 * The malloc_chunk declaration below is misleading (but accurate and
2100 * necessary). It declares a "view" into memory allowing access to
2101 * necessary fields at known offsets from a given base.
2102 *
2103 * Chunks of memory are maintained using a `boundary tag' method as
2104 * originally described by Knuth. (See the paper by Paul Wilson
2105 * ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
2106 * techniques.) Sizes of free chunks are stored both in the front of
2107 * each chunk and at the end. This makes consolidating fragmented
2108 * chunks into bigger chunks fast. The head fields also hold bits
2109 * representing whether chunks are free or in use.
2110 *
2111 * Here are some pictures to make it clearer. They are "exploded" to
2112 * show that the state of a chunk can be thought of as extending from
2113 * the high 31 bits of the head field of its header through the
2114 * prev_foot and PINUSE_BIT bit of the following chunk header.
2115 *
2116 * A chunk that's in use looks like:
2117 *
2118 * chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2119 | Size of previous chunk (if P = 0) |
2120 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2121 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2122 | Size of this chunk 1| +-+
2123 | mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2124 | |
2125 +- -+
2126 | |
2127 +- -+
2128 | :
2129 +- size - sizeof(size_t) available payload bytes -+
2130 | : |
2131 | chunk-> +- -+
2132 | |
2133 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2134 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
2135 | Size of next chunk (may or may not be in use) | +-+
2136 | mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2137 |
2138 | And if it's free, it looks like this:
2139 |
2140 | chunk-> +- -+
2141 | User payload (must be in use, or we would have merged!) |
2142 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2143 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2144 | Size of this chunk 0| +-+
2145 | mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2146 | Next pointer |
2147 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2148 | Prev pointer |
2149 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2150 | :
2151 +- size - sizeof(struct chunk) unused bytes -+
2152 | : |
2153 | chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2154 | Size of this chunk |
2155 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2156 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
2157 | Size of next chunk (must be in use, or we would have merged)| +-+
2158 | mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2159 | :
2160 +- User payload -+
2161 | : |
2162 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2163 |0|
2164 +-+
2165 | Note that since we always merge adjacent free chunks, the chunks
2166 | adjacent to a free chunk must be in use.
2167 |
2168 | Given a pointer to a chunk (which can be derived trivially from the
2169 | payload pointer) we can, in O(1) time, find out whether the adjacent
2170 | chunks are free, and if so, unlink them from the lists that they
2171 | are on and merge them with the current chunk.
2172 |
2173 | Chunks always begin on even word boundaries, so the mem portion
2174 | (which is returned to the user) is also on an even word boundary, and
2175 | thus at least double-word aligned.
2176 |
2177 | The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
2178 | chunk size (which is always a multiple of two words), is an in-use
2179 | bit for the *previous* chunk. If that bit is *clear*, then the
2180 | word before the current chunk size contains the previous chunk
2181 | size, and can be used to find the front of the previous chunk.
2182 | The very first chunk allocated always has this bit set, preventing
2183 | access to non-existent (or non-owned) memory. If pinuse is set for
2184 | any given chunk, then you CANNOT determine the size of the
2185 | previous chunk, and might even get a memory addressing fault when
2186 | trying to do so.
2187 |
2188 | The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
2189 | the chunk size redundantly records whether the current chunk is
2190 | inuse (unless the chunk is mmapped). This redundancy enables usage
2191 | checks within free and realloc, and reduces indirection when freeing
2192 | and consolidating chunks.
2193 |
2194 | Each freshly allocated chunk must have both cinuse and pinuse set.
2195 | That is, each allocated chunk borders either a previously allocated
2196 | and still in-use chunk, or the base of its memory arena. This is
2197 | ensured by making all allocations from the `lowest' part of any
2198 | found chunk. Further, no free chunk physically borders another one,
2199 | so each free chunk is known to be preceded and followed by either
2200 | inuse chunks or the ends of memory.
2201 |
2202 | Note that the `foot' of the current chunk is actually represented
2203 | as the prev_foot of the NEXT chunk. This makes it easier to
2204 | deal with alignments etc but can be very confusing when trying
2205 | to extend or adapt this code.
2206 |
2207 | The exceptions to all this are
2208 |
2209 | 1. The special chunk `top' is the top-most available chunk (i.e.,
2210 | the one bordering the end of available memory). It is treated
2211 | specially. Top is never included in any bin, is used only if
2212 | no other chunk is available, and is released back to the
2213 | system if it is very large (see M_TRIM_THRESHOLD). In effect,
2214 | the top chunk is treated as larger (and thus less well
2215 | fitting) than any other available chunk. The top chunk
2216 | doesn't update its trailing size field since there is no next
2217 | contiguous chunk that would have to index off it. However,
2218 | space is still allocated for it (TOP_FOOT_SIZE) to enable
2219 | separation or merging when space is extended.
2220 |
2221 | 2. Chunks allocated via mmap have both cinuse and pinuse bits
2222 | cleared in their head fields. Because they are allocated
2223 | one-by-one, each must carry its own prev_foot field, which is
2224 | also used to hold the offset this chunk has within its mmapped
2225 | region, which is needed to preserve alignment. Each mmapped
2226 | chunk is trailed by the first two fields of a fake next-chunk
2227 | for sake of usage checks.
2228 |
2229 */
2230
2231struct malloc_chunk
2232{
2233 size_t prev_foot; /* Size of previous chunk (if free). */
2234 size_t head; /* Size and inuse bits. */
2235 struct malloc_chunk *fd; /* double links -- used only if free. */
2236 struct malloc_chunk *bk;
2237};
2238
2239typedef struct malloc_chunk mchunk;
2240typedef struct malloc_chunk *mchunkptr;
2241typedef struct malloc_chunk *sbinptr; /* The type of bins of chunks */
2242typedef unsigned int bindex_t; /* Described below */
2243typedef unsigned int binmap_t; /* Described below */
2244typedef unsigned int flag_t; /* The type of various bit flag sets */
2245
2246/* ------------------- Chunks sizes and alignments ----------------------- */
2247
2248#define MCHUNK_SIZE (sizeof(mchunk))
2249
2250#if FOOTERS
2251#define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2252#else /* FOOTERS */
2253#define CHUNK_OVERHEAD (SIZE_T_SIZE)
2254#endif /* FOOTERS */
2255
2256/* MMapped chunks need a second word of overhead ... */
2257#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2258/* ... and additional padding for fake next-chunk at foot */
2259#define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
2260
2261/* The smallest size we can malloc is an aligned minimal chunk */
2262#define MIN_CHUNK_SIZE \
2263 ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2264
2265/* conversion from malloc headers to user pointers, and back */
2266#define chunk2mem(p) ((void *) ((char *) (p) + TWO_SIZE_T_SIZES))
2267#define mem2chunk(mem) ((mchunkptr) ((char *) (mem) - TWO_SIZE_T_SIZES))
2268/* chunk associated with aligned address A */
2269#define align_as_chunk(A) (mchunkptr) ((A) + align_offset(chunk2mem(A)))
2270
2271/* Bounds on request (not chunk) sizes. */
2272#define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
2273#define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
2274
2275/* pad request bytes into a usable size */
2276#define pad_request(req) \
2277 (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2278
2279/* pad request, checking for minimum (but not maximum) */
2280#define request2size(req) \
2281 (((req) < MIN_REQUEST) ? MIN_CHUNK_SIZE : pad_request(req))
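/*
 * Editor's note -- worked example, assuming 8-byte size_t, FOOTERS == 0 and
 * 16-byte MALLOC_ALIGNMENT: CHUNK_OVERHEAD is 8, MIN_CHUNK_SIZE is 32, and
 * MIN_REQUEST is 32 - 8 - 1 = 23. A request for 5 bytes is below MIN_REQUEST,
 * so request2size(5) == MIN_CHUNK_SIZE == 32, while a request for 100 bytes
 * pads to pad_request(100) == (100 + 8 + 15) & ~15 == 112.
 */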
2282
2283
2284/* ------------------ Operations on head and foot fields ----------------- */
2285
2286/*
2287 * The head field of a chunk is or'ed with PINUSE_BIT when previous
2288 * adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
2289 * use, unless mmapped, in which case both bits are cleared.
2290 *
2291 * FLAG4_BIT is not used by this malloc, but might be useful in extensions.
2292 */
2293
2294#define PINUSE_BIT (SIZE_T_ONE)
2295#define CINUSE_BIT (SIZE_T_TWO)
2296#define FLAG4_BIT (SIZE_T_FOUR)
2297#define INUSE_BITS (PINUSE_BIT | CINUSE_BIT)
2298#define FLAG_BITS (PINUSE_BIT | CINUSE_BIT | FLAG4_BIT)
2299
2300/* Head value for fenceposts */
2301#define FENCEPOST_HEAD (INUSE_BITS | SIZE_T_SIZE)
2302
2303/* extraction of fields from head words */
2304#define cinuse(p) ((p)->head & CINUSE_BIT)
2305#define pinuse(p) ((p)->head & PINUSE_BIT)
2306#define flag4inuse(p) ((p)->head & FLAG4_BIT)
2307#define is_inuse(p) (((p)->head & INUSE_BITS) != PINUSE_BIT)
2308#define is_mmapped(p) (((p)->head & INUSE_BITS) == 0)
2309
2310#define chunksize(p) ((p)->head & ~(FLAG_BITS))
2311
2312#define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
2313#define set_flag4(p) ((p)->head |= FLAG4_BIT)
2314#define clear_flag4(p) ((p)->head &= ~FLAG4_BIT)
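/*
 * Editor's note -- worked example under the same 64-bit assumptions used
 * elsewhere in these comments: a 48-byte in-use chunk whose previous
 * neighbour is also in use stores head == 48 | PINUSE_BIT | CINUSE_BIT ==
 * 0x33, so chunksize(p) recovers 48 and both cinuse(p) and pinuse(p) are
 * non-zero; an mmapped chunk has both bits clear, making is_mmapped(p) true.
 */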
2315
2316/* Treat space at ptr +/- offset as a chunk */
2317#define chunk_plus_offset(p, s) ((mchunkptr) (((char *) (p)) + (s)))
2318#define chunk_minus_offset(p, s) ((mchunkptr) (((char *) (p)) - (s)))
2319
2320/* Ptr to next or previous physical malloc_chunk. */
2321#define next_chunk(p) ((mchunkptr) ( ((char *) (p)) + ((p)->head & ~FLAG_BITS)))
2322#define prev_chunk(p) ((mchunkptr) ( ((char *) (p)) - ((p)->prev_foot) ))
2323
2324/* extract next chunk's pinuse bit */
2325#define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
2326
2327/* Get/set size at footer */
2328#define get_foot(p, s) (((mchunkptr) ((char *) (p) + (s)))->prev_foot)
2329#define set_foot(p, s) (((mchunkptr) ((char *) (p) + (s)))->prev_foot = (s))
2330
2331/* Set size, pinuse bit, and foot */
2332#define set_size_and_pinuse_of_free_chunk(p, s) \
2333 ((p)->head = (s | PINUSE_BIT), set_foot(p, s))
2334
2335/* Set size, pinuse bit, foot, and clear next pinuse */
2336#define set_free_with_pinuse(p, s, n) \
2337 (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
2338
2339/* Get the internal overhead associated with chunk p */
2340#define overhead_for(p) \
2341 (is_mmapped(p) ? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
2342
2343/* Return true if malloced space is not necessarily cleared */
2344#if MMAP_CLEARS
2345#define calloc_must_clear(p) (!is_mmapped(p))
2346#else /* MMAP_CLEARS */
2347#define calloc_must_clear(p) (1)
2348#endif /* MMAP_CLEARS */
2349
2350/* ---------------------- Overlaid data structures ----------------------- */
2351
2352/*
2353 * When chunks are not in use, they are treated as nodes of either
2354 * lists or trees.
2355 *
2356 * "Small" chunks are stored in circular doubly-linked lists, and look
2357 * like this:
2358 *
2359 * chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2360 | Size of previous chunk |
2361 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2362 | `head:' | Size of chunk, in bytes |P|
2363 | mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2364 | Forward pointer to next chunk in list |
2365 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2366 | Back pointer to previous chunk in list |
2367 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2368 | Unused space (may be 0 bytes long) .
2369 | . .
2370 | . |
2371 | nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2372 | `foot:' | Size of chunk, in bytes |
2373 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2374 |
2375 | Larger chunks are kept in a form of bitwise digital trees (aka
2376 | tries) keyed on chunksizes. Because malloc_tree_chunks are only for
2377 | free chunks greater than 256 bytes, their size doesn't impose any
2378 | constraints on user chunk sizes. Each node looks like:
2379 |
2380 | chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2381 | Size of previous chunk |
2382 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2383 | `head:' | Size of chunk, in bytes |P|
2384 | mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2385 | Forward pointer to next chunk of same size |
2386 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2387 | Back pointer to previous chunk of same size |
2388 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2389 | Pointer to left child (child[0]) |
2390 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2391 | Pointer to right child (child[1]) |
2392 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2393 | Pointer to parent |
2394 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2395 | bin index of this chunk |
2396 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2397 | Unused space .
2398 | . |
2399 | nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2400 | `foot:' | Size of chunk, in bytes |
2401 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2402 |
2403 | Each tree holding treenodes is a tree of unique chunk sizes. Chunks
2404 | of the same size are arranged in a circularly-linked list, with only
2405 | the oldest chunk (the next to be used, in our FIFO ordering)
2406 | actually in the tree. (Tree members are distinguished by a non-null
2407 | parent pointer.) If a chunk with the same size as an existing node
2408 | is inserted, it is linked off the existing node using pointers that
2409 | work in the same way as fd/bk pointers of small chunks.
2410 |
2411 | Each tree contains a power of 2 sized range of chunk sizes (the
2412 | smallest is 0x100 <= x < 0x180), which is divided in half at each
2413 | tree level, with the chunks in the smaller half of the range (0x100
2414 | <= x < 0x140 for the top node) in the left subtree and the larger
2415 | half (0x140 <= x < 0x180) in the right subtree. This is, of course,
2416 | done by inspecting individual bits.
2417 |
2418 | Using these rules, each node's left subtree contains all smaller
2419 | sizes than its right subtree. However, the node at the root of each
2420 | subtree has no particular ordering relationship to either. (The
2421 | dividing line between the subtree sizes is based on trie relation.)
2422 | If we remove the last chunk of a given size from the interior of the
2423 | tree, we need to replace it with a leaf node. The tree ordering
2424 | rules permit a node to be replaced by any leaf below it.
2425 |
2426 | The smallest chunk in a tree (a common operation in a best-fit
2427 | allocator) can be found by walking a path to the leftmost leaf in
2428 | the tree. Unlike a usual binary tree, where we follow left child
2429 | pointers until we reach a null, here we follow the right child
2430 | pointer any time the left one is null, until we reach a leaf with
2431 | both child pointers null. The smallest chunk in the tree will be
2432 | somewhere along that path.
2433 |
2434 | The worst case number of steps to add, find, or remove a node is
2435 | bounded by the number of bits differentiating chunks within
2436 | bins. Under current bin calculations, this ranges from 6 up to 21
2437 | (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
2438 | is of course much better.
2439 */
2440
2441struct malloc_tree_chunk
2442{
2443 /* The first four fields must be compatible with malloc_chunk */
2444 size_t prev_foot;
2445 size_t head;
2446 struct malloc_tree_chunk *fd;
2447 struct malloc_tree_chunk *bk;
2448
2449 struct malloc_tree_chunk *child[2];
2450 struct malloc_tree_chunk *parent;
2451 bindex_t index;
2452};
2453
2454typedef struct malloc_tree_chunk tchunk;
2455typedef struct malloc_tree_chunk *tchunkptr;
2456typedef struct malloc_tree_chunk *tbinptr; /* The type of bins of trees */
2457
2458/* A little helper macro for trees */
2459#define leftmost_child(t) ((t)->child[0] != 0 ? (t)->child[0] : (t)->child[1])
2460
2461/* ----------------------------- Segments -------------------------------- */
2462
2463/*
2464 * Each malloc space may include non-contiguous segments, held in a
2465 * list headed by an embedded malloc_segment record representing the
2466 * top-most space. Segments also include flags holding properties of
2467 * the space. Large chunks that are directly allocated by mmap are not
2468 * included in this list. They are instead independently created and
2469 * destroyed without otherwise keeping track of them.
2470 *
2471 * Segment management mainly comes into play for spaces allocated by
2472 * MMAP. Any call to MMAP might or might not return memory that is
2473 * adjacent to an existing segment. MORECORE normally contiguously
2474 * extends the current space, so this space is almost always adjacent,
2475 * which is simpler and faster to deal with. (This is why MORECORE is
2476 * used preferentially to MMAP when both are available -- see
2477 * sys_alloc.) When allocating using MMAP, we don't use any of the
2478 * hinting mechanisms (inconsistently) supported in various
2479 * implementations of unix mmap, or distinguish reserving from
2480 * committing memory. Instead, we just ask for space, and exploit
2481 * contiguity when we get it. It is probably possible to do
2482 * better than this on some systems, but no general scheme seems
2483 * to be significantly better.
2484 *
2485 * Management entails a simpler variant of the consolidation scheme
2486 * used for chunks to reduce fragmentation -- new adjacent memory is
2487 * normally prepended or appended to an existing segment. However,
2488 * there are limitations compared to chunk consolidation that mostly
2489 * reflect the fact that segment processing is relatively infrequent
2490 * (occurring only when getting memory from system) and that we
2491 * don't expect to have huge numbers of segments:
2492 *
2493 * Segments are not indexed, so traversal requires linear scans. (It
2494 * would be possible to index these, but is not worth the extra
2495 * overhead and complexity for most programs on most platforms.)
2496 * New segments are only appended to old ones when holding top-most
2497 * memory; if they cannot be prepended to others, they are held in
2498 * different segments.
2499 *
2500 * Except for the top-most segment of an mstate, each segment record
2501 * is kept at the tail of its segment. Segments are added by pushing
2502 * segment records onto the list headed by &mstate.seg for the
2503 * containing mstate.
2504 *
2505 * Segment flags control allocation/merge/deallocation policies:
2506 * If EXTERN_BIT set, then we did not allocate this segment,
2507 * and so should not try to deallocate or merge with others.
2508 * (This currently holds only for the initial segment passed
2509 * into create_mspace_with_base.)
2510 * If USE_MMAP_BIT set, the segment may be merged with
2511 * other surrounding mmapped segments and trimmed/de-allocated
2512 * using munmap.
2513 * If neither bit is set, then the segment was obtained using
2514 * MORECORE so can be merged with surrounding MORECORE'd segments
2515 * and deallocated/trimmed using MORECORE with negative arguments.
2516 */
2517
2518struct malloc_segment
2519{
2520 char * base; /* base address */
2521 size_t size; /* allocated size */
2522 struct malloc_segment *next; /* ptr to next segment */
2523 flag_t sflags; /* mmap and extern flag */
2524};
2525
2526#define is_mmapped_segment(S) ((S)->sflags & USE_MMAP_BIT)
2527#define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)
2528
2529typedef struct malloc_segment msegment;
2530typedef struct malloc_segment *msegmentptr;
2531
2532/* ---------------------------- malloc_state ----------------------------- */
2533
2534/*
2535 * A malloc_state holds all of the bookkeeping for a space.
2536 * The main fields are:
2537 *
2538 * Top
2539 * The topmost chunk of the currently active segment. Its size is
2540 * cached in topsize. The actual size of topmost space is
2541 * topsize+TOP_FOOT_SIZE, which includes space reserved for adding
2542 * fenceposts and segment records if necessary when getting more
2543 * space from the system. The size at which to autotrim top is
2544 * cached from mparams in trim_check, except that it is disabled if
2545 * an autotrim fails.
2546 *
2547 * Designated victim (dv)
2548 * This is the preferred chunk for servicing small requests that
2549 * don't have exact fits. It is normally the chunk split off most
2550 * recently to service another small request. Its size is cached in
2551 * dvsize. The link fields of this chunk are not maintained since it
2552 * is not kept in a bin.
2553 *
2554 * SmallBins
2555 * An array of bin headers for free chunks. These bins hold chunks
2556 * with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
2557 * chunks of all the same size, spaced 8 bytes apart. To simplify
2558 * use in double-linked lists, each bin header acts as a malloc_chunk
2559 * pointing to the real first node, if it exists (else pointing to
2560 * itself). This avoids special-casing for headers. But to avoid
2561 * waste, we allocate only the fd/bk pointers of bins, and then use
2562 * repositioning tricks to treat these as the fields of a chunk.
2563 *
2564 * TreeBins
2565 * Treebins are pointers to the roots of trees holding a range of
2566 * sizes. There are 2 equally spaced treebins for each power of two
2567 * from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
2568 * larger.
2569 *
2570 * Bin maps
2571 * There is one bit map for small bins ("smallmap") and one for
2572 * treebins ("treemap"). Each bin sets its bit when non-empty, and
2573 * clears the bit when empty. Bit operations are then used to avoid
2574 * bin-by-bin searching -- nearly all "search" is done without ever
2575 * looking at bins that won't be selected. The bit maps
2576 * conservatively use 32 bits per map word, even on a 64-bit system.
2577 * For a good description of some of the bit-based techniques used
2578 * here, see Henry S. Warren Jr's book "Hacker's Delight" (and
2579 * supplement at http://hackersdelight.org/). Many of these are
2580 * intended to reduce the branchiness of paths through malloc etc, as
2581 * well as to reduce the number of memory locations read or written.
2582 *
2583 * Segments
2584 * A list of segments headed by an embedded malloc_segment record
2585 * representing the initial space.
2586 *
2587 * Address check support
2588 * The least_addr field is the least address ever obtained from
2589 * MORECORE or MMAP. Attempted frees and reallocs of any address less
2590 * than this are trapped (unless INSECURE is defined).
2591 *
2592 * Magic tag
2593 * A cross-check field that should always hold the same value as mparams.magic.
2594 *
2595 * Max allowed footprint
2596 * The maximum allowed bytes to allocate from system (zero means no limit)
2597 *
2598 * Flags
2599 * Bits recording whether to use MMAP, locks, or contiguous MORECORE
2600 *
2601 * Statistics
2602 * Each space keeps track of current and maximum system memory
2603 * obtained via MORECORE or MMAP.
2604 *
2605 * Trim support
2606 * Fields holding the amount of unused topmost memory that should trigger
2607 * trimming, and a counter to force periodic scanning to release unused
2608 * non-topmost segments.
2609 *
2610 * Locking
2611 * If USE_LOCKS is defined, the "mutex" lock is acquired and released
2612 * around every public call using this mspace.
2613 *
2614 * Extension support
2615 * A void* pointer and a size_t field that can be used to help implement
2616 * extensions to this malloc.
2617 */
2618
2619/* Bin types, widths and sizes */
2620#define NSMALLBINS (32U)
2621#define NTREEBINS (32U)
2622#define SMALLBIN_SHIFT (3U)
2623#define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
2624#define TREEBIN_SHIFT (8U)
2625#define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
2626#define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
2627#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
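/*
 * Editor's note -- worked example, assuming 8-byte size_t, 16-byte alignment
 * and FOOTERS == 0: with TREEBIN_SHIFT == 8, MIN_LARGE_SIZE is 256, so a
 * padded chunk of 96 bytes lands in a smallbin while a padded chunk of 512
 * bytes goes to the treebins; MAX_SMALL_REQUEST works out to
 * 255 - 15 - 8 == 232 user bytes.
 */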
2628
2629struct malloc_state
2630{
2631 binmap_t smallmap;
2632 binmap_t treemap;
2633 size_t dvsize;
2634 size_t topsize;
2635 char * least_addr;
2636 mchunkptr dv;
2637 mchunkptr top;
2638 size_t trim_check;
2639 size_t release_checks;
2640 size_t magic;
2641 mchunkptr smallbins[(NSMALLBINS + 1) * 2];
2642 tbinptr treebins[NTREEBINS];
2643 size_t footprint;
2644 size_t max_footprint;
2645 size_t footprint_limit; /* zero means no limit */
2646 flag_t mflags;
2647 #if USE_LOCKS
2648 MLOCK_T mutex; /* locate lock among fields that rarely change */
2649 #endif /* USE_LOCKS */
2650 msegment seg;
2651 void * extp; /* Unused but available for extensions */
2652 size_t exts;
2653};
2654
2655typedef struct malloc_state *mstate;
2656
2657/* ------------- Global malloc_state and malloc_params ------------------- */
2658
2659/*
2660 * malloc_params holds global properties, including those that can be
2661 * dynamically set using mallopt. There is a single instance, mparams,
2662 * initialized in init_mparams. Note that the non-zeroness of "magic"
2663 * also serves as an initialization flag.
2664 */
2665
2666struct malloc_params
2667{
2668 size_t magic;
2669 size_t page_size;
2670 size_t granularity;
2671 size_t mmap_threshold;
2672 size_t trim_threshold;
2673 flag_t default_mflags;
2674};
2675
2676static struct malloc_params mparams;
2677
2678/* Ensure mparams initialized */
2679#define ensure_initialization() (void) (mparams.magic != 0 || init_mparams())
2680
2681#if !ONLY_MSPACES
2682
2683/* The global malloc_state used for all non-"mspace" calls */
2684static struct malloc_state _gm_;
2685#define gm (&_gm_)
2686#define is_global(M) ((M) == &_gm_)
2687
2688#endif /* !ONLY_MSPACES */
2689
2690#define is_initialized(M) ((M)->top != 0)
2691
2692/* -------------------------- system alloc setup ------------------------- */
2693
2694/* Operations on mflags */
2695
2696#define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
2697#define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
2698#if USE_LOCKS
2699#define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
2700#else
2701#define disable_lock(M)
2702#endif
2703
2704#define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
2705#define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
2706#if HAVE_MMAP
2707#define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
2708#else
2709#define disable_mmap(M)
2710#endif
2711
2712#define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
2713#define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
2714
2715#define set_lock(M, L) \
2716 ((M)->mflags = (L) ? \
2717 ((M)->mflags | USE_LOCK_BIT) : \
2718 ((M)->mflags & ~USE_LOCK_BIT))
2719
2720/* page-align a size */
2721#define page_align(S) \
2722 (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))
2723
2724/* granularity-align a size */
2725#define granularity_align(S) \
2726 (((S) + (mparams.granularity - SIZE_T_ONE)) \
2727 & ~(mparams.granularity - SIZE_T_ONE))
2728
2729
2730/* For mmap, use granularity alignment on windows, else page-align */
2731#ifdef WIN32
2732#define mmap_align(S) granularity_align(S)
2733#else
2734#define mmap_align(S) page_align(S)
2735#endif
2736
2737/* For sys_alloc, enough padding to ensure can malloc request on success */
2738#define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)
2739
2740#define is_page_aligned(S) \
2741 (((size_t) (S) &(mparams.page_size - SIZE_T_ONE)) == 0)
2742#define is_granularity_aligned(S) \
2743 (((size_t) (S) &(mparams.granularity - SIZE_T_ONE)) == 0)
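/*
 * Editor's note -- worked example, assuming a 4096-byte page size and a
 * 64 KB granularity: page_align(5000) rounds up to 8192 and
 * granularity_align(5000) rounds up to 65536; is_page_aligned(8192) holds,
 * while is_page_aligned(5000) does not.
 */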
2744
2745/* True if segment S holds address A */
2746#define segment_holds(S, A) \
2747 ((char *) (A) >= S->base && (char *) (A) < S->base + S->size)
2748
2749/* Return segment holding given address */
2750static msegmentptr segment_holding(mstate m, char *addr)
2751{
2752 msegmentptr sp = &m->seg;
2753
2754 for (;;) {
2755 if (addr >= sp->base && addr < sp->base + sp->size)
2756 return sp;
2757
2758 if ((sp = sp->next) == 0)
2759 return 0;
2760 }
2761}
2762
2763/* Return true if segment contains a segment link */
2764static int has_segment_link(mstate m, msegmentptr ss)
2765{
2766 msegmentptr sp = &m->seg;
2767
2768 for (;;) {
2769 if ((char *) sp >= ss->base && (char *) sp < ss->base + ss->size)
2770 return 1;
2771
2772 if ((sp = sp->next) == 0)
2773 return 0;
2774 }
2775}
2776
2777#ifndef MORECORE_CANNOT_TRIM
2778#define should_trim(M, s) ((s) > (M)->trim_check)
2779#else /* MORECORE_CANNOT_TRIM */
2780#define should_trim(M, s) (0)
2781#endif /* MORECORE_CANNOT_TRIM */
2782
2783/*
2784 * TOP_FOOT_SIZE is padding at the end of a segment, including space
2785 * that may be needed to place segment records and fenceposts when new
2786 * noncontiguous segments are added.
2787 */
2788#define TOP_FOOT_SIZE \
2789 (align_offset(chunk2mem(0)) + pad_request(sizeof(struct malloc_segment)) + MIN_CHUNK_SIZE)
2790
2791
2792/* ------------------------------- Hooks -------------------------------- */
2793
2794/*
2795 * PREACTION should be defined to return 0 on success, and nonzero on
2796 * failure. If you are not using locking, you can redefine these to do
2797 * anything you like.
2798 */
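/*
 * Illustrative sketch (not part of the original source): with USE_LOCKS
 * set to 0, a build could redirect these hooks to its own tracing code,
 * for example:
 *
 *   #define PREACTION(M)  (my_trace_enter(M), 0)   // must yield 0 on success
 *   #define POSTACTION(M) my_trace_exit(M)
 *
 * my_trace_enter/my_trace_exit are hypothetical helpers used only to show
 * the contract; any replacement must keep PREACTION returning 0 on success
 * and nonzero on failure.
 */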
2799
2800#if USE_LOCKS
2801#define PREACTION(M) ((use_lock(M)) ? ACQUIRE_LOCK(&(M)->mutex) : 0)
2802#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2803#else /* USE_LOCKS */
2804
2805#ifndef PREACTION
2806#define PREACTION(M) (0)
2807#endif /* PREACTION */
2808
2809#ifndef POSTACTION
2810#define POSTACTION(M)
2811#endif /* POSTACTION */
2812
2813#endif /* USE_LOCKS */
2814
2815/*
2816 * CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2817 * USAGE_ERROR_ACTION is triggered on detected bad frees and
2818 * reallocs. The argument p is an address that might have triggered the
2819 * fault. It is ignored by the two predefined actions, but might be
2820 * useful in custom actions that try to help diagnose errors.
2821 */
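/*
 * Illustrative sketch (not part of the original source): a custom action
 * could log the offending address before aborting, e.g.:
 *
 *   #define USAGE_ERROR_ACTION(m, p) \
 *       (fprintf(stderr, "bad free/realloc of %p\n", (void *) (p)), ABORT)
 *
 * Here p is used purely for diagnostics, matching the contract described
 * above; the printed message text is only an example.
 */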
2822
2823#if PROCEED_ON_ERROR
2824
2825/* A count of the number of corruption errors causing resets */
2826int malloc_corruption_error_count;
2827
2828/* default corruption action */
2829static void reset_on_error(mstate m);
2830
2831#define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
2832#define USAGE_ERROR_ACTION(m, p)
2833
2834#else /* PROCEED_ON_ERROR */
2835
2836#ifndef CORRUPTION_ERROR_ACTION
2837#define CORRUPTION_ERROR_ACTION(m) ABORT
2838#endif /* CORRUPTION_ERROR_ACTION */
2839
2840#ifndef USAGE_ERROR_ACTION
2841#define USAGE_ERROR_ACTION(m, p) ABORT
2842#endif /* USAGE_ERROR_ACTION */
2843
2844#endif /* PROCEED_ON_ERROR */
2845
2846
2847/* -------------------------- Debugging setup ---------------------------- */
2848
2849#if !DEBUG
2850
2851#define check_free_chunk(M, P)
2852#define check_inuse_chunk(M, P)
2853#define check_malloced_chunk(M, P, N)
2854#define check_mmapped_chunk(M, P)
2855#define check_malloc_state(M)
2856#define check_top_chunk(M, P)
2857
2858#else /* DEBUG */
2859#define check_free_chunk(M, P) do_check_free_chunk(M, P)
2860#define check_inuse_chunk(M, P) do_check_inuse_chunk(M, P)
2861#define check_top_chunk(M, P) do_check_top_chunk(M, P)
2862#define check_malloced_chunk(M, P, N) do_check_malloced_chunk(M, P, N)
2863#define check_mmapped_chunk(M, P) do_check_mmapped_chunk(M, P)
2864#define check_malloc_state(M) do_check_malloc_state(M)
2865
2866static void do_check_any_chunk(mstate m, mchunkptr p);
2867static void do_check_top_chunk(mstate m, mchunkptr p);
2868static void do_check_mmapped_chunk(mstate m, mchunkptr p);
2869static void do_check_inuse_chunk(mstate m, mchunkptr p);
2870static void do_check_free_chunk(mstate m, mchunkptr p);
2871static void do_check_malloced_chunk(mstate m, void *mem, size_t s);
2872static void do_check_tree(mstate m, tchunkptr t);
2873static void do_check_treebin(mstate m, bindex_t i);
2874static void do_check_smallbin(mstate m, bindex_t i);
2875static void do_check_malloc_state(mstate m);
2876static int bin_find(mstate m, mchunkptr x);
2877static size_t traverse_and_check(mstate m);
2878#endif /* DEBUG */
2879
2880/* ---------------------------- Indexing Bins ---------------------------- */
2881
2882#define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2883#define small_index(s) (bindex_t) ((s) >> SMALLBIN_SHIFT)
2884#define small_index2size(i) ((i) << SMALLBIN_SHIFT)
2885#define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))
2886
2887/* addressing by index. See above about smallbin repositioning */
2888#define smallbin_at(M, i) ((sbinptr) ((char *) &((M)->smallbins[(i) << 1])))
2889#define treebin_at(M, i) (&((M)->treebins[i]))
2890
2891/* assign tree index for size S to variable I. Use x86 asm if possible */
2892#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2893#define compute_tree_index(S, I) \
2894 { \
2895 unsigned int X = S >> TREEBIN_SHIFT; \
2896 if (X == 0) \
2897 I = 0; \
2898 else if (X > 0xFFFF) \
2899 I = NTREEBINS - 1; \
2900 else { \
2901 unsigned int K = (unsigned) sizeof(X) * __CHAR_BIT__ - 1 - (unsigned) __builtin_clz(X); \
2902 I = (bindex_t) ((K << 1) + ((S >> (K + (TREEBIN_SHIFT - 1)) & 1))); \
2903 } \
2904 }
2905
2906#elif defined(__INTEL_COMPILER)
2907#define compute_tree_index(S, I) \
2908 { \
2909 size_t X = S >> TREEBIN_SHIFT; \
2910 if (X == 0) \
2911 I = 0; \
2912 else if (X > 0xFFFF) \
2913 I = NTREEBINS - 1; \
2914 else { \
2915 unsigned int K = _bit_scan_reverse(X); \
2916 I = (bindex_t) ((K << 1) + ((S >> (K + (TREEBIN_SHIFT - 1)) & 1))); \
2917 } \
2918 }
2919
2920#elif defined(_MSC_VER) && _MSC_VER >= 1300
2921#define compute_tree_index(S, I) \
2922 { \
2923 size_t X = S >> TREEBIN_SHIFT; \
2924 if (X == 0) \
2925 I = 0; \
2926 else if (X > 0xFFFF) \
2927 I = NTREEBINS - 1; \
2928 else { \
2929 unsigned int K; \
2930 _BitScanReverse((DWORD *) &K, (DWORD) X); \
2931 I = (bindex_t) ((K << 1) + ((S >> (K + (TREEBIN_SHIFT - 1)) & 1))); \
2932 } \
2933 }
2934
2935#else /* GNUC */
2936#define compute_tree_index(S, I) \
2937 { \
2938 size_t X = S >> TREEBIN_SHIFT; \
2939 if (X == 0) \
2940 I = 0; \
2941 else if (X > 0xFFFF) \
2942 I = NTREEBINS - 1; \
2943 else { \
2944 unsigned int Y = (unsigned int) X; \
2945 unsigned int N = ((Y - 0x100) >> 16) & 8; \
2946 unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4; \
2947 N += K; \
2948 N += K = (((Y <<= K) - 0x4000) >> 16) & 2; \
2949 K = 14 - N + ((Y <<= K) >> 15); \
2950 I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT - 1)) & 1)); \
2951 } \
2952 }
2953#endif /* GNUC */
2954
2955/* Bit representing maximum resolved size in a treebin at i */
2956#define bit_for_tree_index(i) \
2957 (i == NTREEBINS - 1) ? (SIZE_T_BITSIZE - 1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2958
2959/* Shift placing maximum resolved bit in a treebin at i as sign bit */
2960#define leftshift_for_tree_index(i) \
2961 ((i == NTREEBINS - 1) ? 0 : \
2962 ((SIZE_T_BITSIZE - SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2963
2964/* The size of the smallest chunk held in bin with index i */
2965#define minsize_for_tree_index(i) \
2966 ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) \
2967 | (((size_t) ((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
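/*
 * Worked example (added for clarity): with TREEBIN_SHIFT == 8, a chunk of
 * size 768 (0x300) gives X = 768 >> 8 = 3, whose highest set bit is K = 1,
 * so compute_tree_index yields I = (1 << 1) + ((768 >> 8) & 1) = 3.
 * Consistently, minsize_for_tree_index(3) == (1 << 9) | (1 << 8) == 768,
 * and treebin 3 covers sizes in [768, 1024).
 */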
2968
2969
2970/* ------------------------ Operations on bin maps ----------------------- */
2971
2972/* bit corresponding to given index */
2973#define idx2bit(i) ((binmap_t) (1) << (i))
2974
2975/* Mark/Clear bits with given index */
2976#define mark_smallmap(M, i) ((M)->smallmap |= idx2bit(i))
2977#define clear_smallmap(M, i) ((M)->smallmap &= ~idx2bit(i))
2978#define smallmap_is_marked(M, i) ((M)->smallmap & idx2bit(i))
2979
2980#define mark_treemap(M, i) ((M)->treemap |= idx2bit(i))
2981#define clear_treemap(M, i) ((M)->treemap &= ~idx2bit(i))
2982#define treemap_is_marked(M, i) ((M)->treemap & idx2bit(i))
2983
2984/* isolate the least set bit of a bitmap */
2985#define least_bit(x) ((x) & -(x))
2986
2987/* mask with all bits to left of least bit of x on */
2988#define left_bits(x) ((x << 1) | -(x << 1))
2989
2990/* mask with all bits to left of or equal to least bit of x on */
2991#define same_or_left_bits(x) ((x) | -(x))
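/*
 * Worked example (added for clarity): for x = idx2bit(3) = 0x8,
 * least_bit(0x8) == 0x8 and left_bits(0x8) turns on every bit at index 4
 * and above. tmalloc_large below uses exactly this to mask m->treemap down
 * to bins that only hold larger sizes.
 */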
2992
2993/* index corresponding to given bit. Use x86 asm if possible */
2994
2995#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2996#define compute_bit2idx(X, I) \
2997 { \
2998 unsigned int J; \
2999 J = __builtin_ctz(X); \
3000 I = (bindex_t) J; \
3001 }
3002
3003#elif defined(__INTEL_COMPILER)
3004#define compute_bit2idx(X, I) \
3005 { \
3006 unsigned int J; \
3007 J = _bit_scan_forward(X); \
3008 I = (bindex_t) J; \
3009 }
3010
3011#elif defined(_MSC_VER) && _MSC_VER >= 1300
3012#define compute_bit2idx(X, I) \
3013 { \
3014 unsigned int J; \
3015 _BitScanForward((DWORD *) &J, X); \
3016 I = (bindex_t) J; \
3017 }
3018
3019#elif USE_BUILTIN_FFS
3020#define compute_bit2idx(X, I) I = ffs(X) - 1
3021
3022#else /* if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)) */
3023#define compute_bit2idx(X, I) \
3024 { \
3025 unsigned int Y = X - 1; \
3026 unsigned int K = Y >> (16 - 4) & 16; \
3027 unsigned int N = K; Y >>= K; \
3028 N += K = Y >> (8 - 3) & 8; Y >>= K; \
3029 N += K = Y >> (4 - 2) & 4; Y >>= K; \
3030 N += K = Y >> (2 - 1) & 2; Y >>= K; \
3031 N += K = Y >> (1 - 0) & 1; Y >>= K; \
3032 I = (bindex_t) (N + Y); \
3033 }
3034#endif /* GNUC */
3035
3036
3037/* ----------------------- Runtime Check Support ------------------------- */
3038
3039/*
3040 * For security, the main invariant is that malloc/free/etc never
3041 * writes to a static address other than malloc_state, unless static
3042 * malloc_state itself has been corrupted, which cannot occur via
3043 * malloc (because of these checks). In essence this means that we
3044 * believe all pointers, sizes, maps etc held in malloc_state, but
3045 * check all of those linked or offsetted from other embedded data
3046 * structures. These checks are interspersed with main code in a way
3047 * that tends to minimize their run-time cost.
3048 *
3049 * When FOOTERS is defined, in addition to range checking, we also
3050 * verify footer fields of inuse chunks, which can be used to guarantee
3051 * that the mstate controlling malloc/free is intact. This is a
3052 * streamlined version of the approach described by William Robertson
3053 * et al in "Run-time Detection of Heap-based Overflows" LISA'03
3054 * http://www.usenix.org/events/lisa03/tech/robertson.html The footer
3055 * of an inuse chunk holds the xor of its mstate and a random seed,
3056 * that is checked upon calls to free() and realloc(). This is
3057 * (probabilistically) unguessable from outside the program, but can be
3058 * computed by any code successfully malloc'ing any chunk, so does not
3059 * itself provide protection against code that has already broken
3060 * security through some other means. Unlike Robertson et al, we
3061 * always dynamically check addresses of all offset chunks (previous,
3062 * next, etc). This turns out to be cheaper than relying on hashes.
3063 */
3064
3065#if !INSECURE
3066/* Check if address a is at least as high as any from MORECORE or MMAP */
3067#define ok_address(M, a) ((char *) (a) >= (M)->least_addr)
3068/* Check if address of next chunk n is higher than base chunk p */
3069#define ok_next(p, n) ((char *) (p) < (char *) (n))
3070/* Check if p has inuse status */
3071#define ok_inuse(p) is_inuse(p)
3072/* Check if p has its pinuse bit on */
3073#define ok_pinuse(p) pinuse(p)
3074
3075#else /* !INSECURE */
3076#define ok_address(M, a) (1)
3077#define ok_next(b, n) (1)
3078#define ok_inuse(p) (1)
3079#define ok_pinuse(p) (1)
3080#endif /* !INSECURE */
3081
3082#if (FOOTERS && !INSECURE)
3083/* Check if (alleged) mstate m has expected magic field */
3084#define ok_magic(M) ((M)->magic == mparams.magic)
3085#else /* (FOOTERS && !INSECURE) */
3086#define ok_magic(M) (1)
3087#endif /* (FOOTERS && !INSECURE) */
3088
3089/* In gcc, use __builtin_expect to minimize impact of checks */
3090#if !INSECURE
3091#if defined(__GNUC__) && __GNUC__ >= 3
3092#define RTCHECK(e) __builtin_expect(e, 1)
3093#else /* GNUC */
3094#define RTCHECK(e) (e)
3095#endif /* GNUC */
3096#else /* !INSECURE */
3097#define RTCHECK(e) (1)
3098#endif /* !INSECURE */
3099
3100/* macros to set up inuse chunks with or without footers */
3101
3102#if !FOOTERS
3103
3104#define mark_inuse_foot(M, p, s)
3105
3106/* Macros for setting head/foot of non-mmapped chunks */
3107
3108/* Set cinuse bit and pinuse bit of next chunk */
3109#define set_inuse(M, p, s) \
3110 ((p)->head = (((p)->head & PINUSE_BIT) | s | CINUSE_BIT), \
3111 ((mchunkptr) (((char *) (p)) + (s)))->head |= PINUSE_BIT)
3112
3113/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
3114#define set_inuse_and_pinuse(M, p, s) \
3115 ((p)->head = (s | PINUSE_BIT | CINUSE_BIT), \
3116 ((mchunkptr) (((char *) (p)) + (s)))->head |= PINUSE_BIT)
3117
3118/* Set size, cinuse and pinuse bit of this chunk */
3119#define set_size_and_pinuse_of_inuse_chunk(M, p, s) \
3120 ((p)->head = (s | PINUSE_BIT | CINUSE_BIT))
3121
3122#else /* FOOTERS */
3123
3124/* Set foot of inuse chunk to be xor of mstate and seed */
3125#define mark_inuse_foot(M, p, s) \
3126 (((mchunkptr) ((char *) (p) + (s)))->prev_foot = ((size_t) (M) ^ mparams.magic))
3127
3128#define get_mstate_for(p) \
3129 ((mstate) (((mchunkptr) ((char *) (p) \
3130 + (chunksize(p))))->prev_foot ^ mparams.magic))
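/*
 * Added note: mark_inuse_foot stores ((size_t) M ^ mparams.magic) in the
 * footer, so xoring with mparams.magic again recovers M; that is,
 * get_mstate_for(p) == M for any inuse chunk p whose foot was written by
 * mark_inuse_foot(M, p, chunksize(p)). free() and realloc() rely on this
 * round trip both to locate the owning mstate and to detect corrupted
 * footers via ok_magic().
 */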
3131
3132#define set_inuse(M, p, s) \
3133 ((p)->head = (((p)->head & PINUSE_BIT) | s | CINUSE_BIT), \
3134 (((mchunkptr) (((char *) (p)) + (s)))->head |= PINUSE_BIT), \
3135 mark_inuse_foot(M, p, s))
3136
3137#define set_inuse_and_pinuse(M, p, s) \
3138 ((p)->head = (s | PINUSE_BIT | CINUSE_BIT), \
3139 (((mchunkptr) (((char *) (p)) + (s)))->head |= PINUSE_BIT), \
3140 mark_inuse_foot(M, p, s))
3141
3142#define set_size_and_pinuse_of_inuse_chunk(M, p, s) \
3143 ((p)->head = (s | PINUSE_BIT | CINUSE_BIT), \
3144 mark_inuse_foot(M, p, s))
3145
3146#endif /* !FOOTERS */
3147
3148/* ---------------------------- setting mparams -------------------------- */
3149
3150#if LOCK_AT_FORK
3151static void pre_fork(void){ ACQUIRE_LOCK(&(gm)->mutex); }
3152
3153static void post_fork_parent(void){ RELEASE_LOCK(&(gm)->mutex); }
3154
3155static void post_fork_child(void){ INITIAL_LOCK(&(gm)->mutex); }
3156
3157#endif /* LOCK_AT_FORK */
3158
3159/* Initialize mparams */
3160static int init_mparams(void)
3161{
3162 #ifdef NEED_GLOBAL_LOCK_INIT
3163 if (malloc_global_mutex_status <= 0)
3164 init_malloc_global_mutex();
3165 #endif
3166
3167 ACQUIRE_MALLOC_GLOBAL_LOCK();
3168 if (mparams.magic == 0) {
3169 size_t magic;
3170 size_t psize;
3171 size_t gsize;
3172
3173 #ifndef WIN32
3174 psize = malloc_getpagesize;
3175 gsize = ((DEFAULT_GRANULARITY != 0) ? DEFAULT_GRANULARITY : psize);
3176 #else /* WIN32 */
3177 {
3178 SYSTEM_INFO system_info;
3179 GetSystemInfo(&system_info);
3180 psize = system_info.dwPageSize;
3181 gsize = ((DEFAULT_GRANULARITY != 0) ?
3182 DEFAULT_GRANULARITY : system_info.dwAllocationGranularity);
3183 }
3184 #endif /* WIN32 */
3185
3186 /* Sanity-check configuration:
3187 * size_t must be unsigned and as wide as pointer type.
3188 * ints must be at least 4 bytes.
3189 * alignment must be at least 8.
3190 * Alignment, min chunk size, and page size must all be powers of 2.
3191 */
3192 if ((sizeof(size_t) != sizeof(char *)) ||
3193 (MAX_SIZE_T < MIN_CHUNK_SIZE) ||
3194 (sizeof(int) < 4) ||
3195 (MALLOC_ALIGNMENT < (size_t) 8U) ||
3196 ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT - SIZE_T_ONE)) != 0) ||
3197 ((MCHUNK_SIZE & (MCHUNK_SIZE - SIZE_T_ONE)) != 0) ||
3198 ((gsize & (gsize - SIZE_T_ONE)) != 0) ||
3199 ((psize & (psize - SIZE_T_ONE)) != 0))
3200 ABORT;
3201 mparams.granularity = gsize;
3202 mparams.page_size = psize;
3203 mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
3204 mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
3205 #if MORECORE_CONTIGUOUS
3206 mparams.default_mflags = USE_LOCK_BIT | USE_MMAP_BIT;
3207 #else /* MORECORE_CONTIGUOUS */
3208 mparams.default_mflags = USE_LOCK_BIT | USE_MMAP_BIT | USE_NONCONTIGUOUS_BIT;
3209 #endif /* MORECORE_CONTIGUOUS */
3210
3211 #if !ONLY_MSPACES
3212 /* Set up lock for main malloc area */
3213 gm->mflags = mparams.default_mflags;
3214 (void) INITIAL_LOCK(&gm->mutex);
3215 #endif
3216 #if LOCK_AT_FORK
3217 pthread_atfork(&pre_fork, &post_fork_parent, &post_fork_child);
3218 #endif
3219
3220 {
3221 #if USE_DEV_RANDOM
3222 int fd;
3223 unsigned char buf[sizeof(size_t)];
3224 /* Try to use /dev/urandom, else fall back on using time */
3225 if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
3226 read(fd, buf, sizeof(buf)) == sizeof(buf))
3227 {
3228 magic = *((size_t *) buf);
3229 close(fd);
3230 } else
3231 #endif /* USE_DEV_RANDOM */
3232 #ifdef WIN32
3233 magic = (size_t) (GetTickCount() ^ (size_t) 0x55555555U);
3234 #elif defined(LACKS_TIME_H)
3235 magic = (size_t) &magic ^ (size_t) 0x55555555U;
3236 #else
3237 magic = (size_t) (time(0) ^ (size_t) 0x55555555U);
3238 #endif
3239 magic |= (size_t) 8U; /* ensure nonzero */
3240 magic &= ~(size_t) 7U; /* improve chances of fault for bad values */
3241 /* Until memory modes commonly available, use volatile-write */
3242 (*(volatile size_t *) (&(mparams.magic))) = magic;
3243 }
3244 }
3245
3246 RELEASE_MALLOC_GLOBAL_LOCK();
3247 return 1;
3248} /* init_mparams */
3249
3250/* support for mallopt */
3251static int change_mparam(int param_number, int value)
3252{
3253 size_t val;
3254
3255 ensure_initialization();
3256 val = (value == -1) ? MAX_SIZE_T : (size_t) value;
3257 switch (param_number) {
3258 case M_TRIM_THRESHOLD:
3259 mparams.trim_threshold = val;
3260 return 1;
3261
3262 case M_GRANULARITY:
3263 if (val >= mparams.page_size && ((val & (val - 1)) == 0)) {
3264 mparams.granularity = val;
3265 return 1;
3266 } else {
3267 return 0;
3268 }
3269 case M_MMAP_THRESHOLD:
3270 mparams.mmap_threshold = val;
3271 return 1;
3272
3273 default:
3274 return 0;
3275 }
3276}
3277
3278#if DEBUG
3279/* ------------------------- Debugging Support --------------------------- */
3280
3281/* Check properties of any chunk, whether free, inuse, mmapped etc */
3282static void do_check_any_chunk(mstate m, mchunkptr p)
3283{
3284 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3285 assert(ok_address(m, p));
3286}
3287
3288/* Check properties of top chunk */
3289static void do_check_top_chunk(mstate m, mchunkptr p)
3290{
3291 msegmentptr sp = segment_holding(m, (char *) p);
3292 size_t sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */
3293
3294 assert(sp != 0);
3295 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3296 assert(ok_address(m, p));
3297 assert(sz == m->topsize);
3298 assert(sz > 0);
3299 assert(sz == ((sp->base + sp->size) - (char *) p) - TOP_FOOT_SIZE);
3300 assert(pinuse(p));
3301 assert(!pinuse(chunk_plus_offset(p, sz)));
3302}
3303
3304/* Check properties of (inuse) mmapped chunks */
3305static void do_check_mmapped_chunk(mstate m, mchunkptr p)
3306{
3307 size_t sz = chunksize(p);
3308 size_t len = (sz + (p->prev_foot) + MMAP_FOOT_PAD);
3309
3310 assert(is_mmapped(p));
3311 assert(use_mmap(m));
3312 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3313 assert(ok_address(m, p));
3314 assert(!is_small(sz));
3315 assert((len & (mparams.page_size - SIZE_T_ONE)) == 0);
3316 assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
3317 assert(chunk_plus_offset(p, sz + SIZE_T_SIZE)->head == 0);
3318}
3319
3320/* Check properties of inuse chunks */
3321static void do_check_inuse_chunk(mstate m, mchunkptr p)
3322{
3323 do_check_any_chunk(m, p);
3324 assert(is_inuse(p));
3325 assert(next_pinuse(p));
3326 /* If not pinuse and not mmapped, previous chunk has OK offset */
3327 assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
3328 if (is_mmapped(p))
3329 do_check_mmapped_chunk(m, p);
3330}
3331
3332/* Check properties of free chunks */
3333static void do_check_free_chunk(mstate m, mchunkptr p)
3334{
3335 size_t sz = chunksize(p);
3336 mchunkptr next = chunk_plus_offset(p, sz);
3337
3338 do_check_any_chunk(m, p);
3339 assert(!is_inuse(p));
3340 assert(!next_pinuse(p));
3341 assert(!is_mmapped(p));
3342 if (p != m->dv && p != m->top) {
3343 if (sz >= MIN_CHUNK_SIZE) {
3344 assert((sz & CHUNK_ALIGN_MASK) == 0);
3345 assert(is_aligned(chunk2mem(p)));
3346 assert(next->prev_foot == sz);
3347 assert(pinuse(p));
3348 assert(next == m->top || is_inuse(next));
3349 assert(p->fd->bk == p);
3350 assert(p->bk->fd == p);
3351 } else { /* markers are always of size SIZE_T_SIZE */
3352 assert(sz == SIZE_T_SIZE);
3353 }
3354 }
3355}
3356
3357/* Check properties of malloced chunks at the point they are malloced */
3358static void do_check_malloced_chunk(mstate m, void *mem, size_t s)
3359{
3360 if (mem != 0) {
3361 mchunkptr p = mem2chunk(mem);
3362 size_t sz = p->head & ~INUSE_BITS;
3363 do_check_inuse_chunk(m, p);
3364 assert((sz & CHUNK_ALIGN_MASK) == 0);
3365 assert(sz >= MIN_CHUNK_SIZE);
3366 assert(sz >= s);
3367 /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
3368 assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
3369 }
3370}
3371
3372/* Check a tree and its subtrees. */
3373static void do_check_tree(mstate m, tchunkptr t)
3374{
3375 tchunkptr head = 0;
3376 tchunkptr u = t;
3377 bindex_t tindex = t->index;
3378 size_t tsize = chunksize(t);
3379 bindex_t idx;
3380
3381 compute_tree_index(tsize, idx);
3382 assert(tindex == idx);
3383 assert(tsize >= MIN_LARGE_SIZE);
3384 assert(tsize >= minsize_for_tree_index(idx));
3385 assert((idx == NTREEBINS - 1) || (tsize < minsize_for_tree_index((idx + 1))));
3386
3387 do { /* traverse through chain of same-sized nodes */
3388 do_check_any_chunk(m, ((mchunkptr) u));
3389 assert(u->index == tindex);
3390 assert(chunksize(u) == tsize);
3391 assert(!is_inuse(u));
3392 assert(!next_pinuse(u));
3393 assert(u->fd->bk == u);
3394 assert(u->bk->fd == u);
3395 if (u->parent == 0) {
3396 assert(u->child[0] == 0);
3397 assert(u->child[1] == 0);
3398 } else {
3399 assert(head == 0); /* only one node on chain has parent */
3400 head = u;
3401 assert(u->parent != u);
3402 assert(u->parent->child[0] == u ||
3403 u->parent->child[1] == u ||
3404 *((tbinptr *) (u->parent)) == u);
3405 if (u->child[0] != 0) {
3406 assert(u->child[0]->parent == u);
3407 assert(u->child[0] != u);
3408 do_check_tree(m, u->child[0]);
3409 }
3410 if (u->child[1] != 0) {
3411 assert(u->child[1]->parent == u);
3412 assert(u->child[1] != u);
3413 do_check_tree(m, u->child[1]);
3414 }
3415 if (u->child[0] != 0 && u->child[1] != 0) {
3416 assert(chunksize(u->child[0]) < chunksize(u->child[1]));
3417 }
3418 }
3419 u = u->fd;
3420 } while (u != t);
3421 assert(head != 0);
3422} /* do_check_tree */
3423
3424/* Check all the chunks in a treebin. */
3425static void do_check_treebin(mstate m, bindex_t i)
3426{
3427 tbinptr *tb = treebin_at(m, i);
3428 tchunkptr t = *tb;
3429 int empty = (m->treemap & (1U << i)) == 0;
3430
3431 if (t == 0)
3432 assert(empty);
3433 if (!empty)
3434 do_check_tree(m, t);
3435}
3436
3437/* Check all the chunks in a smallbin. */
3438static void do_check_smallbin(mstate m, bindex_t i)
3439{
3440 sbinptr b = smallbin_at(m, i);
3441 mchunkptr p = b->bk;
3442 unsigned int empty = (m->smallmap & (1U << i)) == 0;
3443
3444 if (p == b)
3445 assert(empty);
3446 if (!empty) {
3447 for (; p != b; p = p->bk) {
3448 size_t size = chunksize(p);
3449 mchunkptr q;
3450 /* each chunk claims to be free */
3451 do_check_free_chunk(m, p);
3452 /* chunk belongs in bin */
3453 assert(small_index(size) == i);
3454 assert(p->bk == b || chunksize(p->bk) == chunksize(p));
3455 /* chunk is followed by an inuse chunk */
3456 q = next_chunk(p);
3457 if (q->head != FENCEPOST_HEAD)
3458 do_check_inuse_chunk(m, q);
3459 }
3460 }
3461}
3462
3463/* Find x in a bin. Used in other check functions. */
3464static int bin_find(mstate m, mchunkptr x)
3465{
3466 size_t size = chunksize(x);
3467
3468 if (is_small(size)) {
3469 bindex_t sidx = small_index(size);
3470 sbinptr b = smallbin_at(m, sidx);
3471 if (smallmap_is_marked(m, sidx)) {
3472 mchunkptr p = b;
3473 do {
3474 if (p == x)
3475 return 1;
3476 } while ((p = p->fd) != b);
3477 }
3478 } else {
3479 bindex_t tidx;
3480 compute_tree_index(size, tidx);
3481 if (treemap_is_marked(m, tidx)) {
3482 tchunkptr t = *treebin_at(m, tidx);
3483 size_t sizebits = size << leftshift_for_tree_index(tidx);
3484 while (t != 0 && chunksize(t) != size) {
3485 t = t->child[(sizebits >> (SIZE_T_BITSIZE - SIZE_T_ONE)) & 1];
3486 sizebits <<= 1;
3487 }
3488 if (t != 0) {
3489 tchunkptr u = t;
3490 do {
3491 if (u == (tchunkptr) x)
3492 return 1;
3493 } while ((u = u->fd) != t);
3494 }
3495 }
3496 }
3497 return 0;
3498} /* bin_find */
3499
3500/* Traverse each chunk and check it; return total */
3501static size_t traverse_and_check(mstate m)
3502{
3503 size_t sum = 0;
3504
3505 if (is_initialized(m)) {
3506 msegmentptr s = &m->seg;
3507 sum += m->topsize + TOP_FOOT_SIZE;
3508 while (s != 0) {
3509 mchunkptr q = align_as_chunk(s->base);
3510 mchunkptr lastq = 0;
3511 assert(pinuse(q));
3512 while (segment_holds(s, q) &&
3513 q != m->top && q->head != FENCEPOST_HEAD)
3514 {
3515 sum += chunksize(q);
3516 if (is_inuse(q)) {
3517 assert(!bin_find(m, q));
3518 do_check_inuse_chunk(m, q);
3519 } else {
3520 assert(q == m->dv || bin_find(m, q));
3521 assert(lastq == 0 || is_inuse(lastq)); /* Not 2 consecutive free */
3522 do_check_free_chunk(m, q);
3523 }
3524 lastq = q;
3525 q = next_chunk(q);
3526 }
3527 s = s->next;
3528 }
3529 }
3530 return sum;
3531}
3532
3533/* Check all properties of malloc_state. */
3534static void do_check_malloc_state(mstate m)
3535{
3536 bindex_t i;
3537 size_t total;
3538
3539 /* check bins */
3540 for (i = 0; i < NSMALLBINS; ++i)
3541 do_check_smallbin(m, i);
3542 for (i = 0; i < NTREEBINS; ++i)
3543 do_check_treebin(m, i);
3544
3545 if (m->dvsize != 0) { /* check dv chunk */
3546 do_check_any_chunk(m, m->dv);
3547 assert(m->dvsize == chunksize(m->dv));
3548 assert(m->dvsize >= MIN_CHUNK_SIZE);
3549 assert(bin_find(m, m->dv) == 0);
3550 }
3551
3552 if (m->top != 0) { /* check top chunk */
3553 do_check_top_chunk(m, m->top);
3554 /*assert(m->topsize == chunksize(m->top)); redundant */
3555 assert(m->topsize > 0);
3556 assert(bin_find(m, m->top) == 0);
3557 }
3558
3559 total = traverse_and_check(m);
3560 assert(total <= m->footprint);
3561 assert(m->footprint <= m->max_footprint);
3562}
3563
3564#endif /* DEBUG */
3565
3566/* ----------------------------- statistics ------------------------------ */
3567
3568#if !NO_MALLINFO
3569static struct mallinfo internal_mallinfo(mstate m)
3570{
3571 struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
3572
3573 ensure_initialization();
3574 if (!PREACTION(m)) {
3575 check_malloc_state(m);
3576 if (is_initialized(m)) {
3577 size_t nfree = SIZE_T_ONE; /* top always free */
3578 size_t mfree = m->topsize + TOP_FOOT_SIZE;
3579 size_t sum = mfree;
3580 msegmentptr s = &m->seg;
3581 while (s != 0) {
3582 mchunkptr q = align_as_chunk(s->base);
3583 while (segment_holds(s, q) &&
3584 q != m->top && q->head != FENCEPOST_HEAD)
3585 {
3586 size_t sz = chunksize(q);
3587 sum += sz;
3588 if (!is_inuse(q)) {
3589 mfree += sz;
3590 ++nfree;
3591 }
3592 q = next_chunk(q);
3593 }
3594 s = s->next;
3595 }
3596
3597 nm.arena = sum;
3598 nm.ordblks = nfree;
3599 nm.hblkhd = m->footprint - sum;
3600 nm.usmblks = m->max_footprint;
3601 nm.uordblks = m->footprint - mfree;
3602 nm.fordblks = mfree;
3603 nm.keepcost = m->topsize;
3604 }
3605
3606 POSTACTION(m);
3607 }
3608 return nm;
3609} /* internal_mallinfo */
3610
3611#endif /* !NO_MALLINFO */
3612
3613#if !NO_MALLOC_STATS
3614static void internal_malloc_stats(mstate m)
3615{
3616 ensure_initialization();
3617 if (!PREACTION(m)) {
3618 size_t maxfp = 0;
3619 size_t fp = 0;
3620 size_t used = 0;
3621 check_malloc_state(m);
3622 if (is_initialized(m)) {
3623 msegmentptr s = &m->seg;
3624 maxfp = m->max_footprint;
3625 fp = m->footprint;
3626 used = fp - (m->topsize + TOP_FOOT_SIZE);
3627
3628 while (s != 0) {
3629 mchunkptr q = align_as_chunk(s->base);
3630 while (segment_holds(s, q) &&
3631 q != m->top && q->head != FENCEPOST_HEAD)
3632 {
3633 if (!is_inuse(q))
3634 used -= chunksize(q);
3635 q = next_chunk(q);
3636 }
3637 s = s->next;
3638 }
3639 }
3640 POSTACTION(m); /* drop lock */
3641 fprintf(stderr, "max system bytes = %10lu\n", (unsigned long) (maxfp));
3642 fprintf(stderr, "system bytes = %10lu\n", (unsigned long) (fp));
3643 fprintf(stderr, "in use bytes = %10lu\n", (unsigned long) (used));
3644 }
3645}
3646
3647#endif /* NO_MALLOC_STATS */
3648
3649/* ----------------------- Operations on smallbins ----------------------- */
3650
3651/*
3652 * Various forms of linking and unlinking are defined as macros. Even
3653 * the ones for trees, which are very long but have very short typical
3654 * paths. This is ugly but reduces reliance on inlining support of
3655 * compilers.
3656 */
3657
3658/* Link a free chunk into a smallbin */
3659#define insert_small_chunk(M, P, S) \
3660 { \
3661 bindex_t I = small_index(S); \
3662 mchunkptr B = smallbin_at(M, I); \
3663 mchunkptr F = B; \
3664 assert(S >= MIN_CHUNK_SIZE); \
3665 if (!smallmap_is_marked(M, I)) \
3666 mark_smallmap(M, I); \
3667 else if (RTCHECK(ok_address(M, B->fd))) \
3668 F = B->fd; \
3669 else { \
3670 CORRUPTION_ERROR_ACTION(M); \
3671 } \
3672 B->fd = P; \
3673 F->bk = P; \
3674 P->fd = F; \
3675 P->bk = B; \
3676 }
3677
3678/* Unlink a chunk from a smallbin */
3679#define unlink_small_chunk(M, P, S) \
3680 { \
3681 mchunkptr F = P->fd; \
3682 mchunkptr B = P->bk; \
3683 bindex_t I = small_index(S); \
3684 assert(P != B); \
3685 assert(P != F); \
3686 assert(chunksize(P) == small_index2size(I)); \
3687 if (RTCHECK(F == smallbin_at(M, I) || (ok_address(M, F) && F->bk == P))) { \
3688 if (B == F) { \
3689 clear_smallmap(M, I); \
3690 } \
3691 else if (RTCHECK(B == smallbin_at(M, I) || \
3692 (ok_address(M, B) && B->fd == P))) { \
3693 F->bk = B; \
3694 B->fd = F; \
3695 } \
3696 else { \
3697 CORRUPTION_ERROR_ACTION(M); \
3698 } \
3699 } \
3700 else { \
3701 CORRUPTION_ERROR_ACTION(M); \
3702 } \
3703 }
3704
3705/* Unlink the first chunk from a smallbin */
3706#define unlink_first_small_chunk(M, B, P, I) \
3707 { \
3708 mchunkptr F = P->fd; \
3709 assert(P != B); \
3710 assert(P != F); \
3711 assert(chunksize(P) == small_index2size(I)); \
3712 if (B == F) { \
3713 clear_smallmap(M, I); \
3714 } \
3715 else if (RTCHECK(ok_address(M, F) && F->bk == P)) { \
3716 F->bk = B; \
3717 B->fd = F; \
3718 } \
3719 else { \
3720 CORRUPTION_ERROR_ACTION(M); \
3721 } \
3722 }
3723
3724/* Replace dv node, binning the old one */
3725/* Used only when dvsize known to be small */
3726#define replace_dv(M, P, S) \
3727 { \
3728 size_t DVS = M->dvsize; \
3729 assert(is_small(DVS)); \
3730 if (DVS != 0) { \
3731 mchunkptr DV = M->dv; \
3732 insert_small_chunk(M, DV, DVS); \
3733 } \
3734 M->dvsize = S; \
3735 M->dv = P; \
3736 }
3737
3738/* ------------------------- Operations on trees ------------------------- */
3739
3740/* Insert chunk into tree */
3741#define insert_large_chunk(M, X, S) \
3742 { \
3743 tbinptr *H; \
3744 bindex_t I; \
3745 compute_tree_index(S, I); \
3746 H = treebin_at(M, I); \
3747 X->index = I; \
3748 X->child[0] = X->child[1] = 0; \
3749 if (!treemap_is_marked(M, I)) { \
3750 mark_treemap(M, I); \
3751 *H = X; \
3752 X->parent = (tchunkptr) H; \
3753 X->fd = X->bk = X; \
3754 } \
3755 else { \
3756 tchunkptr T = *H; \
3757 size_t K = S << leftshift_for_tree_index(I); \
3758 for (;;) { \
3759 if (chunksize(T) != S) { \
3760 tchunkptr *C = &(T->child[(K >> (SIZE_T_BITSIZE - SIZE_T_ONE)) & 1]); \
3761 K <<= 1; \
3762 if (*C != 0) \
3763 T = *C; \
3764 else if (RTCHECK(ok_address(M, C))) { \
3765 *C = X; \
3766 X->parent = T; \
3767 X->fd = X->bk = X; \
3768 break; \
3769 } \
3770 else { \
3771 CORRUPTION_ERROR_ACTION(M); \
3772 break; \
3773 } \
3774 } \
3775 else { \
3776 tchunkptr F = T->fd; \
3777 if (RTCHECK(ok_address(M, T) && ok_address(M, F))) { \
3778 T->fd = F->bk = X; \
3779 X->fd = F; \
3780 X->bk = T; \
3781 X->parent = 0; \
3782 break; \
3783 } \
3784 else { \
3785 CORRUPTION_ERROR_ACTION(M); \
3786 break; \
3787 } \
3788 } \
3789 } \
3790 } \
3791 }
3792
3793/*
3794 * Unlink steps:
3795 *
3796 * 1. If x is a chained node, unlink it from its same-sized fd/bk links
3797 * and choose its bk node as its replacement.
3798 * 2. If x was the last node of its size, but not a leaf node, it must
3799 * be replaced with a leaf node (not merely one with an open left or
3800 * right), to make sure that lefts and rights of descendants
3801 * correspond properly to bit masks. We use the rightmost descendant
3802 * of x. We could use any other leaf, but this is easy to locate and
3803 * tends to counteract removal of leftmosts elsewhere, and so keeps
3804 * paths shorter than minimally guaranteed. This doesn't loop much
3805 * because on average a node in a tree is near the bottom.
3806 * 3. If x is the base of a chain (i.e., has parent links) relink
3807 * x's parent and children to x's replacement (or null if none).
3808 */
3809
3810#define unlink_large_chunk(M, X) \
3811 { \
3812 tchunkptr XP = X->parent; \
3813 tchunkptr R; \
3814 if (X->bk != X) { \
3815 tchunkptr F = X->fd; \
3816 R = X->bk; \
3817 if (RTCHECK(ok_address(M, F) && F->bk == X && R->fd == X)) { \
3818 F->bk = R; \
3819 R->fd = F; \
3820 } \
3821 else { \
3822 CORRUPTION_ERROR_ACTION(M); \
3823 } \
3824 } \
3825 else { \
3826 tchunkptr *RP; \
3827 if (((R = *(RP = &(X->child[1]))) != 0) || \
3828 ((R = *(RP = &(X->child[0]))) != 0)) { \
3829 tchunkptr *CP; \
3830 while ((*(CP = &(R->child[1])) != 0) || \
3831 (*(CP = &(R->child[0])) != 0)) { \
3832 R = *(RP = CP); \
3833 } \
3834 if (RTCHECK(ok_address(M, RP))) \
3835 *RP = 0; \
3836 else { \
3837 CORRUPTION_ERROR_ACTION(M); \
3838 } \
3839 } \
3840 } \
3841 if (XP != 0) { \
3842 tbinptr *H = treebin_at(M, X->index); \
3843 if (X == *H) { \
3844 if ((*H = R) == 0) \
3845 clear_treemap(M, X->index); \
3846 } \
3847 else if (RTCHECK(ok_address(M, XP))) { \
3848 if (XP->child[0] == X) \
3849 XP->child[0] = R; \
3850 else \
3851 XP->child[1] = R; \
3852 } \
3853 else \
3854 CORRUPTION_ERROR_ACTION(M); \
3855 if (R != 0) { \
3856 if (RTCHECK(ok_address(M, R))) { \
3857 tchunkptr C0, C1; \
3858 R->parent = XP; \
3859 if ((C0 = X->child[0]) != 0) { \
3860 if (RTCHECK(ok_address(M, C0))) { \
3861 R->child[0] = C0; \
3862 C0->parent = R; \
3863 } \
3864 else \
3865 CORRUPTION_ERROR_ACTION(M); \
3866 } \
3867 if ((C1 = X->child[1]) != 0) { \
3868 if (RTCHECK(ok_address(M, C1))) { \
3869 R->child[1] = C1; \
3870 C1->parent = R; \
3871 } \
3872 else \
3873 CORRUPTION_ERROR_ACTION(M); \
3874 } \
3875 } \
3876 else \
3877 CORRUPTION_ERROR_ACTION(M); \
3878 } \
3879 } \
3880 }
3881
3882/* Relays to large vs small bin operations */
3883
3884#define insert_chunk(M, P, S) \
3885 if (is_small(S)) insert_small_chunk(M, P, S) \
3886 else { tchunkptr TP = (tchunkptr) (P); insert_large_chunk(M, TP, S); }
3887
3888#define unlink_chunk(M, P, S) \
3889 if (is_small(S)) unlink_small_chunk(M, P, S) \
3890 else { tchunkptr TP = (tchunkptr) (P); unlink_large_chunk(M, TP); }
3891
3892
3893/* Relays to internal calls to malloc/free from realloc, memalign etc */
3894
3895#if ONLY_MSPACES
3896#define internal_malloc(m, b) mspace_malloc(m, b)
3897#define internal_free(m, mem) mspace_free(m, mem);
3898#else /* ONLY_MSPACES */
3899#if MSPACES
3900#define internal_malloc(m, b) \
3901 ((m == gm) ? dlmalloc(b) : mspace_malloc(m, b))
3902#define internal_free(m, mem) \
3903 if (m == gm) dlfree(mem); else mspace_free(m, mem);
3904#else /* MSPACES */
3905#define internal_malloc(m, b) dlmalloc(b)
3906#define internal_free(m, mem) dlfree(mem)
3907#endif /* MSPACES */
3908#endif /* ONLY_MSPACES */
3909
3910/* ----------------------- Direct-mmapping chunks ----------------------- */
3911
3912/*
3913 * Directly mmapped chunks are set up with an offset to the start of
3914 * the mmapped region stored in the prev_foot field of the chunk. This
3915 * allows reconstruction of the required argument to MUNMAP when freed,
3916 * and also allows adjustment of the returned chunk to meet alignment
3917 * requirements (especially in memalign).
3918 */
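/*
 * Added note: mmap_alloc below records offset = align_offset(chunk2mem(mm))
 * in p->prev_foot, so the original mapping base is always recoverable as
 * (char *) p - p->prev_foot. The unmap paths (e.g. dispose_chunk) pass that
 * base, with the size padded by MMAP_FOOT_PAD, back to CALL_MUNMAP.
 */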
3919
3920/* Malloc using mmap */
3921static void* mmap_alloc(mstate m, size_t nb)
3922{
3923 size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3924
3925 if (m->footprint_limit != 0) {
3926 size_t fp = m->footprint + mmsize;
3927 if (fp <= m->footprint || fp > m->footprint_limit)
3928 return 0;
3929 }
3930 if (mmsize > nb) { /* Check for wrap around 0 */
3931 char *mm = (char *) (CALL_DIRECT_MMAP(mmsize));
3932 if (mm != CMFAIL) {
3933 size_t offset = align_offset(chunk2mem(mm));
3934 size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3935 mchunkptr p = (mchunkptr) (mm + offset);
3936 p->prev_foot = offset;
3937 p->head = psize;
3938 mark_inuse_foot(m, p, psize);
3939 chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3940 chunk_plus_offset(p, psize + SIZE_T_SIZE)->head = 0;
3941
3942 if (m->least_addr == 0 || mm < m->least_addr)
3943 m->least_addr = mm;
3944 if ((m->footprint += mmsize) > m->max_footprint)
3945 m->max_footprint = m->footprint;
3946 assert(is_aligned(chunk2mem(p)));
3947 check_mmapped_chunk(m, p);
3948 return chunk2mem(p);
3949 }
3950 }
3951 return 0;
3952}
3953
3954/* Realloc using mmap */
3955static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb, int flags)
3956{
3957 size_t oldsize = chunksize(oldp);
3958
3959 (void) flags; /* placate people compiling -Wunused */
3960 if (is_small(nb)) /* Can't shrink mmap regions below small size */
3961 return 0;
3962
3963 /* Keep old chunk if big enough but not too big */
3964 if (oldsize >= nb + SIZE_T_SIZE &&
3965 (oldsize - nb) <= (mparams.granularity << 1))
3966 {
3967 return oldp;
3968 } else {
3969 size_t offset = oldp->prev_foot;
3970 size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3971 size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3972 char *cp = (char *) CALL_MREMAP((char *) oldp - offset,
3973 oldmmsize, newmmsize, flags);
3974 if (cp != CMFAIL) {
3975 mchunkptr newp = (mchunkptr) (cp + offset);
3976 size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3977 newp->head = psize;
3978 mark_inuse_foot(m, newp, psize);
3979 chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3980 chunk_plus_offset(newp, psize + SIZE_T_SIZE)->head = 0;
3981
3982 if (cp < m->least_addr)
3983 m->least_addr = cp;
3984 if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3985 m->max_footprint = m->footprint;
3986 check_mmapped_chunk(m, newp);
3987 return newp;
3988 }
3989 }
3990 return 0;
3991} /* mmap_resize */
3992
3993/* -------------------------- mspace management -------------------------- */
3994
3995/* Initialize top chunk and its size */
3996static void init_top(mstate m, mchunkptr p, size_t psize)
3997{
3998 /* Ensure alignment */
3999 size_t offset = align_offset(chunk2mem(p));
4000
4001 p = (mchunkptr) ((char *) p + offset);
4002 psize -= offset;
4003
4004 m->top = p;
4005 m->topsize = psize;
4006 p->head = psize | PINUSE_BIT;
4007 /* set size of fake trailing chunk holding overhead space only once */
4008 chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
4009 m->trim_check = mparams.trim_threshold; /* reset on each update */
4010}
4011
4012/* Initialize bins for a new mstate that is otherwise zeroed out */
4013static void init_bins(mstate m)
4014{
4015 /* Establish circular links for smallbins */
4016 bindex_t i;
4017
4018 for (i = 0; i < NSMALLBINS; ++i) {
4019 sbinptr bin = smallbin_at(m, i);
4020 bin->fd = bin->bk = bin;
4021 }
4022}
4023
4024#if PROCEED_ON_ERROR
4025
4026/* default corruption action */
4027static void reset_on_error(mstate m)
4028{
4029 int i;
4030
4031 ++malloc_corruption_error_count;
4032 /* Reinitialize fields to forget about all memory */
4033 m->smallmap = m->treemap = 0;
4034 m->dvsize = m->topsize = 0;
4035 m->seg.base = 0;
4036 m->seg.size = 0;
4037 m->seg.next = 0;
4038 m->top = m->dv = 0;
4039 for (i = 0; i < NTREEBINS; ++i)
4040 *treebin_at(m, i) = 0;
4041 init_bins(m);
4042}
4043
4044#endif /* PROCEED_ON_ERROR */
4045
4046/* Allocate chunk and prepend remainder with chunk in successor base. */
4047static void* prepend_alloc(mstate m, char *newbase, char *oldbase,
4048 size_t nb)
4049{
4050 mchunkptr p = align_as_chunk(newbase);
4051 mchunkptr oldfirst = align_as_chunk(oldbase);
4052 size_t psize = (char *) oldfirst - (char *) p;
4053 mchunkptr q = chunk_plus_offset(p, nb);
4054 size_t qsize = psize - nb;
4055
4056 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
4057
4058 assert((char *) oldfirst > (char *) q);
4059 assert(pinuse(oldfirst));
4060 assert(qsize >= MIN_CHUNK_SIZE);
4061
4062 /* consolidate remainder with first chunk of old base */
4063 if (oldfirst == m->top) {
4064 size_t tsize = m->topsize += qsize;
4065 m->top = q;
4066 q->head = tsize | PINUSE_BIT;
4067 check_top_chunk(m, q);
4068 } else if (oldfirst == m->dv) {
4069 size_t dsize = m->dvsize += qsize;
4070 m->dv = q;
4071 set_size_and_pinuse_of_free_chunk(q, dsize);
4072 } else {
4073 if (!is_inuse(oldfirst)) {
4074 size_t nsize = chunksize(oldfirst);
4075 unlink_chunk(m, oldfirst, nsize);
4076 oldfirst = chunk_plus_offset(oldfirst, nsize);
4077 qsize += nsize;
4078 }
4079 set_free_with_pinuse(q, qsize, oldfirst);
4080 insert_chunk(m, q, qsize);
4081 check_free_chunk(m, q);
4082 }
4083
4084 check_malloced_chunk(m, chunk2mem(p), nb);
4085 return chunk2mem(p);
4086} /* prepend_alloc */
4087
4088/* Add a segment to hold a new noncontiguous region */
4089static void add_segment(mstate m, char *tbase, size_t tsize, flag_t mmapped)
4090{
4091 /* Determine locations and sizes of segment, fenceposts, old top */
4092 char *old_top = (char *) m->top;
4093 msegmentptr oldsp = segment_holding(m, old_top);
4094 char *old_end = oldsp->base + oldsp->size;
4095 size_t ssize = pad_request(sizeof(struct malloc_segment));
4096 char *rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
4097 size_t offset = align_offset(chunk2mem(rawsp));
4098 char *asp = rawsp + offset;
4099 char *csp = (asp < (old_top + MIN_CHUNK_SIZE)) ? old_top : asp;
4100 mchunkptr sp = (mchunkptr) csp;
4101 msegmentptr ss = (msegmentptr) (chunk2mem(sp));
4102 mchunkptr tnext = chunk_plus_offset(sp, ssize);
4103 mchunkptr p = tnext;
4104 int nfences = 0;
4105
4106 /* reset top to new space */
4107 init_top(m, (mchunkptr) tbase, tsize - TOP_FOOT_SIZE);
4108
4109 /* Set up segment record */
4110 assert(is_aligned(ss));
4111 set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
4112 *ss = m->seg; /* Push current record */
4113 m->seg.base = tbase;
4114 m->seg.size = tsize;
4115 m->seg.sflags = mmapped;
4116 m->seg.next = ss;
4117
4118 /* Insert trailing fenceposts */
4119 for (;;) {
4120 mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
4121 p->head = FENCEPOST_HEAD;
4122 ++nfences;
4123 if ((char *) (&(nextp->head)) < old_end)
4124 p = nextp;
4125 else
4126 break;
4127 }
4128 assert(nfences >= 2);
4129
4130 /* Insert the rest of old top into a bin as an ordinary free chunk */
4131 if (csp != old_top) {
4132 mchunkptr q = (mchunkptr) old_top;
4133 size_t psize = csp - old_top;
4134 mchunkptr tn = chunk_plus_offset(q, psize);
4135 set_free_with_pinuse(q, psize, tn);
4136 insert_chunk(m, q, psize);
4137 }
4138
4139 check_top_chunk(m, m->top);
4140} /* add_segment */
4141
4142/* -------------------------- System allocation -------------------------- */
4143
4144/* Get memory from system using MORECORE or MMAP */
4145static void* sys_alloc(mstate m, size_t nb)
4146{
4147 char *tbase = CMFAIL;
4148 size_t tsize = 0;
4149 flag_t mmap_flag = 0;
4150 size_t asize; /* allocation size */
4151
4152 ensure_initialization();
4153
4154 /* Directly map large chunks, but only if already initialized */
4155 if (use_mmap(m) && nb >= mparams.mmap_threshold && m->topsize != 0) {
4156 void *mem = mmap_alloc(m, nb);
4157 if (mem != 0)
4158 return mem;
4159 }
4160
4161 asize = granularity_align(nb + SYS_ALLOC_PADDING);
4162 if (asize <= nb)
4163 return 0; /* wraparound */
4164
4165 if (m->footprint_limit != 0) {
4166 size_t fp = m->footprint + asize;
4167 if (fp <= m->footprint || fp > m->footprint_limit)
4168 return 0;
4169 }
4170
4171 /*
4172 * Try getting memory in any of three ways (in most-preferred to
4173 * least-preferred order):
4174 * 1. A call to MORECORE that can normally contiguously extend memory.
4175 * (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
4176 * main space is mmapped or a previous contiguous call failed)
4177 * 2. A call to MMAP new space (disabled if not HAVE_MMAP).
4178 * Note that under the default settings, if MORECORE is unable to
4179 * fulfill a request, and HAVE_MMAP is true, then mmap is
4180 * used as a noncontiguous system allocator. This is a useful backup
4181 * strategy for systems with holes in address spaces -- in this case
4182 * sbrk cannot contiguously expand the heap, but mmap may be able to
4183 * find space.
4184 * 3. A call to MORECORE that cannot usually contiguously extend memory.
4185 * (disabled if not HAVE_MORECORE)
4186 *
4187 * In all cases, we need to request enough bytes from system to ensure
4188 * we can malloc nb bytes upon success, so pad with enough space for
4189 * top_foot, plus alignment-pad to make sure we don't lose bytes if
4190 * not on boundary, and round this up to a granularity unit.
4191 */
4192
4193 if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
4194 char *br = CMFAIL;
4195 size_t ssize = asize; /* sbrk call size */
4196 msegmentptr ss = (m->top == 0) ? 0 : segment_holding(m, (char *) m->top);
4197 ACQUIRE_MALLOC_GLOBAL_LOCK();
4198
4199 if (ss == 0) { /* First time through or recovery */
4200 char *base = (char *) CALL_MORECORE(0);
4201 if (base != CMFAIL) {
4202 size_t fp;
4203 /* Adjust to end on a page boundary */
4204 if (!is_page_aligned(base))
4205 ssize += (page_align((size_t) base) - (size_t) base);
4206 fp = m->footprint + ssize; /* recheck limits */
4207 if (ssize > nb && ssize < HALF_MAX_SIZE_T &&
4208 (m->footprint_limit == 0 ||
4209 (fp > m->footprint && fp <= m->footprint_limit)) &&
4210 (br = (char *) (CALL_MORECORE(ssize))) == base)
4211 {
4212 tbase = base;
4213 tsize = ssize;
4214 }
4215 }
4216 } else {
4217 /* Subtract out existing available top space from MORECORE request. */
4218 ssize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING);
4219 /* Use mem here only if it did continuously extend old space */
4220 if (ssize < HALF_MAX_SIZE_T &&
4221 (br = (char *) (CALL_MORECORE(ssize))) == ss->base + ss->size)
4222 {
4223 tbase = br;
4224 tsize = ssize;
4225 }
4226 }
4227
4228 if (tbase == CMFAIL) { /* Cope with partial failure */
4229 if (br != CMFAIL) { /* Try to use/extend the space we did get */
4230 if (ssize < HALF_MAX_SIZE_T &&
4231 ssize < nb + SYS_ALLOC_PADDING)
4232 {
4233 size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - ssize);
4234 if (esize < HALF_MAX_SIZE_T) {
4235 char *end = (char *) CALL_MORECORE(esize);
4236 if (end != CMFAIL) {
4237 ssize += esize;
4238 } else { /* Can't use; try to release */
4239 (void) CALL_MORECORE(-ssize);
4240 br = CMFAIL;
4241 }
4242 }
4243 }
4244 }
4245 if (br != CMFAIL) { /* Use the space we did get */
4246 tbase = br;
4247 tsize = ssize;
4248 } else {
4249 disable_contiguous(m); /* Don't try contiguous path in the future */
4250 }
4251 }
4252
4253 RELEASE_MALLOC_GLOBAL_LOCK();
4254 }
4255
4256 if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */
4257 char *mp = (char *) (CALL_MMAP(asize));
4258 if (mp != CMFAIL) {
4259 tbase = mp;
4260 tsize = asize;
4261 mmap_flag = USE_MMAP_BIT;
4262 }
4263 }
4264
4265 if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
4266 if (asize < HALF_MAX_SIZE_T) {
4267 char *br = CMFAIL;
4268 char *end = CMFAIL;
4269 ACQUIRE_MALLOC_GLOBAL_LOCK();
4270 br = (char *) (CALL_MORECORE(asize));
4271 end = (char *) (CALL_MORECORE(0));
4272 RELEASE_MALLOC_GLOBAL_LOCK();
4273 if (br != CMFAIL && end != CMFAIL && br < end) {
4274 size_t ssize = end - br;
4275 if (ssize > nb + TOP_FOOT_SIZE) {
4276 tbase = br;
4277 tsize = ssize;
4278 }
4279 }
4280 }
4281 }
4282
4283 if (tbase != CMFAIL) {
4284 if ((m->footprint += tsize) > m->max_footprint)
4285 m->max_footprint = m->footprint;
4286
4287 if (!is_initialized(m)) { /* first-time initialization */
4288 if (m->least_addr == 0 || tbase < m->least_addr)
4289 m->least_addr = tbase;
4290 m->seg.base = tbase;
4291 m->seg.size = tsize;
4292 m->seg.sflags = mmap_flag;
4293 m->magic = mparams.magic;
4294 m->release_checks = MAX_RELEASE_CHECK_RATE;
4295 init_bins(m);
4296 #if !ONLY_MSPACES
4297 if (is_global(m)) {
4298 init_top(m, (mchunkptr) tbase, tsize - TOP_FOOT_SIZE);
4299 } else
4300 #endif
4301 {
4302 /* Offset top by embedded malloc_state */
4303 mchunkptr mn = next_chunk(mem2chunk(m));
4304 init_top(m, mn, (size_t) ((tbase + tsize) - (char *) mn) - TOP_FOOT_SIZE);
4305 }
4306 } else {
4307 /* Try to merge with an existing segment */
4308 msegmentptr sp = &m->seg;
4309 /* Only consider most recent segment if traversal suppressed */
4310 while (sp != 0 && tbase != sp->base + sp->size)
4311 sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4312 if (sp != 0 &&
4313 !is_extern_segment(sp) &&
4314 (sp->sflags & USE_MMAP_BIT) == mmap_flag &&
4315 segment_holds(sp, m->top)) /* append */
4316 {
4317 sp->size += tsize;
4318 init_top(m, m->top, m->topsize + tsize);
4319 } else {
4320 if (tbase < m->least_addr)
4321 m->least_addr = tbase;
4322 sp = &m->seg;
4323 while (sp != 0 && sp->base != tbase + tsize)
4324 sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4325 if (sp != 0 &&
4326 !is_extern_segment(sp) &&
4327 (sp->sflags & USE_MMAP_BIT) == mmap_flag)
4328 {
4329 char *oldbase = sp->base;
4330 sp->base = tbase;
4331 sp->size += tsize;
4332 return prepend_alloc(m, tbase, oldbase, nb);
4333 } else {
4334 add_segment(m, tbase, tsize, mmap_flag);
4335 }
4336 }
4337 }
4338
4339 if (nb < m->topsize) { /* Allocate from new or extended top space */
4340 size_t rsize = m->topsize -= nb;
4341 mchunkptr p = m->top;
4342 mchunkptr r = m->top = chunk_plus_offset(p, nb);
4343 r->head = rsize | PINUSE_BIT;
4344 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
4345 check_top_chunk(m, m->top);
4346 check_malloced_chunk(m, chunk2mem(p), nb);
4347 return chunk2mem(p);
4348 }
4349 }
4350
4351 MALLOC_FAILURE_ACTION;
4352 return 0;
4353} /* sys_alloc */
4354
4355/* ----------------------- system deallocation -------------------------- */
4356
4357/* Unmap and unlink any mmapped segments that don't contain used chunks */
4358static size_t release_unused_segments(mstate m)
4359{
4360 size_t released = 0;
4361 int nsegs = 0;
4362 msegmentptr pred = &m->seg;
4363 msegmentptr sp = pred->next;
4364
4365 while (sp != 0) {
4366 char *base = sp->base;
4367 size_t size = sp->size;
4368 msegmentptr next = sp->next;
4369 ++nsegs;
4370 if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
4371 mchunkptr p = align_as_chunk(base);
4372 size_t psize = chunksize(p);
4373 /* Can unmap if first chunk holds entire segment and not pinned */
4374 if (!is_inuse(p) && (char *) p + psize >= base + size - TOP_FOOT_SIZE) {
4375 tchunkptr tp = (tchunkptr) p;
4376 assert(segment_holds(sp, (char *) sp));
4377 if (p == m->dv) {
4378 m->dv = 0;
4379 m->dvsize = 0;
4380 } else {
4381 unlink_large_chunk(m, tp);
4382 }
4383 if (CALL_MUNMAP(base, size) == 0) {
4384 released += size;
4385 m->footprint -= size;
4386 /* unlink obsoleted record */
4387 sp = pred;
4388 sp->next = next;
4389 } else { /* back out if cannot unmap */
4390 insert_large_chunk(m, tp, psize);
4391 }
4392 }
4393 }
4394 if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */
4395 break;
4396 pred = sp;
4397 sp = next;
4398 }
4399 /* Reset check counter */
4400 m->release_checks = (((size_t) nsegs > (size_t) MAX_RELEASE_CHECK_RATE) ?
4401 (size_t) nsegs : (size_t) MAX_RELEASE_CHECK_RATE);
4402 return released;
4403} /* release_unused_segments */
4404
4405static int sys_trim(mstate m, size_t pad)
4406{
4407 size_t released = 0;
4408
4409 ensure_initialization();
4410 if (pad < MAX_REQUEST && is_initialized(m)) {
4411 pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
4412
4413 if (m->topsize > pad) {
4414 /* Shrink top space in granularity-size units, keeping at least one */
4415 size_t unit = mparams.granularity;
4416 size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit
4417 - SIZE_T_ONE) * unit;
4418 msegmentptr sp = segment_holding(m, (char *) m->top);
4419
4420 if (!is_extern_segment(sp)) {
4421 if (is_mmapped_segment(sp)) {
4422 if (HAVE_MMAP &&
4423 sp->size >= extra &&
4424 !has_segment_link(m, sp)) /* can't shrink if pinned */
4425 {
4426 size_t newsize = sp->size - extra;
4427 (void) newsize; /* placate people compiling -Wunused-variable */
4428 /* Prefer mremap, fall back to munmap */
4429 if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
4430 (CALL_MUNMAP(sp->base + newsize, extra) == 0))
4431 {
4432 released = extra;
4433 }
4434 }
4435 } else if (HAVE_MORECORE) {
4436 if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
4437 extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
4438 ACQUIRE_MALLOC_GLOBAL_LOCK();
4439 {
4440 /* Make sure end of memory is where we last set it. */
4441 char *old_br = (char *) (CALL_MORECORE(0));
4442 if (old_br == sp->base + sp->size) {
4443 char *rel_br = (char *) (CALL_MORECORE(-extra));
4444 char *new_br = (char *) (CALL_MORECORE(0));
4445 if (rel_br != CMFAIL && new_br < old_br)
4446 released = old_br - new_br;
4447 }
4448 }
4449 RELEASE_MALLOC_GLOBAL_LOCK();
4450 }
4451 }
4452
4453 if (released != 0) {
4454 sp->size -= released;
4455 m->footprint -= released;
4456 init_top(m, m->top, m->topsize - released);
4457 check_top_chunk(m, m->top);
4458 }
4459 }
4460
4461 /* Unmap any unused mmapped segments */
4462 if (HAVE_MMAP)
4463 released += release_unused_segments(m);
4464
4465 /* On failure, disable autotrim to avoid repeated failed future calls */
4466 if (released == 0 && m->topsize > m->trim_check)
4467 m->trim_check = MAX_SIZE_T;
4468 }
4469
4470 return (released != 0) ? 1 : 0;
4471} /* sys_trim */
4472
4473/* Consolidate and bin a chunk. Differs from exported versions
4474 * of free mainly in that the chunk need not be marked as inuse.
4475 */
4476static void dispose_chunk(mstate m, mchunkptr p, size_t psize)
4477{
4478 mchunkptr next = chunk_plus_offset(p, psize);
4479
4480 if (!pinuse(p)) {
4481 mchunkptr prev;
4482 size_t prevsize = p->prev_foot;
4483 if (is_mmapped(p)) {
4484 psize += prevsize + MMAP_FOOT_PAD;
4485 if (CALL_MUNMAP((char *) p - prevsize, psize) == 0)
4486 m->footprint -= psize;
4487 return;
4488 }
4489 prev = chunk_minus_offset(p, prevsize);
4490 psize += prevsize;
4491 p = prev;
4492 if (RTCHECK(ok_address(m, prev))) { /* consolidate backward */
4493 if (p != m->dv) {
4494 unlink_chunk(m, p, prevsize);
4495 } else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4496 m->dvsize = psize;
4497 set_free_with_pinuse(p, psize, next);
4498 return;
4499 }
4500 } else {
4501 CORRUPTION_ERROR_ACTION(m);
4502 return;
4503 }
4504 }
4505 if (RTCHECK(ok_address(m, next))) {
4506 if (!cinuse(next)) { /* consolidate forward */
4507 if (next == m->top) {
4508 size_t tsize = m->topsize += psize;
4509 m->top = p;
4510 p->head = tsize | PINUSE_BIT;
4511 if (p == m->dv) {
4512 m->dv = 0;
4513 m->dvsize = 0;
4514 }
4515 return;
4516 } else if (next == m->dv) {
4517 size_t dsize = m->dvsize += psize;
4518 m->dv = p;
4519 set_size_and_pinuse_of_free_chunk(p, dsize);
4520 return;
4521 } else {
4522 size_t nsize = chunksize(next);
4523 psize += nsize;
4524 unlink_chunk(m, next, nsize);
4525 set_size_and_pinuse_of_free_chunk(p, psize);
4526 if (p == m->dv) {
4527 m->dvsize = psize;
4528 return;
4529 }
4530 }
4531 } else {
4532 set_free_with_pinuse(p, psize, next);
4533 }
4534 insert_chunk(m, p, psize);
4535 } else {
4536 CORRUPTION_ERROR_ACTION(m);
4537 }
4538} /* dispose_chunk */
4539
4540/* ---------------------------- malloc --------------------------- */
4541
4542/* allocate a large request from the best fitting chunk in a treebin */
4543static void* tmalloc_large(mstate m, size_t nb)
4544{
4545 tchunkptr v = 0;
4546 size_t rsize = -nb; /* Unsigned negation */
4547 tchunkptr t;
4548 bindex_t idx;
4549
4550 compute_tree_index(nb, idx);
4551 if ((t = *treebin_at(m, idx)) != 0) {
4552 /* Traverse tree for this bin looking for node with size == nb */
4553 size_t sizebits = nb << leftshift_for_tree_index(idx);
4554 tchunkptr rst = 0; /* The deepest untaken right subtree */
4555 for (;;) {
4556 tchunkptr rt;
4557 size_t trem = chunksize(t) - nb;
4558 if (trem < rsize) {
4559 v = t;
4560 if ((rsize = trem) == 0)
4561 break;
4562 }
4563 rt = t->child[1];
4564 t = t->child[(sizebits >> (SIZE_T_BITSIZE - SIZE_T_ONE)) & 1];
4565 if (rt != 0 && rt != t)
4566 rst = rt;
4567 if (t == 0) {
4568 t = rst; /* set t to least subtree holding sizes > nb */
4569 break;
4570 }
4571 sizebits <<= 1;
4572 }
4573 }
4574 if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
4575 binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
4576 if (leftbits != 0) {
4577 bindex_t i;
4578 binmap_t leastbit = least_bit(leftbits);
4579 compute_bit2idx(leastbit, i);
4580 t = *treebin_at(m, i);
4581 }
4582 }
4583
4584 while (t != 0) { /* find smallest of tree or subtree */
4585 size_t trem = chunksize(t) - nb;
4586 if (trem < rsize) {
4587 rsize = trem;
4588 v = t;
4589 }
4590 t = leftmost_child(t);
4591 }
4592
4593 /* If dv is a better fit, return 0 so malloc will use it */
4594 if (v != 0 && rsize < (size_t) (m->dvsize - nb)) {
4595 if (RTCHECK(ok_address(m, v))) { /* split */
4596 mchunkptr r = chunk_plus_offset(v, nb);
4597 assert(chunksize(v) == rsize + nb);
4598 if (RTCHECK(ok_next(v, r))) {
4599 unlink_large_chunk(m, v);
4600 if (rsize < MIN_CHUNK_SIZE) {
4601 set_inuse_and_pinuse(m, v, (rsize + nb));
4602 } else {
4603 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4604 set_size_and_pinuse_of_free_chunk(r, rsize);
4605 insert_chunk(m, r, rsize);
4606 }
4607 return chunk2mem(v);
4608 }
4609 }
4610 CORRUPTION_ERROR_ACTION(m);
4611 }
4612 return 0;
4613} /* tmalloc_large */
4614
4615/* allocate a small request from the best fitting chunk in a treebin */
4616static void* tmalloc_small(mstate m, size_t nb)
4617{
4618 tchunkptr t, v;
4619 size_t rsize;
4620 bindex_t i;
4621 binmap_t leastbit = least_bit(m->treemap);
4622
4623 compute_bit2idx(leastbit, i);
4624 v = t = *treebin_at(m, i);
4625 rsize = chunksize(t) - nb;
4626
4627 while ((t = leftmost_child(t)) != 0) {
4628 size_t trem = chunksize(t) - nb;
4629 if (trem < rsize) {
4630 rsize = trem;
4631 v = t;
4632 }
4633 }
4634
4635 if (RTCHECK(ok_address(m, v))) {
4636 mchunkptr r = chunk_plus_offset(v, nb);
4637 assert(chunksize(v) == rsize + nb);
4638 if (RTCHECK(ok_next(v, r))) {
4639 unlink_large_chunk(m, v);
4640 if (rsize < MIN_CHUNK_SIZE) {
4641 set_inuse_and_pinuse(m, v, (rsize + nb));
4642 } else {
4643 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4644 set_size_and_pinuse_of_free_chunk(r, rsize);
4645 replace_dv(m, r, rsize);
4646 }
4647 return chunk2mem(v);
4648 }
4649 }
4650
4651    CORRUPTION_ERROR_ACTION(m);
4652    return 0;
4653} /* tmalloc_small */
4654
4655#if !ONLY_MSPACES
4656
4657void* dlmalloc(size_t bytes)
4658{
4659 /*
4660 * Basic algorithm:
4661 * If a small request (< 256 bytes minus per-chunk overhead):
4662 * 1. If one exists, use a remainderless chunk in associated smallbin.
4663 * (Remainderless means that there are too few excess bytes to
4664 * represent as a chunk.)
4665 * 2. If it is big enough, use the dv chunk, which is normally the
4666 * chunk adjacent to the one used for the most recent small request.
4667 * 3. If one exists, split the smallest available chunk in a bin,
4668 * saving remainder in dv.
4669 * 4. If it is big enough, use the top chunk.
4670 * 5. If available, get memory from system and use it
4671 * Otherwise, for a large request:
4672 * 1. Find the smallest available binned chunk that fits, and use it
4673 * if it is better fitting than dv chunk, splitting if necessary.
4674 * 2. If better fitting than any binned chunk, use the dv chunk.
4675 * 3. If it is big enough, use the top chunk.
4676 * 4. If request size >= mmap threshold, try to directly mmap this chunk.
4677 * 5. If available, get memory from system and use it
4678 *
4679 * The ugly goto's here ensure that postaction occurs along all paths.
4680 */
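    /*
     * A minimal caller-level sketch of the paths described above (illustrative
     * only, assuming !ONLY_MSPACES so these dl* entry points are compiled):
     *
     *   void *small = dlmalloc(24);        // small request: smallbin/dv path
     *   void *large = dlmalloc(1 << 20);   // large request: treebin/top/mmap path
     *   small = dlrealloc(small, 4096);    // may extend in place or move
     *   dlfree(large);
     *   dlfree(small);
     */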
4681
4682 #if USE_LOCKS
4683 ensure_initialization(); /* initialize in sys_alloc if not using locks */
4684 #endif
4685
4686 if (!PREACTION(gm)) {
4687 void *mem;
4688 size_t nb;
4689 if (bytes <= MAX_SMALL_REQUEST) {
4690 bindex_t idx;
4691 binmap_t smallbits;
4692 nb = (bytes < MIN_REQUEST) ? MIN_CHUNK_SIZE : pad_request(bytes);
4693 idx = small_index(nb);
4694 smallbits = gm->smallmap >> idx;
4695
4696 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4697 mchunkptr b, p;
4698 idx += ~smallbits & 1; /* Uses next bin if idx empty */
4699 b = smallbin_at(gm, idx);
4700 p = b->fd;
4701 assert(chunksize(p) == small_index2size(idx));
4702 unlink_first_small_chunk(gm, b, p, idx);
4703            set_inuse_and_pinuse(gm, p, small_index2size(idx));
4704            mem = chunk2mem(p);
4705 check_malloced_chunk(gm, mem, nb);
4706 goto postaction;
4707 } else if (nb > gm->dvsize) {
4708 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4709 mchunkptr b, p, r;
4710 size_t rsize;
4711 bindex_t i;
4712 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4713 binmap_t leastbit = least_bit(leftbits);
4714 compute_bit2idx(leastbit, i);
4715 b = smallbin_at(gm, i);
4716 p = b->fd;
4717                assert(chunksize(p) == small_index2size(i));
4718                unlink_first_small_chunk(gm, b, p, i);
4719 rsize = small_index2size(i) - nb;
4720 /* Fit here cannot be remainderless if 4byte sizes */
4721 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE) {
4722                    set_inuse_and_pinuse(gm, p, small_index2size(i));
4723                } else {
4724                    set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4725                    r = chunk_plus_offset(p, nb);
4726                    set_size_and_pinuse_of_free_chunk(r, rsize);
4727                    replace_dv(gm, r, rsize);
4728 }
4729 mem = chunk2mem(p);
4730 check_malloced_chunk(gm, mem, nb);
4731 goto postaction;
4732 } else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4733 check_malloced_chunk(gm, mem, nb);
4734 goto postaction;
4735 }
4736 }
4737 } else if (bytes >= MAX_REQUEST) {
4738 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4739 } else {
4740 nb = pad_request(bytes);
4741 if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4742 check_malloced_chunk(gm, mem, nb);
4743 goto postaction;
4744 }
4745 }
4746
4747 if (nb <= gm->dvsize) {
4748 size_t rsize = gm->dvsize - nb;
4749 mchunkptr p = gm->dv;
4750 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4751 mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4752 gm->dvsize = rsize;
4753                set_size_and_pinuse_of_free_chunk(r, rsize);
4754                set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4755            } else { /* exhaust dv */
4756 size_t dvs = gm->dvsize;
4757 gm->dvsize = 0;
4758 gm->dv = 0;
4759 set_inuse_and_pinuse(gm, p, dvs);
4760 }
4761 mem = chunk2mem(p);
4762 check_malloced_chunk(gm, mem, nb);
4763 goto postaction;
4764 } else if (nb < gm->topsize) { /* Split top */
4765 size_t rsize = gm->topsize -= nb;
4766 mchunkptr p = gm->top;
4767 mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4768 r->head = rsize | PINUSE_BIT;
4769            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4770            mem = chunk2mem(p);
4771 check_top_chunk(gm, gm->top);
4772 check_malloced_chunk(gm, mem, nb);
4773 goto postaction;
4774 }
4775
4776 mem = sys_alloc(gm, nb);
4777
4778postaction:
4779 POSTACTION(gm);
4780 return mem;
4781 }
4782
4783 return 0;
4784} /* dlmalloc */
4785
4786/* ---------------------------- free --------------------------- */
4787
4788void dlfree(void *mem)
4789{
4790 /*
4791 * Consolidate freed chunks with preceding or succeeding bordering
4792 * free chunks, if they exist, and then place in a bin. Intermixed
4793 * with special cases for top, dv, mmapped chunks, and usage errors.
4794 */
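    /*
     * Illustrative sequence for the cases handled below, assuming the three
     * chunks happen to be adjacent in one segment (not guaranteed in general):
     *
     *   void *a = dlmalloc(64), *b = dlmalloc(64), *c = dlmalloc(64);
     *   dlfree(b);   // no free neighbor yet: b is simply binned
     *   dlfree(a);   // consolidates forward with the free b, then binned
     *   dlfree(c);   // consolidates backward, may merge into dv/top and trim
     *   dlfree(0);   // the mem != 0 guard makes freeing a null pointer a no-op
     */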
4795
4796 if (mem != 0) {
4797 mchunkptr p = mem2chunk(mem);
4798 #if FOOTERS
4799 mstate fm = get_mstate_for(p);
4800 if (!ok_magic(fm)) {
4801            USAGE_ERROR_ACTION(fm, p);
4802            return;
4803 }
4804 #else /* FOOTERS */
4805 #define fm gm
4806 #endif /* FOOTERS */
4807 if (!PREACTION(fm)) {
4808            check_inuse_chunk(fm, p);
4809            if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
4810 size_t psize = chunksize(p);
4811 mchunkptr next = chunk_plus_offset(p, psize);
4812 if (!pinuse(p)) {
4813 size_t prevsize = p->prev_foot;
4814 if (is_mmapped(p)) {
4815 psize += prevsize + MMAP_FOOT_PAD;
4816 if (CALL_MUNMAP((char *) p - prevsize, psize) == 0)
4817 fm->footprint -= psize;
4818 goto postaction;
4819 } else {
4820 mchunkptr prev = chunk_minus_offset(p, prevsize);
4821 psize += prevsize;
4822 p = prev;
4823 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4824 if (p != fm->dv) {
4825 unlink_chunk(fm, p, prevsize);
4826 } else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4827 fm->dvsize = psize;
4828 set_free_with_pinuse(p, psize, next);
4829 goto postaction;
4830 }
4831 } else {
4832 goto erroraction;
4833 }
4834 }
4835 }
4836
4837 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4838 if (!cinuse(next)) { /* consolidate forward */
4839 if (next == fm->top) {
4840 size_t tsize = fm->topsize += psize;
4841 fm->top = p;
4842 p->head = tsize | PINUSE_BIT;
4843 if (p == fm->dv) {
4844 fm->dv = 0;
4845 fm->dvsize = 0;
4846 }
4847 if (should_trim(fm, tsize))
4848 sys_trim(fm, 0);
4849 goto postaction;
4850 } else if (next == fm->dv) {
4851 size_t dsize = fm->dvsize += psize;
4852 fm->dv = p;
4853                        set_size_and_pinuse_of_free_chunk(p, dsize);
4854                        goto postaction;
4855 } else {
4856 size_t nsize = chunksize(next);
4857 psize += nsize;
4858 unlink_chunk(fm, next, nsize);
4859                        set_size_and_pinuse_of_free_chunk(p, psize);
4860                        if (p == fm->dv) {
4861 fm->dvsize = psize;
4862 goto postaction;
4863 }
4864 }
4865 } else {
4866 set_free_with_pinuse(p, psize, next);
4867 }
4868
4869 if (is_small(psize)) {
4870 insert_small_chunk(fm, p, psize);
4871 check_free_chunk(fm, p);
4872 } else {
4873 tchunkptr tp = (tchunkptr) p;
4874 insert_large_chunk(fm, tp, psize);
4875 check_free_chunk(fm, p);
4876 if (--fm->release_checks == 0)
4877                        release_unused_segments(fm);
4878                }
4879 goto postaction;
4880 }
4881 }
4882erroraction:
4883        USAGE_ERROR_ACTION(fm, p);
4884postaction:
4885 POSTACTION(fm);
4886 }
4887 }
4888 #if !FOOTERS
4889 #undef fm
4890 #endif /* FOOTERS */
4891} /* dlfree */
4892
4893void* dlcalloc(size_t n_elements, size_t elem_size)
4894{
4895 void *mem;
4896 size_t req = 0;
4897
4898 if (n_elements != 0) {
4899 req = n_elements * elem_size;
4900 if (((n_elements | elem_size) & ~(size_t) 0xffff) &&
4901 (req / n_elements != elem_size))
4902 req = MAX_SIZE_T; /* force downstream failure on overflow */
4903 }
4904 mem = dlmalloc(req);
4905 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4906 memset(mem, 0, req);
4907 return mem;
4908}
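/*
 * A worked example of the overflow guard in dlcalloc above (illustrative,
 * assuming 64-bit size_t). The cheap (n_elements | elem_size) & ~0xffff test
 * skips the division when both factors are below 0x10000, since the product
 * then cannot overflow a 32-bit size_t:
 *
 *   dlcalloc(8, 16);                       // small factors: req = 128, no division
 *   dlcalloc((size_t) 1 << 33,             // 2^33 * 2^33 wraps to 0 on 64-bit,
 *            (size_t) 1 << 33);            // req / n_elements != elem_size, so req
 *                                          // becomes MAX_SIZE_T and dlmalloc fails
 *                                          // instead of returning an undersized block
 */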
4909
4910#endif /* !ONLY_MSPACES */
4911
4912/* ------------ Internal support for realloc, memalign, etc -------------- */
4913
4914/* Try to realloc; only in-place unless can_move true */
4915static mchunkptr try_realloc_chunk(mstate m, mchunkptr p, size_t nb,
4916                                   int can_move)
4917{
4918 mchunkptr newp = 0;
4919 size_t oldsize = chunksize(p);
4920 mchunkptr next = chunk_plus_offset(p, oldsize);
4921
4922 if (RTCHECK(ok_address(m, p) && ok_inuse(p) &&
4923 ok_next(p, next) && ok_pinuse(next)))
4924 {
4925 if (is_mmapped(p)) {
4926 newp = mmap_resize(m, p, nb, can_move);
4927 } else if (oldsize >= nb) { /* already big enough */
4928 size_t rsize = oldsize - nb;
4929 if (rsize >= MIN_CHUNK_SIZE) { /* split off remainder */
4930 mchunkptr r = chunk_plus_offset(p, nb);
4931 set_inuse(m, p, nb);
4932 set_inuse(m, r, rsize);
4933 dispose_chunk(m, r, rsize);
4934 }
4935 newp = p;
4936 } else if (next == m->top) { /* extend into top */
4937 if (oldsize + m->topsize > nb) {
4938 size_t newsize = oldsize + m->topsize;
4939 size_t newtopsize = newsize - nb;
4940 mchunkptr newtop = chunk_plus_offset(p, nb);
4941 set_inuse(m, p, nb);
4942 newtop->head = newtopsize | PINUSE_BIT;
4943 m->top = newtop;
4944 m->topsize = newtopsize;
4945 newp = p;
4946 }
4947 } else if (next == m->dv) { /* extend into dv */
4948 size_t dvs = m->dvsize;
4949 if (oldsize + dvs >= nb) {
4950 size_t dsize = oldsize + dvs - nb;
4951 if (dsize >= MIN_CHUNK_SIZE) {
4952 mchunkptr r = chunk_plus_offset(p, nb);
4953 mchunkptr n = chunk_plus_offset(r, dsize);
4954 set_inuse(m, p, nb);
4955                set_size_and_pinuse_of_free_chunk(r, dsize);
4956                clear_pinuse(n);
4957 m->dvsize = dsize;
4958 m->dv = r;
4959 } else { /* exhaust dv */
4960 size_t newsize = oldsize + dvs;
4961 set_inuse(m, p, newsize);
4962 m->dvsize = 0;
4963 m->dv = 0;
4964 }
4965 newp = p;
4966 }
4967 } else if (!cinuse(next)) { /* extend into next free chunk */
4968 size_t nextsize = chunksize(next);
4969 if (oldsize + nextsize >= nb) {
4970 size_t rsize = oldsize + nextsize - nb;
4971 unlink_chunk(m, next, nextsize);
4972 if (rsize < MIN_CHUNK_SIZE) {
4973 size_t newsize = oldsize + nextsize;
4974 set_inuse(m, p, newsize);
4975 } else {
4976 mchunkptr r = chunk_plus_offset(p, nb);
4977 set_inuse(m, p, nb);
4978 set_inuse(m, r, rsize);
4979 dispose_chunk(m, r, rsize);
4980 }
4981 newp = p;
4982 }
4983 }
4984 } else {
4985        USAGE_ERROR_ACTION(m, chunk2mem(p));
4986    }
4987 return newp;
4988} /* try_realloc_chunk */
4989
4990static void* internal_memalign(mstate m, size_t alignment, size_t bytes)
4991{
4992 void *mem = 0;
4993
4994 if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
4995 alignment = MIN_CHUNK_SIZE;
4996 if ((alignment & (alignment - SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
4997 size_t a = MALLOC_ALIGNMENT << 1;
4998 while (a < alignment) a <<= 1;
4999 alignment = a;
5000 }
5001 if (bytes >= MAX_REQUEST - alignment) {
5002 if (m != 0) { /* Test isn't needed but avoids compiler warning */
5003            MALLOC_FAILURE_ACTION;
5004        }
5005 } else {
5006 size_t nb = request2size(bytes);
5007 size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
5008 mem = internal_malloc(m, req);
5009 if (mem != 0) {
5010 mchunkptr p = mem2chunk(mem);
5011 if (PREACTION(m))
5012 return 0;
5013
5014 if ((((size_t) (mem)) & (alignment - 1)) != 0) { /* misaligned */
5015 /*
5016 * Find an aligned spot inside chunk. Since we need to give
5017 * back leading space in a chunk of at least MIN_CHUNK_SIZE, if
5018 * the first calculation places us at a spot with less than
5019 * MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
5020 * We've allocated enough total room so that this is always
5021 * possible.
5022 */
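            /*
             * Worked example (illustrative; assumes an LP64 build where the
             * chunk2mem/mem2chunk offset is two size_t words, i.e. 16 bytes):
             * with chunk p at 0x1000, mem at 0x1010 and alignment = 0x100,
             * the aligned payload address is (0x1010 + 0xff) & -0x100 = 0x1100,
             * so br = 0x1100 - 16 = 0x10f0 and br - p = 0xf0 >= MIN_CHUNK_SIZE.
             * Hence pos = br, the 0xf0-byte leader is split off and disposed,
             * and chunk2mem(newp) = 0x1100 is 0x100-byte aligned as requested.
             */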
5023 char *br = (char *) mem2chunk((size_t) (((size_t) ((char *) mem + alignment
5024 - SIZE_T_ONE))
5025 & -alignment));
5026 char *pos = ((size_t) (br - (char *) (p)) >= MIN_CHUNK_SIZE) ?
5027 br : br + alignment;
5028 mchunkptr newp = (mchunkptr) pos;
5029 size_t leadsize = pos - (char *) (p);
5030 size_t newsize = chunksize(p) - leadsize;
5031
5032 if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
5033 newp->prev_foot = p->prev_foot + leadsize;
5034 newp->head = newsize;
5035 } else { /* Otherwise, give back leader, use the rest */
5036 set_inuse(m, newp, newsize);
5037 set_inuse(m, p, leadsize);
5038 dispose_chunk(m, p, leadsize);
5039 }
5040 p = newp;
5041 }
5042
5043 /* Give back spare room at the end */
5044 if (!is_mmapped(p)) {
5045 size_t size = chunksize(p);
5046 if (size > nb + MIN_CHUNK_SIZE) {
5047 size_t remainder_size = size - nb;
5048 mchunkptr remainder = chunk_plus_offset(p, nb);
5049 set_inuse(m, p, nb);
5050 set_inuse(m, remainder, remainder_size);
5051 dispose_chunk(m, remainder, remainder_size);
5052 }
5053 }
5054
5055 mem = chunk2mem(p);
5056 assert(chunksize(p) >= nb);
5057 assert(((size_t) mem & (alignment - 1)) == 0);
5058 check_inuse_chunk(m, p);
5059 POSTACTION(m);
5060 }
5061 }
5062 return mem;
5063} /* internal_memalign */
5064
5065/*
5066 * Common support for independent_X routines, handling
5067 * all of the combinations that can result.
5068 * The opts arg has:
5069 * bit 0 set if all elements are same size (using sizes[0])
5070 * bit 1 set if elements should be zeroed
5071 */
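/*
 * For reference, the exported wrappers below encode opts as follows (see
 * dlindependent_calloc and dlindependent_comalloc later in this file):
 *
 *   ialloc(gm, n_elements, &elem_size, 3, chunks);  // bit 0 | bit 1: same size, zeroed
 *   ialloc(gm, n_elements, sizes,      0, chunks);  // per-element sizes, not zeroed
 */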
5072static void** ialloc(mstate m,
5073 size_t n_elements,
5074 size_t * sizes,
5075 int opts,
5076 void * chunks[])
5077{
5078 size_t element_size; /* chunksize of each element, if all same */
5079 size_t contents_size; /* total size of elements */
5080 size_t array_size; /* request size of pointer array */
5081 void *mem; /* malloced aggregate space */
5082 mchunkptr p; /* corresponding chunk */
5083 size_t remainder_size; /* remaining bytes while splitting */
5084 void **marray; /* either "chunks" or malloced ptr array */
5085 mchunkptr array_chunk; /* chunk for malloced ptr array */
5086 flag_t was_enabled; /* to disable mmap */
5087 size_t size;
5088 size_t i;
5089
5090    ensure_initialization();
5091    /* compute array length, if needed */
5092 if (chunks != 0) {
5093 if (n_elements == 0)
5094 return chunks; /* nothing to do */
5095
5096 marray = chunks;
5097 array_size = 0;
5098 } else {
5099 /* if empty req, must still return chunk representing empty array */
5100 if (n_elements == 0)
5101 return (void **) internal_malloc(m, 0);
5102
5103 marray = 0;
5104 array_size = request2size(n_elements * (sizeof(void *)));
5105 }
5106
5107 /* compute total element size */
5108 if (opts & 0x1) { /* all-same-size */
5109 element_size = request2size(*sizes);
5110 contents_size = n_elements * element_size;
5111 } else { /* add up all the sizes */
5112 element_size = 0;
5113 contents_size = 0;
5114 for (i = 0; i != n_elements; ++i)
5115 contents_size += request2size(sizes[i]);
5116 }
5117
5118 size = contents_size + array_size;
5119
5120 /*
5121 * Allocate the aggregate chunk. First disable direct-mmapping so
5122 * malloc won't use it, since we would not be able to later
5123 * free/realloc space internal to a segregated mmap region.
5124 */
5125 was_enabled = use_mmap(m);
5126 disable_mmap(m);
5127 mem = internal_malloc(m, size - CHUNK_OVERHEAD);
5128 if (was_enabled)
5129 enable_mmap(m);
5130 if (mem == 0)
5131 return 0;
5132
5133 if (PREACTION(m)) return 0;
5134
5135 p = mem2chunk(mem);
5136 remainder_size = chunksize(p);
5137
5138 assert(!is_mmapped(p));
5139
5140 if (opts & 0x2) { /* optionally clear the elements */
5141 memset((size_t *) mem, 0, remainder_size - SIZE_T_SIZE - array_size);
5142 }
5143
5144 /* If not provided, allocate the pointer array as final part of chunk */
5145 if (marray == 0) {
5146 size_t array_chunk_size;
5147 array_chunk = chunk_plus_offset(p, contents_size);
5148 array_chunk_size = remainder_size - contents_size;
5149 marray = (void **) (chunk2mem(array_chunk));
5150 set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
5151 remainder_size = contents_size;
5152 }
5153
5154 /* split out elements */
5155 for (i = 0;; ++i) {
5156 marray[i] = chunk2mem(p);
5157 if (i != n_elements - 1) {
5158 if (element_size != 0)
5159 size = element_size;
5160 else
5161 size = request2size(sizes[i]);
5162 remainder_size -= size;
5163            set_size_and_pinuse_of_inuse_chunk(m, p, size);
5164            p = chunk_plus_offset(p, size);
5165 } else { /* the final element absorbs any overallocation slop */
5166 set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
5167 break;
5168 }
5169 }
5170
5171 #if DEBUG
5172 if (marray != chunks) {
5173 /* final element must have exactly exhausted chunk */
5174 if (element_size != 0) {
5175 assert(remainder_size == element_size);
5176 } else {
5177 assert(remainder_size == request2size(sizes[i]));
5178 }
5179 check_inuse_chunk(m, mem2chunk(marray));
5180 }
5181 for (i = 0; i != n_elements; ++i)
5182 check_inuse_chunk(m, mem2chunk(marray[i]));
5183
5184 #endif /* DEBUG */
5185
5186 POSTACTION(m);
5187 return marray;
5188} /* ialloc */
5189
5190/* Try to free all pointers in the given array.
5191 * Note: this could be made faster, by delaying consolidation,
5192 * at the price of disabling some user integrity checks. We
5193 * still optimize some consolidations by combining adjacent
5194 * chunks before freeing, which will occur often if allocated
5195 * with ialloc or the array is sorted.
5196 */
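/*
 * A minimal usage sketch of the exported wrapper dlbulk_free() built on this
 * routine (illustrative; array size chosen arbitrarily):
 *
 *   void *ptrs[8];
 *   size_t i, unfreed;
 *   for (i = 0; i != 8; ++i)
 *       ptrs[i] = dlmalloc(32);
 *   unfreed = dlbulk_free(ptrs, 8);   // frees each non-null entry and zeroes
 *                                     // the slots it frees; returns how many
 *                                     // pointers could not be freed (e.g.
 *                                     // foreign ones when FOOTERS is enabled)
 */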
5197static size_t internal_bulk_free(mstate m, void *array[], size_t nelem)
5198{
5199 size_t unfreed = 0;
5200
5201 if (!PREACTION(m)) {
5202 void **a;
5203 void **fence = &(array[nelem]);
5204 for (a = array; a != fence; ++a) {
5205 void *mem = *a;
5206 if (mem != 0) {
5207 mchunkptr p = mem2chunk(mem);
5208 size_t psize = chunksize(p);
5209 #if FOOTERS
5210 if (get_mstate_for(p) != m) {
5211 ++unfreed;
5212 continue;
5213 }
5214 #endif
5215 check_inuse_chunk(m, p);
5216 *a = 0;
5217 if (RTCHECK(ok_address(m, p) && ok_inuse(p))) {
5218 void **b = a + 1; /* try to merge with next chunk */
5219 mchunkptr next = next_chunk(p);
5220 if (b != fence && *b == chunk2mem(next)) {
5221 size_t newsize = chunksize(next) + psize;
5222 set_inuse(m, p, newsize);
5223 *b = chunk2mem(p);
5224 } else {
5225 dispose_chunk(m, p, psize);
5226 }
5227 } else {
5228                    CORRUPTION_ERROR_ACTION(m);
5229                    break;
5230 }
5231 }
5232 }
5233 if (should_trim(m, m->topsize))
5234 sys_trim(m, 0);
5235 POSTACTION(m);
5236 }
5237 return unfreed;
5238} /* internal_bulk_free */
5239
5240/* Traversal */
5241#if MALLOC_INSPECT_ALL
5242static void internal_inspect_all(mstate m,
5243 void ( * handler )(void *start,
5244 void * end,
5245 size_t used_bytes,
5246 void * callback_arg),
5247 void * arg)
5248{
5249 if (is_initialized(m)) {
5250 mchunkptr top = m->top;
5251 msegmentptr s;
5252 for (s = &m->seg; s != 0; s = s->next) {
5253            mchunkptr q = align_as_chunk(s->base);
5254            while (segment_holds(s, q) && q->head != FENCEPOST_HEAD) {
5255 mchunkptr next = next_chunk(q);
5256 size_t sz = chunksize(q);
5257 size_t used;
5258 void *start;
5259 if (is_inuse(q)) {
5260 used = sz - CHUNK_OVERHEAD; /* must not be mmapped */
5261 start = chunk2mem(q);
5262 } else {
5263 used = 0;
5264 if (is_small(sz)) { /* offset by possible bookkeeping */
5265 start = (void *) ((char *) q + sizeof(struct malloc_chunk));
5266 } else {
5267 start = (void *) ((char *) q + sizeof(struct malloc_tree_chunk));
5268 }
5269 }
5270 if (start < (void *) next) /* skip if all space is bookkeeping */
5271 handler(start, next, used, arg);
5272 if (q == top)
5273 break;
5274 q = next;
5275 }
5276 }
5277 }
5278}
5279
5280#endif /* MALLOC_INSPECT_ALL */
5281
5282/* ------------------ Exported realloc, memalign, etc -------------------- */
5283
5284#if !ONLY_MSPACES
5285
5286void* dlrealloc(void *oldmem, size_t bytes)
5287{
5288 void *mem = 0;
5289
5290 if (oldmem == 0) {
5291 mem = dlmalloc(bytes);
5292 } else if (bytes >= MAX_REQUEST) {
5293        MALLOC_FAILURE_ACTION;
5294    }
5295 #ifdef REALLOC_ZERO_BYTES_FREES
5296 else if (bytes == 0) {
5297 dlfree(oldmem);
5298 }
5299 #endif /* REALLOC_ZERO_BYTES_FREES */
5300 else {
5301 size_t nb = request2size(bytes);
5302 mchunkptr oldp = mem2chunk(oldmem);
5303 #if !FOOTERS
5304 mstate m = gm;
5305 #else /* FOOTERS */
5306 mstate m = get_mstate_for(oldp);
5307 if (!ok_magic(m)) {
5308 USAGE_ERROR_ACTION(m, oldmem);
5309 return 0;
5310 }
5311 #endif /* FOOTERS */
5312 if (!PREACTION(m)) {
5313 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
5314 POSTACTION(m);
5315 if (newp != 0) {
5316 check_inuse_chunk(m, newp);
5317 mem = chunk2mem(newp);
5318 } else {
5319 mem = internal_malloc(m, bytes);
5320 if (mem != 0) {
5321 size_t oc = chunksize(oldp) - overhead_for(oldp);
5322 memcpy(mem, oldmem, (oc < bytes) ? oc : bytes);
5323 internal_free(m, oldmem);
5324 }
5325 }
5326 }
5327 }
5328 return mem;
5329} /* dlrealloc */
5330
5331void* dlrealloc_in_place(void *oldmem, size_t bytes)
5332{
5333 void *mem = 0;
5334
5335 if (oldmem != 0) {
5336 if (bytes >= MAX_REQUEST) {
5337            MALLOC_FAILURE_ACTION;
5338        } else {
5339 size_t nb = request2size(bytes);
5340 mchunkptr oldp = mem2chunk(oldmem);
5341 #if !FOOTERS
5342 mstate m = gm;
5343 #else /* FOOTERS */
5344 mstate m = get_mstate_for(oldp);
5345 if (!ok_magic(m)) {
5346 USAGE_ERROR_ACTION(m, oldmem);
5347 return 0;
5348 }
5349 #endif /* FOOTERS */
5350 if (!PREACTION(m)) {
5351 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
5352 POSTACTION(m);
5353 if (newp == oldp) {
5354 check_inuse_chunk(m, newp);
5355 mem = oldmem;
5356 }
5357 }
5358 }
5359 }
5360 return mem;
5361}
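/*
 * Usage sketch for dlrealloc_in_place() above (illustrative): the block is
 * never moved, so on failure the original allocation is left untouched.
 *
 *   void *buf = dlmalloc(128);
 *   if (dlrealloc_in_place(buf, 4096) != 0) {
 *       // resized in place; buf is still valid at the same address
 *   } else {
 *       // could not be resized in place; buf still holds its original 128 bytes
 *   }
 */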
5362
5363void* dlmemalign(size_t alignment, size_t bytes)
5364{
5365 if (alignment <= MALLOC_ALIGNMENT) {
5366 return dlmalloc(bytes);
5367 }
5368 return internal_memalign(gm, alignment, bytes);
5369}
5370
5371int dlposix_memalign(void **pp, size_t alignment, size_t bytes)
5372{
5373 void *mem = 0;
5374
5375 if (alignment == MALLOC_ALIGNMENT) {
5376 mem = dlmalloc(bytes);
5377 } else {
5378 size_t d = alignment / sizeof(void *);
5379 size_t r = alignment % sizeof(void *);
5380 if (r != 0 || d == 0 || (d & (d - SIZE_T_ONE)) != 0) {
5381 return EINVAL;
5382 } else if (bytes <= MAX_REQUEST - alignment) {
5383 if (alignment < MIN_CHUNK_SIZE)
5384 alignment = MIN_CHUNK_SIZE;
5385 mem = internal_memalign(gm, alignment, bytes);
5386 }
5387 }
5388 if (mem == 0) {
5389 return ENOMEM;
5390 } else {
5391 *pp = mem;
5392 return 0;
5393 }
5394}
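/*
 * Illustrative calls for dlposix_memalign() above; as with POSIX
 * posix_memalign(), the alignment must be a power of two and a multiple of
 * sizeof(void *):
 *
 *   void *p;
 *   dlposix_memalign(&p, 64, 1000);   // returns 0, p is 64-byte aligned
 *   dlposix_memalign(&p, 24, 1000);   // EINVAL: 24 is not a power of two
 *   dlposix_memalign(&p, 4, 1000);    // EINVAL on LP64: 4 < sizeof(void *)
 */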
5395
5396void* dlvalloc(size_t bytes)
5397{
5398 size_t pagesz;
5399
5400    ensure_initialization();
5401    pagesz = mparams.page_size;
5402 return dlmemalign(pagesz, bytes);
5403}
5404
5405void* dlpvalloc(size_t bytes)
5406{
5407 size_t pagesz;
5408
5409    ensure_initialization();
5410    pagesz = mparams.page_size;
5411 return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
5412}
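/*
 * The page rounding in dlpvalloc() above works out as follows (illustrative,
 * assuming a 4096-byte page size):
 *
 *   bytes = 5000:  (5000 + 4095) & ~4095 = 8192   // two whole pages
 *   bytes = 4096:  (4096 + 4095) & ~4095 = 4096   // exactly one page
 *   bytes = 1:     (   1 + 4095) & ~4095 = 4096   // rounded up to one page
 */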
5413
5414void** dlindependent_calloc(size_t n_elements, size_t elem_size,
5415 void *chunks[])
5416{
5417 size_t sz = elem_size; /* serves as 1-element array */
5418
5419 return ialloc(gm, n_elements, &sz, 3, chunks);
5420}
5421
5422void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
5423 void *chunks[])
5424{
5425 return ialloc(gm, n_elements, sizes, 0, chunks);
5426}
5427
5428size_t dlbulk_free(void *array[], size_t nelem)
5429{
5430 return internal_bulk_free(gm, array, nelem);
5431}
5432
5433#if MALLOC_INSPECT_ALL
5434void dlmalloc_inspect_all(void ( *handler )(void *start,
5435 void * end,
5436 size_t used_bytes,
5437 void * callback_arg),
5438 void * arg)
5439{
5440    ensure_initialization();
5441    if (!PREACTION(gm)) {
5442 internal_inspect_all(gm, handler, arg);
5443 POSTACTION(gm);
5444 }
5445}
5446
5447#endif /* MALLOC_INSPECT_ALL */
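/*
 * A hypothetical handler for dlmalloc_inspect_all() above (illustrative).
 * The traversal runs under the allocator lock, so the handler should not
 * call back into malloc/free:
 *
 *   static void count_used(void *start, void *end, size_t used_bytes, void *arg)
 *   {
 *       *(size_t *) arg += used_bytes;   // used_bytes is 0 for free chunks
 *   }
 *
 *   size_t total = 0;
 *   dlmalloc_inspect_all(count_used, &total);
 */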
5448
5449int dlmalloc_trim(size_t pad)
5450{
5451 int result = 0;
5452
5453    ensure_initialization();
5454    if (!PREACTION(gm)) {
5455 result = sys_trim(gm, pad);
5456 POSTACTION(gm);
5457 }
5458 return result;
5459}
5460
5461size_t dlmalloc_footprint(void)
5462{
5463 return gm->footprint;
5464}
5465
5466size_t dlmalloc_max_footprint(void)
5467{
5468 return gm->max_footprint;
5469}
5470
5471size_t dlmalloc_footprint_limit(void)
5472{
5473 size_t maf = gm->footprint_limit;
5474
5475 return maf == 0 ? MAX_SIZE_T : maf;
5476}
5477
5478size_t dlmalloc_set_footprint_limit(size_t bytes)
5479{
5480 size_t result; /* invert sense of 0 */
5481
5482 if (bytes == 0)
5483 result = granularity_align(1); /* Use minimal size */
5484 if (bytes == MAX_SIZE_T)
5485 result = 0; /* disable */
5486 else
5487 result = granularity_align(bytes);
5488 return gm->footprint_limit = result;
5489}
5490
5491#if !NO_MALLINFO
5492struct mallinfo dlmallinfo(void)
5493{
5494 return internal_mallinfo(gm);
5495}
5496
5497#endif /* NO_MALLINFO */
5498
5499#if !NO_MALLOC_STATS
5500void dlmalloc_stats()
5501{
5502    internal_malloc_stats(gm);
5503}
5504
5505#endif /* NO_MALLOC_STATS */
5506
5507int dlmallopt(int param_number, int value)
5508{
5509 return change_mparam(param_number, value);
5510}
5511
5512size_t dlmalloc_usable_size(void *mem)
5513{
5514 if (mem != 0) {
5515 mchunkptr p = mem2chunk(mem);
5516 if (is_inuse(p))
5517 return chunksize(p) - overhead_for(p);
5518 }
5519 return 0;
5520}
5521
5522#endif /* !ONLY_MSPACES */
5523
5524/* ----------------------------- user mspaces ---------------------------- */
5525
5526#if MSPACES
5527
5528static mstate init_user_mstate(char *tbase, size_t tsize)
5529{
5530 size_t msize = pad_request(sizeof(struct malloc_state));
5531 mchunkptr mn;
5532 mchunkptr msp = align_as_chunk(tbase);
5533 mstate m = (mstate) (chunk2mem(msp));
5534
5535 memset(m, 0, msize);
5536 (void) INITIAL_LOCK(&m->mutex);
5537 msp->head = (msize | INUSE_BITS);
5538 m->seg.base = m->least_addr = tbase;
5539 m->seg.size = m->footprint = m->max_footprint = tsize;
5540 m->magic = mparams.magic;
5541    m->release_checks = MAX_RELEASE_CHECK_RATE;
5542    m->mflags = mparams.default_mflags;
5543 m->extp = 0;
5544 m->exts = 0;
5545    disable_contiguous(m);
5546    init_bins(m);
5547 mn = next_chunk(mem2chunk(m));
5548 init_top(m, mn, (size_t) ((tbase + tsize) - (char *) mn) - TOP_FOOT_SIZE);
5549 check_top_chunk(m, m->top);
5550 return m;
5551}
5552
5553mspace create_mspace(size_t capacity, int locked)
5554{
5555 mstate m = 0;
5556 size_t msize;
5557
5558    ensure_initialization();
5559    msize = pad_request(sizeof(struct malloc_state));
5560 if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5561 size_t rs = ((capacity == 0) ? mparams.granularity :
5562 (capacity + TOP_FOOT_SIZE + msize));
5563 size_t tsize = granularity_align(rs);
5564 char *tbase = (char *) (CALL_MMAP(tsize));
5565 if (tbase != CMFAIL) {
5566 m = init_user_mstate(tbase, tsize);
5567 m->seg.sflags = USE_MMAP_BIT;
5568 set_lock(m, locked);
5569 }
5570 }
5571 return (mspace) m;
5572}
5573
5574mspace create_mspace_with_base(void *base, size_t capacity, int locked)
5575{
5576 mstate m = 0;
5577 size_t msize;
5578
5579    ensure_initialization();
5580    msize = pad_request(sizeof(struct malloc_state));
5581 if (capacity > msize + TOP_FOOT_SIZE &&
5582 capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size))
5583 {
5584 m = init_user_mstate((char *) base, capacity);
5585 m->seg.sflags = EXTERN_BIT;
5586 set_lock(m, locked);
5587 }
5588 return (mspace) m;
5589}
5590
5591int mspace_track_large_chunks(mspace msp, int enable)
5592{
5593 int ret = 0;
5594 mstate ms = (mstate) msp;
5595
5596 if (!PREACTION(ms)) {
5597 if (!use_mmap(ms)) {
5598 ret = 1;
5599 }
5600 if (!enable) {
5601 enable_mmap(ms);
5602 } else {
5603 disable_mmap(ms);
5604 }
5605 POSTACTION(ms);
5606 }
5607 return ret;
5608}
5609
5610size_t destroy_mspace(mspace msp)
5611{
5612 size_t freed = 0;
5613 mstate ms = (mstate) msp;
5614
5615 if (ok_magic(ms)) {
5616 msegmentptr sp = &ms->seg;
5617 (void) DESTROY_LOCK(&ms->mutex); /* destroy before unmapped */
5618 while (sp != 0) {
5619 char *base = sp->base;
5620 size_t size = sp->size;
5621 flag_t flag = sp->sflags;
5622 (void) base; /* placate people compiling -Wunused-variable */
5623 sp = sp->next;
5624 if ((flag & USE_MMAP_BIT) && !(flag & EXTERN_BIT) &&
5625 CALL_MUNMAP(base, size) == 0)
5626 freed += size;
5627 }
5628 } else {
5629 USAGE_ERROR_ACTION(ms, ms);
5630 }
5631 return freed;
5632}
5633
5634/*
5635 * mspace versions of routines are near-clones of the global
5636 * versions. This is not so nice but better than the alternatives.
5637 */
5638
5639void* mspace_malloc(mspace msp, size_t bytes)
5640{
5641 mstate ms = (mstate) msp;
5642
5643 if (!ok_magic(ms)) {
5644 USAGE_ERROR_ACTION(ms, ms);
5645 return 0;
5646 }
5647 if (!PREACTION(ms)) {
5648 void *mem;
5649 size_t nb;
5650 if (bytes <= MAX_SMALL_REQUEST) {
5651 bindex_t idx;
5652 binmap_t smallbits;
5653 nb = (bytes < MIN_REQUEST) ? MIN_CHUNK_SIZE : pad_request(bytes);
5654 idx = small_index(nb);
5655 smallbits = ms->smallmap >> idx;
5656
5657 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
5658 mchunkptr b, p;
5659 idx += ~smallbits & 1; /* Uses next bin if idx empty */
5660 b = smallbin_at(ms, idx);
5661 p = b->fd;
5662 assert(chunksize(p) == small_index2size(idx));
5663 unlink_first_small_chunk(ms, b, p, idx);
5664            set_inuse_and_pinuse(ms, p, small_index2size(idx));
5665            mem = chunk2mem(p);
5666 check_malloced_chunk(ms, mem, nb);
5667 goto postaction;
5668 } else if (nb > ms->dvsize) {
5669 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
5670 mchunkptr b, p, r;
5671 size_t rsize;
5672 bindex_t i;
5673 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
5674 binmap_t leastbit = least_bit(leftbits);
5675 compute_bit2idx(leastbit, i);
5676 b = smallbin_at(ms, i);
5677 p = b->fd;
5678                assert(chunksize(p) == small_index2size(i));
5679                unlink_first_small_chunk(ms, b, p, i);
5680 rsize = small_index2size(i) - nb;
5681 /* Fit here cannot be remainderless if 4byte sizes */
5682 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE) {
5683                    set_inuse_and_pinuse(ms, p, small_index2size(i));
5684                } else {
5685                    set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5686                    r = chunk_plus_offset(p, nb);
5687                    set_size_and_pinuse_of_free_chunk(r, rsize);
5688                    replace_dv(ms, r, rsize);
5689 }
5690 mem = chunk2mem(p);
5691 check_malloced_chunk(ms, mem, nb);
5692 goto postaction;
5693 } else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
5694 check_malloced_chunk(ms, mem, nb);
5695 goto postaction;
5696 }
5697 }
5698 } else if (bytes >= MAX_REQUEST) {
5699 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
5700 } else {
5701 nb = pad_request(bytes);
5702 if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
5703 check_malloced_chunk(ms, mem, nb);
5704 goto postaction;
5705 }
5706 }
5707
5708 if (nb <= ms->dvsize) {
5709 size_t rsize = ms->dvsize - nb;
5710 mchunkptr p = ms->dv;
5711 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
5712 mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
5713 ms->dvsize = rsize;
5714                set_size_and_pinuse_of_free_chunk(r, rsize);
5715                set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5716            } else { /* exhaust dv */
5717 size_t dvs = ms->dvsize;
5718 ms->dvsize = 0;
5719 ms->dv = 0;
5720 set_inuse_and_pinuse(ms, p, dvs);
5721 }
5722 mem = chunk2mem(p);
5723 check_malloced_chunk(ms, mem, nb);
5724 goto postaction;
5725 } else if (nb < ms->topsize) { /* Split top */
5726 size_t rsize = ms->topsize -= nb;
5727 mchunkptr p = ms->top;
5728 mchunkptr r = ms->top = chunk_plus_offset(p, nb);
5729 r->head = rsize | PINUSE_BIT;
5730            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5731            mem = chunk2mem(p);
5732 check_top_chunk(ms, ms->top);
5733 check_malloced_chunk(ms, mem, nb);
5734 goto postaction;
5735 }
5736
5737 mem = sys_alloc(ms, nb);
5738
5739postaction:
5740 POSTACTION(ms);
5741 return mem;
5742 }
5743
5744 return 0;
5745} /* mspace_malloc */
5746
5747void mspace_free(mspace msp, void *mem)
5748{
5749 if (mem != 0) {
5750 mchunkptr p = mem2chunk(mem);
5751 #if FOOTERS
5752 mstate fm = get_mstate_for(p);
5753 (void) msp; /* placate people compiling -Wunused */
5754 #else /* FOOTERS */
5755 mstate fm = (mstate) msp;
5756 #endif /* FOOTERS */
5757 if (!ok_magic(fm)) {
5758            USAGE_ERROR_ACTION(fm, p);
5759            return;
5760 }
5761 if (!PREACTION(fm)) {
5762            check_inuse_chunk(fm, p);
5763            if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
5764 size_t psize = chunksize(p);
5765 mchunkptr next = chunk_plus_offset(p, psize);
5766 if (!pinuse(p)) {
5767 size_t prevsize = p->prev_foot;
5768 if (is_mmapped(p)) {
5769 psize += prevsize + MMAP_FOOT_PAD;
5770 if (CALL_MUNMAP((char *) p - prevsize, psize) == 0)
5771 fm->footprint -= psize;
5772 goto postaction;
5773 } else {
5774 mchunkptr prev = chunk_minus_offset(p, prevsize);
5775 psize += prevsize;
5776 p = prev;
5777 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
5778 if (p != fm->dv) {
5779 unlink_chunk(fm, p, prevsize);
5780 } else if ((next->head & INUSE_BITS) == INUSE_BITS) {
5781 fm->dvsize = psize;
5782 set_free_with_pinuse(p, psize, next);
5783 goto postaction;
5784 }
5785 } else {
5786 goto erroraction;
5787 }
5788 }
5789 }
5790
5791 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
5792 if (!cinuse(next)) { /* consolidate forward */
5793 if (next == fm->top) {
5794 size_t tsize = fm->topsize += psize;
5795 fm->top = p;
5796 p->head = tsize | PINUSE_BIT;
5797 if (p == fm->dv) {
5798 fm->dv = 0;
5799 fm->dvsize = 0;
5800 }
5801 if (should_trim(fm, tsize))
5802 sys_trim(fm, 0);
5803 goto postaction;
5804 } else if (next == fm->dv) {
5805 size_t dsize = fm->dvsize += psize;
5806 fm->dv = p;
5807                        set_size_and_pinuse_of_free_chunk(p, dsize);
5808                        goto postaction;
5809 } else {
5810 size_t nsize = chunksize(next);
5811 psize += nsize;
5812 unlink_chunk(fm, next, nsize);
5813                        set_size_and_pinuse_of_free_chunk(p, psize);
5814                        if (p == fm->dv) {
5815 fm->dvsize = psize;
5816 goto postaction;
5817 }
5818 }
5819 } else {
5820 set_free_with_pinuse(p, psize, next);
5821 }
5822
5823 if (is_small(psize)) {
5824 insert_small_chunk(fm, p, psize);
5825 check_free_chunk(fm, p);
5826 } else {
5827 tchunkptr tp = (tchunkptr) p;
5828 insert_large_chunk(fm, tp, psize);
5829 check_free_chunk(fm, p);
5830 if (--fm->release_checks == 0)
5831                        release_unused_segments(fm);
5832                }
5833 goto postaction;
5834 }
5835 }
5836erroraction:
5837        USAGE_ERROR_ACTION(fm, p);
5838postaction:
5839 POSTACTION(fm);
5840 }
5841 }
5842} /* mspace_free */
5843
5844void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size)
5845{
5846 void *mem;
5847 size_t req = 0;
5848 mstate ms = (mstate) msp;
5849
5850 if (!ok_magic(ms)) {
5851 USAGE_ERROR_ACTION(ms, ms);
5852 return 0;
5853 }
5854 if (n_elements != 0) {
5855 req = n_elements * elem_size;
5856 if (((n_elements | elem_size) & ~(size_t) 0xffff) &&
5857 (req / n_elements != elem_size))
5858 req = MAX_SIZE_T; /* force downstream failure on overflow */
5859 }
5860 mem = internal_malloc(ms, req);
5861 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
5862 memset(mem, 0, req);
5863 return mem;
5864}
5865
5866void* mspace_realloc(mspace msp, void *oldmem, size_t bytes)
5867{
5868 void *mem = 0;
5869
5870 if (oldmem == 0) {
5871 mem = mspace_malloc(msp, bytes);
5872 } else if (bytes >= MAX_REQUEST) {
5873        MALLOC_FAILURE_ACTION;
5874    }
5875 #ifdef REALLOC_ZERO_BYTES_FREES
5876 else if (bytes == 0) {
5877 mspace_free(msp, oldmem);
5878 }
5879 #endif /* REALLOC_ZERO_BYTES_FREES */
5880 else {
5881 size_t nb = request2size(bytes);
5882 mchunkptr oldp = mem2chunk(oldmem);
5883 #if !FOOTERS
5884 mstate m = (mstate) msp;
5885 #else /* FOOTERS */
5886 mstate m = get_mstate_for(oldp);
5887 if (!ok_magic(m)) {
5888 USAGE_ERROR_ACTION(m, oldmem);
5889 return 0;
5890 }
5891 #endif /* FOOTERS */
5892 if (!PREACTION(m)) {
5893 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
5894 POSTACTION(m);
5895 if (newp != 0) {
5896 check_inuse_chunk(m, newp);
5897 mem = chunk2mem(newp);
5898 } else {
5899 mem = mspace_malloc(m, bytes);
5900 if (mem != 0) {
5901 size_t oc = chunksize(oldp) - overhead_for(oldp);
5902 memcpy(mem, oldmem, (oc < bytes) ? oc : bytes);
5903 mspace_free(m, oldmem);
5904 }
5905 }
5906 }
5907 }
5908 return mem;
5909} /* mspace_realloc */
5910
5911void* mspace_realloc_in_place(mspace msp, void *oldmem, size_t bytes)
5912{
5913 void *mem = 0;
5914
5915 if (oldmem != 0) {
5916 if (bytes >= MAX_REQUEST) {
5917            MALLOC_FAILURE_ACTION;
5918        } else {
5919 size_t nb = request2size(bytes);
5920 mchunkptr oldp = mem2chunk(oldmem);
5921 #if !FOOTERS
5922 mstate m = (mstate) msp;
5923 #else /* FOOTERS */
5924 mstate m = get_mstate_for(oldp);
5925 (void) msp; /* placate people compiling -Wunused */
5926 if (!ok_magic(m)) {
5927 USAGE_ERROR_ACTION(m, oldmem);
5928 return 0;
5929 }
5930 #endif /* FOOTERS */
5931 if (!PREACTION(m)) {
5932 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
5933 POSTACTION(m);
5934 if (newp == oldp) {
5935 check_inuse_chunk(m, newp);
5936 mem = oldmem;
5937 }
5938 }
5939 }
5940 }
5941 return mem;
5942}
5943
5944void* mspace_memalign(mspace msp, size_t alignment, size_t bytes)
5945{
5946 mstate ms = (mstate) msp;
5947
5948 if (!ok_magic(ms)) {
5949 USAGE_ERROR_ACTION(ms, ms);
5950 return 0;
5951 }
5952 if (alignment <= MALLOC_ALIGNMENT)
5953 return mspace_malloc(msp, bytes);
5954
5955 return internal_memalign(ms, alignment, bytes);
5956}
5957
5958void** mspace_independent_calloc(mspace msp, size_t n_elements,
5959 size_t elem_size, void *chunks[])
5960{
5961 size_t sz = elem_size; /* serves as 1-element array */
5962 mstate ms = (mstate) msp;
5963
5964 if (!ok_magic(ms)) {
5965 USAGE_ERROR_ACTION(ms, ms);
5966 return 0;
5967 }
5968 return ialloc(ms, n_elements, &sz, 3, chunks);
5969}
5970
5971void** mspace_independent_comalloc(mspace msp, size_t n_elements,
5972 size_t sizes[], void *chunks[])
5973{
5974 mstate ms = (mstate) msp;
5975
5976 if (!ok_magic(ms)) {
5977 USAGE_ERROR_ACTION(ms, ms);
5978 return 0;
5979 }
5980 return ialloc(ms, n_elements, sizes, 0, chunks);
5981}
5982
5983size_t mspace_bulk_free(mspace msp, void *array[], size_t nelem)
5984{
5985 return internal_bulk_free((mstate) msp, array, nelem);
5986}
5987
5988#if MALLOC_INSPECT_ALL
5989void mspace_inspect_all(mspace msp,
5990 void ( * handler )(void *start,
5991 void * end,
5992 size_t used_bytes,
5993 void * callback_arg),
5994 void * arg)
5995{
5996 mstate ms = (mstate) msp;
5997
5998 if (ok_magic(ms)) {
5999 if (!PREACTION(ms)) {
6000 internal_inspect_all(ms, handler, arg);
6001 POSTACTION(ms);
6002 }
6003 } else {
6004 USAGE_ERROR_ACTION(ms, ms);
6005 }
6006}
6007
6008#endif /* MALLOC_INSPECT_ALL */
6009
6010int mspace_trim(mspace msp, size_t pad)
6011{
6012 int result = 0;
6013 mstate ms = (mstate) msp;
6014
6015 if (ok_magic(ms)) {
6016 if (!PREACTION(ms)) {
6017 result = sys_trim(ms, pad);
6018 POSTACTION(ms);
6019 }
6020 } else {
6021 USAGE_ERROR_ACTION(ms, ms);
6022 }
6023 return result;
6024}
6025
6026#if !NO_MALLOC_STATS
6027void mspace_malloc_stats(mspace msp)
6028{
6029 mstate ms = (mstate) msp;
6030
6031 if (ok_magic(ms)) {
6032        internal_malloc_stats(ms);
6033    } else {
6034 USAGE_ERROR_ACTION(ms, ms);
6035 }
6036}
6037
6038#endif /* NO_MALLOC_STATS */
6039
6040size_t mspace_footprint(mspace msp)
6041{
6042 size_t result = 0;
6043 mstate ms = (mstate) msp;
6044
6045 if (ok_magic(ms)) {
6046 result = ms->footprint;
6047 } else {
6048 USAGE_ERROR_ACTION(ms, ms);
6049 }
6050 return result;
6051}
6052
6053size_t mspace_max_footprint(mspace msp)
6054{
6055 size_t result = 0;
6056 mstate ms = (mstate) msp;
6057
6058 if (ok_magic(ms)) {
6059 result = ms->max_footprint;
6060 } else {
6061 USAGE_ERROR_ACTION(ms, ms);
6062 }
6063 return result;
6064}
6065
6066size_t mspace_footprint_limit(mspace msp)
6067{
6068 size_t result = 0;
6069 mstate ms = (mstate) msp;
6070
6071 if (ok_magic(ms)) {
6072 size_t maf = ms->footprint_limit;
6073 result = (maf == 0) ? MAX_SIZE_T : maf;
6074 } else {
6075 USAGE_ERROR_ACTION(ms, ms);
6076 }
6077 return result;
6078}
6079
6080size_t mspace_set_footprint_limit(mspace msp, size_t bytes)
6081{
6082 size_t result = 0;
6083 mstate ms = (mstate) msp;
6084
6085 if (ok_magic(ms)) {
6086 if (bytes == 0)
6087 result = granularity_align(1); /* Use minimal size */
6088 if (bytes == MAX_SIZE_T)
6089 result = 0; /* disable */
6090 else
6091 result = granularity_align(bytes);
6092 ms->footprint_limit = result;
6093 } else {
6094 USAGE_ERROR_ACTION(ms, ms);
6095 }
6096 return result;
6097}
6098
6099#if !NO_MALLINFO
6100struct mallinfo mspace_mallinfo(mspace msp)
6101{
6102 mstate ms = (mstate) msp;
6103
6104 if (!ok_magic(ms)) {
6105 USAGE_ERROR_ACTION(ms, ms);
6106 }
6107 return internal_mallinfo(ms);
6108}
6109
6110#endif /* NO_MALLINFO */
6111
6112size_t mspace_usable_size(const void *mem)
6113{
6114 if (mem != 0) {
6115 mchunkptr p = mem2chunk(mem);
6116 if (is_inuse(p))
6117 return chunksize(p) - overhead_for(p);
6118 }
6119 return 0;
6120}
6121
6122int mspace_mallopt(int param_number, int value)
6123{
6124 return change_mparam(param_number, value);
6125}
6126
6127#endif /* MSPACES */
6128
6129
6130/* -------------------- Alternative MORECORE functions ------------------- */
6131
6132/*
6133 * Guidelines for creating a custom version of MORECORE:
6134 *
6135 * For best performance, MORECORE should allocate in multiples of pagesize.
6136 * MORECORE may allocate more memory than requested. (Or even less,
6137 * but this will usually result in a malloc failure.)
6138 * MORECORE must not allocate memory when given argument zero, but
6139 * instead return one past the end address of memory from previous
6140 * nonzero call.
6141 * For best performance, consecutive calls to MORECORE with positive
6142 * arguments should return increasing addresses, indicating that
6143 * space has been contiguously extended.
6144 * Even though consecutive calls to MORECORE need not return contiguous
6145 * addresses, it must be OK for malloc'ed chunks to span multiple
6146 * regions in those cases where they do happen to be contiguous.
6147 * MORECORE need not handle negative arguments -- it may instead
6148 * just return MFAIL when given negative arguments.
6149 * Negative arguments are always multiples of pagesize. MORECORE
6150 * must not misinterpret negative args as large positive unsigned
6151 * args. You can suppress all such calls from even occurring by defining
6152 * MORECORE_CANNOT_TRIM.
6153 *
6154 * As an example alternative MORECORE, here is a custom allocator
6155 * kindly contributed for pre-OSX macOS. It uses virtually but not
6156 * necessarily physically contiguous non-paged memory (locked in,
6157 * present and won't get swapped out). You can use it by uncommenting
6158 * this section, adding some #includes, and setting up the appropriate
6159 * defines above:
6160 *
6161 #define MORECORE osMoreCore
6162 *
6163 * There is also a shutdown routine that should somehow be called for
6164 * cleanup upon program exit.
6165 *
6166 #define MAX_POOL_ENTRIES 100
6167 #define MINIMUM_MORECORE_SIZE (64 * 1024U)
6168 * static int next_os_pool;
6169 * void *our_os_pools[MAX_POOL_ENTRIES];
6170 *
6171 * void *osMoreCore(int size)
6172 * {
6173 * void *ptr = 0;
6174 * static void *sbrk_top = 0;
6175 *
6176 * if (size > 0)
6177 * {
6178 * if (size < MINIMUM_MORECORE_SIZE)
6179 * size = MINIMUM_MORECORE_SIZE;
6180 * if (CurrentExecutionLevel() == kTaskLevel)
6181 * ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
6182 * if (ptr == 0)
6183 * {
6184 * return (void *) MFAIL;
6185 * }
6186 * // save ptrs so they can be freed during cleanup
6187 * our_os_pools[next_os_pool] = ptr;
6188 * next_os_pool++;
6189 * ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
6190 * sbrk_top = (char *) ptr + size;
6191 * return ptr;
6192 * }
6193 * else if (size < 0)
6194 * {
6195 * // we don't currently support shrink behavior
6196 * return (void *) MFAIL;
6197 * }
6198 * else
6199 * {
6200 * return sbrk_top;
6201 * }
6202 * }
6203 *
6204 * // cleanup any allocated memory pools
6205 * // called as last thing before shutting down driver
6206 *
6207 * void osCleanupMem(void)
6208 * {
6209 * void **ptr;
6210 *
6211 * for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
6212 * if (*ptr)
6213 * {
6214 * PoolDeallocate(*ptr);
6215 * ptr = 0;
6216 * }
6217 * }
6218 *
6219 */
6220
6221
6222/* -----------------------------------------------------------------------
6223 * History:
6224 * v2.8.6 Wed Aug 29 06:57:58 2012 Doug Lea
6225 * fix bad comparison in dlposix_memalign
6226 * don't reuse adjusted asize in sys_alloc
6227 * add LOCK_AT_FORK -- thanks to Kirill Artamonov for the suggestion
6228 * reduce compiler warnings -- thanks to all who reported/suggested these
6229 *
6230 * v2.8.5 Sun May 22 10:26:02 2011 Doug Lea (dl at gee)
6231 * Always perform unlink checks unless INSECURE
6232 * Add posix_memalign.
6233 * Improve realloc to expand in more cases; expose realloc_in_place.
6234 * Thanks to Peter Buhr for the suggestion.
6235 * Add footprint_limit, inspect_all, bulk_free. Thanks
6236 * to Barry Hayes and others for the suggestions.
6237 * Internal refactorings to avoid calls while holding locks
6238 * Use non-reentrant locks by default. Thanks to Roland McGrath
6239 * for the suggestion.
6240 * Small fixes to mspace_destroy, reset_on_error.
6241 * Various configuration extensions/changes. Thanks
6242 * to all who contributed these.
6243 *
6244 * V2.8.4a Thu Apr 28 14:39:43 2011 (dl at gee.cs.oswego.edu)
6245 * Update Creative Commons URL
6246 *
6247 * V2.8.4 Wed May 27 09:56:23 2009 Doug Lea (dl at gee)
6248 * Use zeros instead of prev foot for is_mmapped
6249 * Add mspace_track_large_chunks; thanks to Jean Brouwers
6250 * Fix set_inuse in internal_realloc; thanks to Jean Brouwers
6251 * Fix insufficient sys_alloc padding when using 16byte alignment
6252 * Fix bad error check in mspace_footprint
6253 * Adaptations for ptmalloc; thanks to Wolfram Gloger.
6254 * Reentrant spin locks; thanks to Earl Chew and others
6255 * Win32 improvements; thanks to Niall Douglas and Earl Chew
6256 * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options
6257 * Extension hook in malloc_state
6258 * Various small adjustments to reduce warnings on some compilers
6259 * Various configuration extensions/changes for more platforms. Thanks
6260 * to all who contributed these.
6261 *
6262 * V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)
6263 * Add max_footprint functions
6264 * Ensure all appropriate literals are size_t
6265 * Fix conditional compilation problem for some #define settings
6266 * Avoid concatenating segments with the one provided
6267 * in create_mspace_with_base
6268 * Rename some variables to avoid compiler shadowing warnings
6269 * Use explicit lock initialization.
6270 * Better handling of sbrk interference.
6271 * Simplify and fix segment insertion, trimming and mspace_destroy
6272 * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
6273 * Thanks especially to Dennis Flanagan for help on these.
6274 *
6275 * V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)
6276 * Fix memalign brace error.
6277 *
6278 * V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)
6279 * Fix improper #endif nesting in C++
6280 * Add explicit casts needed for C++
6281 *
6282 * V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)
6283 * Use trees for large bins
6284 * Support mspaces
6285 * Use segments to unify sbrk-based and mmap-based system allocation,
6286 * removing need for emulation on most platforms without sbrk.
6287 * Default safety checks
6288 * Optional footer checks. Thanks to William Robertson for the idea.
6289 * Internal code refactoring
6290 * Incorporate suggestions and platform-specific changes.
6291 * Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
6292 * Aaron Bachmann, Emery Berger, and others.
6293 * Speed up non-fastbin processing enough to remove fastbins.
6294 * Remove useless cfree() to avoid conflicts with other apps.
6295 * Remove internal memcpy, memset. Compilers handle builtins better.
6296 * Remove some options that no one ever used and rename others.
6297 *
6298 * V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
6299 * Fix malloc_state bitmap array misdeclaration
6300 *
6301 * V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
6302 * Allow tuning of FIRST_SORTED_BIN_SIZE
6303 * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
6304 * Better detection and support for non-contiguousness of MORECORE.
6305 * Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
6306 * Bypass most of malloc if no frees. Thanks To Emery Berger.
6307 * Fix freeing of old top non-contiguous chunk in sysmalloc.
6308 * Raised default trim and map thresholds to 256K.
6309 * Fix mmap-related #defines. Thanks to Lubos Lunak.
6310 * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
6311 * Branch-free bin calculation
6312 * Default trim and mmap thresholds now 256K.
6313 *
6314 * V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
6315 * Introduce independent_comalloc and independent_calloc.
6316 * Thanks to Michael Pachos for motivation and help.
6317 * Make optional .h file available
6318 * Allow > 2GB requests on 32bit systems.
6319 * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
6320 * Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
6321 * and Anonymous.
6322 * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
6323 * helping test this.)
6324 * memalign: check alignment arg
6325 * realloc: don't try to shift chunks backwards, since this
6326 * leads to more fragmentation in some programs and doesn't
6327 * seem to help in any others.
6328 * Collect all cases in malloc requiring system memory into sysmalloc
6329 * Use mmap as backup to sbrk
6330 * Place all internal state in malloc_state
6331 * Introduce fastbins (although similar to 2.5.1)
6332 * Many minor tunings and cosmetic improvements
6333 * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
6334 * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
6335 * Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
6336 * Include errno.h to support default failure action.
6337 *
6338 * V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
6339 * return null for negative arguments
6340 * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
6341 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
6342 * (e.g. WIN32 platforms)
6343 * Cleanup header file inclusion for WIN32 platforms
6344 * Cleanup code to avoid Microsoft Visual C++ compiler complaints
6345 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
6346 * memory allocation routines
6347 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
6348 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
6349 * usage of 'assert' in non-WIN32 code
6350 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
6351 * avoid infinite loop
6352 * Always call 'fREe()' rather than 'free()'
6353 *
6354 * V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
6355 * Fixed ordering problem with boundary-stamping
6356 *
6357 * V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
6358 * Added pvalloc, as recommended by H.J. Liu
6359 * Added 64bit pointer support mainly from Wolfram Gloger
6360 * Added anonymously donated WIN32 sbrk emulation
6361 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
6362 * malloc_extend_top: fix mask error that caused wastage after
6363 * foreign sbrks
6364 * Add linux mremap support code from HJ Liu
6365 *
6366 * V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
6367 * Integrated most documentation with the code.
6368 * Add support for mmap, with help from
6369 * Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
6370 * Use last_remainder in more cases.
6371 * Pack bins using idea from colin@nyx10.cs.du.edu
6372 * Use ordered bins instead of best-fit threshold
6373 * Eliminate block-local decls to simplify tracing and debugging.
6374 * Support another case of realloc via move into top
6375 * Fix error occurring when initial sbrk_base not word-aligned.
6376 * Rely on page size for units instead of SBRK_UNIT to
6377 * avoid surprises about sbrk alignment conventions.
6378 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
6379 * (raymond@es.ele.tue.nl) for the suggestion.
6380 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
6381 * More precautions for cases where other routines call sbrk,
6382 * courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
6383 * Added macros etc., allowing use in linux libc from
6384 * H.J. Lu (hjl@gnu.ai.mit.edu)
6385 * Inverted this history list
6386 *
6387 * V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
6388 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
6389 * Removed all preallocation code since under current scheme
6390 * the work required to undo bad preallocations exceeds
6391 * the work saved in good cases for most test programs.
6392 * No longer use return list or unconsolidated bins since
6393 * no scheme using them consistently outperforms those that don't
6394 * given above changes.
6395 * Use best fit for very large chunks to prevent some worst-cases.
6396 * Added some support for debugging
6397 *
6398 * V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
6399 * Removed footers when chunks are in use. Thanks to
6400 * Paul Wilson (wilson@cs.texas.edu) for the suggestion.
6401 *
6402 * V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
6403 * Added malloc_trim, with help from Wolfram Gloger
6404 * (wmglo@Dent.MED.Uni-Muenchen.DE).
6405 *
6406 * V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
6407 *
6408 * V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
6409 * realloc: try to expand in both directions
6410 * malloc: swap order of clean-bin strategy;
6411 * realloc: only conditionally expand backwards
6412 * Try not to scavenge used bins
6413 * Use bin counts as a guide to preallocation
6414 * Occasionally bin return list chunks in first scan
6415 * Add a few optimizations from colin@nyx10.cs.du.edu
6416 *
6417 * V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
6418 * faster bin computation & slightly different binning
6419 * merged all consolidations to one part of malloc proper
6420 * (eliminating old malloc_find_space & malloc_clean_bin)
6421 * Scan 2 returns chunks (not just 1)
6422 * Propagate failure in realloc if malloc returns 0
6423 * Add stuff to allow compilation on non-ANSI compilers
6424 * from kpv@research.att.com
6425 *
6426 * V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
6427 * removed potential for odd address access in prev_chunk
6428 * removed dependency on getpagesize.h
6429 * misc cosmetics and a bit more internal documentation
6430 * anticosmetics: mangled names in macros to evade debugger strangeness
6431 * tested on sparc, hp-700, dec-mips, rs6000
6432 * with gcc & native cc (hp, dec only) allowing
6433 * Detlefs & Zorn comparison study (in SIGPLAN Notices.)
6434 *
6435 * Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
6436 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
6437 * structure of old version, but most details differ.)
6438 *
6439 */
6440// GCOV_EXCL_STOP
Definition malloc.c:2734
struct malloc_segment * msegmentptr
Definition malloc.c:2530
#define insert_large_chunk(M, X, S)
Definition malloc.c:3741
#define SYS_ALLOC_PADDING
Definition malloc.c:2738
struct malloc_chunk * sbinptr
Definition malloc.c:2241
#define MIN_LARGE_SIZE
Definition malloc.c:2625
#define USE_NONCONTIGUOUS_BIT
Definition malloc.c:1795
#define compute_bit2idx(X, I)
Definition malloc.c:3023
#define HALF_MAX_SIZE_T
Definition malloc.c:1621
#define use_mmap(M)
Definition malloc.c:2704
#define CORRUPTION_ERROR_ACTION(m)
Definition malloc.c:2837
#define chunk2mem(p)
Definition malloc.c:2266
#define DESTROY_LOCK(l)
Definition malloc.c:1834
#define clear_pinuse(p)
Definition malloc.c:2312
#define unlink_chunk(M, P, S)
Definition malloc.c:3888
#define dlposix_memalign
Definition malloc.c:828
#define MAX_SIZE_T
Definition malloc.c:587
#define overhead_for(p)
Definition malloc.c:2340
#define malloc_getpagesize
Definition malloc.c:1595
static struct mallinfo internal_mallinfo(mstate m)
Definition malloc.c:3569
static struct malloc_params mparams
Definition malloc.c:2676
#define CHUNK_OVERHEAD
Definition malloc.c:2253
#define MORECORE_CONTIGUOUS
Definition malloc.c:671
#define MMAP_FOOT_PAD
Definition malloc.c:2259
#define insert_small_chunk(M, P, S)
Definition malloc.c:3659
static msegmentptr segment_holding(mstate m, char *addr)
Definition malloc.c:2750
struct malloc_tree_chunk * tchunkptr
Definition malloc.c:2455
#define ok_pinuse(p)
Definition malloc.c:3073
#define check_malloced_chunk(M, P, N)
Definition malloc.c:2853
#define dlmalloc_footprint_limit
Definition malloc.c:5471
#define mark_inuse_foot(M, p, s)
Definition malloc.c:3104
#define MALLOC_FAILURE_ACTION
Definition malloc.c:657
#define M_TRIM_THRESHOLD
Definition malloc.c:729
#define SIX_SIZE_T_SIZES
Definition malloc.c:1620
#define CMFAIL
Definition malloc.c:1645
#define dlcalloc
Definition malloc.c:824
#define insert_chunk(M, P, S)
Definition malloc.c:3884
static void add_segment(mstate m, char *tbase, size_t tsize, flag_t mmapped)
Definition malloc.c:4089
#define DEFAULT_TRIM_THRESHOLD
Definition malloc.c:683
#define prev_chunk(p)
Definition malloc.c:2322
#define idx2bit(i)
Definition malloc.c:2973
struct malloc_segment msegment
Definition malloc.c:2529
#define MAX_RELEASE_CHECK_RATE
Definition malloc.c:699
#define is_initialized(M)
Definition malloc.c:2690
#define MCHUNK_SIZE
Definition malloc.c:2248
#define assert(x)
Definition malloc.c:1461
#define dlmalloc_stats
Definition malloc.c:5500
static void internal_malloc_stats(mstate m)
Definition malloc.c:3614
static mchunkptr try_realloc_chunk(mstate m, mchunkptr p, size_t nb, int can_move)
Definition malloc.c:4915
#define dlvalloc
Definition malloc.c:831
#define check_malloc_state(M)
Definition malloc.c:2855
#define CALL_MREMAP(addr, osz, nsz, mv)
Definition malloc.c:1791
#define RTCHECK(e)
Definition malloc.c:3094
#define NSMALLBINS
Definition malloc.c:2620
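The entries above cross-reference the size and pointer arithmetic the allocator is built on. As a rough illustration only, the standalone sketch below mimics how pad_request/request2size (malloc.c:2276, 2280) round a user request up to a usable chunk size, and how chunk2mem/mem2chunk (malloc.c:2266, 2267) step between a chunk header and its payload. It assumes the default 8-byte minimum alignment and no FOOTERS; the names used here (pad_req, req2size, CHUNK2MEM, MEM2CHUNK) are placeholders, not the macros defined in malloc.c. The index continues below with the per-structure members.

/* Standalone illustration of dlmalloc-style request padding and
 * chunk <-> payload conversion. Not the library's code; see the macro
 * definitions at the listed lines for the real thing. */
#include <stdio.h>
#include <stddef.h>

#define ALIGNMENT   ((size_t)(2 * sizeof(void *)))   /* default MALLOC_ALIGNMENT */
#define ALIGN_MASK  (ALIGNMENT - (size_t)1)
#define OVERHEAD    (sizeof(size_t))                 /* one hidden size/status word (no FOOTERS) */
#define MIN_CHUNK   ((4 * sizeof(size_t) + ALIGN_MASK) & ~ALIGN_MASK)
#define MIN_REQ     (MIN_CHUNK - OVERHEAD - (size_t)1)

/* Counterpart of pad_request(): round a request up to an aligned chunk size. */
static size_t pad_req(size_t req) {
    return (req + OVERHEAD + ALIGN_MASK) & ~ALIGN_MASK;
}

/* Counterpart of request2size(): enforce the minimum chunk size. */
static size_t req2size(size_t req) {
    return (req < MIN_REQ) ? MIN_CHUNK : pad_req(req);
}

/* Counterparts of chunk2mem()/mem2chunk(): the user pointer sits two
 * size_t words past the start of the chunk header (prev_foot, head). */
#define CHUNK2MEM(p) ((void *)((char *)(p) + 2 * sizeof(size_t)))
#define MEM2CHUNK(m) ((void *)((char *)(m) - 2 * sizeof(size_t)))

int main(void) {
    size_t reqs[] = { 0, 1, 8, 24, 100 };
    for (size_t i = 0; i < sizeof reqs / sizeof reqs[0]; i++)
        printf("malloc(%3zu) -> chunk of %3zu bytes\n", reqs[i], req2size(reqs[i]));
    return 0;
}

On a 64-bit target this reproduces the figures quoted at the top of the file: a minimum chunk of 32 bytes and at most one word of per-chunk overhead for non-mmapped requests.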
#define MALLINFO_FIELD_TYPE  (malloc.h:79)
int fd  (impl.c:25)
MALLINFO_FIELD_TYPE hblkhd  (malloc.c:776)
MALLINFO_FIELD_TYPE ordblks  (malloc.c:773)
MALLINFO_FIELD_TYPE smblks  (malloc.c:774)
MALLINFO_FIELD_TYPE fsmblks  (malloc.c:778)
MALLINFO_FIELD_TYPE hblks  (malloc.c:775)
MALLINFO_FIELD_TYPE arena  (malloc.c:772)
MALLINFO_FIELD_TYPE keepcost  (malloc.c:781)
MALLINFO_FIELD_TYPE fordblks  (malloc.c:780)
MALLINFO_FIELD_TYPE uordblks  (malloc.c:779)
MALLINFO_FIELD_TYPE usmblks  (malloc.c:777)
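The MALLINFO_FIELD_TYPE members listed just above (malloc.c:772-781) appear to be the fields of the mallinfo structure returned by dlmallinfo (malloc.c:5492). The sketch below reconstructs that layout in definition-line order, with the conventional dlmalloc meaning of each field as a comment; MALLINFO_FIELD_TYPE is assumed to resolve to size_t here, and the struct name is a placeholder.

#include <stddef.h>

/* Reconstructed shape of the mallinfo result, as suggested by the
 * definition lines above; not the library's declaration. */
struct mallinfo_sketch {
    size_t arena;    /* non-mmapped bytes obtained from the system (772) */
    size_t ordblks;  /* number of free chunks                      (773) */
    size_t smblks;   /* unused by dlmalloc, reported as 0          (774) */
    size_t hblks;    /* unused by dlmalloc, reported as 0          (775) */
    size_t hblkhd;   /* bytes held in mmapped regions              (776) */
    size_t usmblks;  /* maximum total allocated space              (777) */
    size_t fsmblks;  /* unused by dlmalloc, reported as 0          (778) */
    size_t uordblks; /* total allocated (in-use) space             (779) */
    size_t fordblks; /* total free space                           (780) */
    size_t keepcost; /* top-most space releasable via malloc_trim  (781) */
};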
struct malloc_chunk * fd  (malloc.c:2235)
size_t prev_foot  (malloc.c:2233)
size_t head  (malloc.c:2234)
struct malloc_chunk * bk  (malloc.c:2236)
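The four members above (malloc.c:2233-2236) make up the basic chunk header, struct malloc_chunk, typedef'd as mchunk/mchunkptr at malloc.c:2239-2240. A minimal sketch of that layout with placeholder names; note that fd/bk are only meaningful while a chunk sits free in a bin, and for in-use chunks those words belong to the user payload.

#include <stddef.h>

struct malloc_chunk_sketch {
    size_t prev_foot;                 /* size of the previous chunk, if it is free      (2233) */
    size_t head;                      /* this chunk's size plus the in-use/pinuse bits  (2234) */
    struct malloc_chunk_sketch *fd;   /* forward link within a free bin                 (2235) */
    struct malloc_chunk_sketch *bk;   /* backward link within a free bin                (2236) */
};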
struct malloc_tree_chunk * child[2]  (malloc.c:2449)
struct malloc_tree_chunk * fd  (malloc.c:2446)
struct malloc_tree_chunk * parent  (malloc.c:2450)
bindex_t index  (malloc.c:2451)
struct malloc_tree_chunk * bk  (malloc.c:2447)
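The members above (malloc.c:2446-2451) belong to struct malloc_tree_chunk (tchunk/tchunkptr/tbinptr, malloc.c:2454-2456), the node type for the tree bins that hold large free chunks. A sketch assuming the usual dlmalloc layout, in which the leading fields stay compatible with malloc_chunk so a tree node can be handled as an ordinary chunk:

#include <stddef.h>

typedef unsigned int bindex_t_sketch;   /* stands in for bindex_t (malloc.c:2242) */

struct malloc_tree_chunk_sketch {
    size_t prev_foot;                           /* as in malloc_chunk                  */
    size_t head;                                /* as in malloc_chunk                  */
    struct malloc_tree_chunk_sketch *fd;        /* same-size peer list          (2446) */
    struct malloc_tree_chunk_sketch *bk;        /* same-size peer list          (2447) */
    struct malloc_tree_chunk_sketch *child[2];  /* bitwise-trie children        (2449) */
    struct malloc_tree_chunk_sketch *parent;    /* trie parent                  (2450) */
    bindex_t_sketch index;                      /* tree-bin index               (2451) */
};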
struct malloc_segment * next  (malloc.c:2522)
size_t size  (malloc.c:2521)
flag_t sflags  (malloc.c:2523)
char * base  (malloc.c:2520)
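The members above (malloc.c:2520-2523) form struct malloc_segment (msegment/msegmentptr, malloc.c:2529-2530), which records one contiguous region obtained from MORECORE or mmap; segments are chained through next and walked by segment_holding and release_unused_segments. A sketch with placeholder names:

#include <stddef.h>

typedef unsigned int flag_t_sketch;   /* stands in for flag_t (malloc.c:2244) */

struct malloc_segment_sketch {
    char *base;                          /* base address of the region   (2520) */
    size_t size;                         /* allocated size in bytes      (2521) */
    struct malloc_segment_sketch *next;  /* next segment in the chain    (2522) */
    flag_t_sketch sflags;                /* mmap / extern flag bits      (2523) */
};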
binmap_t treemap  (malloc.c:2632)
size_t exts  (malloc.c:2652)
mchunkptr dv  (malloc.c:2636)
mchunkptr smallbins[(NSMALLBINS+1) *2]  (malloc.c:2641)
size_t magic  (malloc.c:2640)
size_t topsize  (malloc.c:2634)
size_t release_checks  (malloc.c:2639)
void * extp  (malloc.c:2651)
size_t trim_check  (malloc.c:2638)
mchunkptr top  (malloc.c:2637)
binmap_t smallmap  (malloc.c:2631)
flag_t mflags  (malloc.c:2646)
size_t footprint  (malloc.c:2643)
msegment seg  (malloc.c:2650)
size_t max_footprint  (malloc.c:2644)
size_t footprint_limit  (malloc.c:2645)
size_t dvsize  (malloc.c:2633)
char * least_addr  (malloc.c:2635)
tbinptr treebins[NTREEBINS]  (malloc.c:2642)
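The members above (malloc.c:2631-2652) are the fields of struct malloc_state (mstate, malloc.c:2655), the per-arena bookkeeping that _gm_/gm (malloc.c:2684-2685) instantiate for the global allocator. The sketch below lists them in definition-line order. It is a reconstruction under a few assumptions: the bin array sizes use dlmalloc's usual NSMALLBINS and NTREEBINS of 32, pointer members use incomplete placeholder types, and the unlisted lines 2647-2649 presumably hold the lock when USE_LOCKS is enabled.

#include <stddef.h>

struct chunk_s;     /* stands in for malloc_chunk      (malloc.c:2239) */
struct tchunk_s;    /* stands in for malloc_tree_chunk (malloc.c:2454) */

struct segment_s {  /* stands in for the embedded msegment (malloc.c:2529) */
    char *base; size_t size; struct segment_s *next; unsigned int sflags;
};

struct malloc_state_sketch {
    unsigned int smallmap;                   /* bitmap of non-empty small bins        (2631) */
    unsigned int treemap;                    /* bitmap of non-empty tree bins         (2632) */
    size_t dvsize;                           /* size of the designated-victim chunk   (2633) */
    size_t topsize;                          /* size of the top (wilderness) chunk    (2634) */
    char *least_addr;                        /* lowest address obtained so far        (2635) */
    struct chunk_s *dv;                      /* designated victim                     (2636) */
    struct chunk_s *top;                     /* top chunk                             (2637) */
    size_t trim_check;                       /* threshold consulted by sys_trim       (2638) */
    size_t release_checks;                   /* countdown to release_unused_segments  (2639) */
    size_t magic;                            /* consistency-check value               (2640) */
    struct chunk_s *smallbins[(32 + 1) * 2]; /* small-bin list heads                  (2641) */
    struct tchunk_s *treebins[32];           /* tree-bin roots                        (2642) */
    size_t footprint;                        /* bytes currently obtained from system  (2643) */
    size_t max_footprint;                    /* high-water mark of footprint          (2644) */
    size_t footprint_limit;                  /* zero means no limit                   (2645) */
    unsigned int mflags;                     /* per-state option flags                (2646) */
    /* lines 2647-2649 are not listed above; presumably the lock under USE_LOCKS */
    struct segment_s seg;                    /* first, embedded segment record        (2650) */
    void *extp;                              /* unused, available for extensions      (2651) */
    size_t exts;                             /* companion word for extensions         (2652) */
};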
size_t magic  (malloc.c:2668)
size_t mmap_threshold  (malloc.c:2671)
flag_t default_mflags  (malloc.c:2673)
size_t granularity  (malloc.c:2670)
size_t page_size  (malloc.c:2669)
size_t trim_threshold  (malloc.c:2672)
#define HAVE_MMAP
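The size_t/flag_t members at malloc.c:2668-2673 belong to struct malloc_params, the process-wide tunables that init_mparams (malloc.c:3160) fills in and change_mparam (malloc.c:3251) updates through the mallopt-style parameters M_TRIM_THRESHOLD, M_GRANULARITY, and M_MMAP_THRESHOLD (malloc.c:729-733). Below is a sketch with placeholder names plus a hypothetical usage note; the actual exported entry point (mallopt vs. dlmallopt) depends on how the prefix macros at malloc.c:824-845 are configured in this build.

#include <stddef.h>

struct malloc_params_sketch {
    size_t magic;                 /* sentinel written by init_mparams()        (malloc.c:2668) */
    size_t page_size;             /* system page size                          (malloc.c:2669) */
    size_t granularity;           /* unit in which sys_alloc grows memory      (malloc.c:2670) */
    size_t mmap_threshold;        /* requests >= this are served by mmap       (malloc.c:2671) */
    size_t trim_threshold;        /* top size that makes sys_trim worthwhile   (malloc.c:2672) */
    unsigned int default_mflags;  /* initial mflags for a fresh mstate         (malloc.c:2673) */
};

/* Hypothetical usage, assuming the mallopt-style entry point is exported
 * under its dl-prefixed name in this build:
 *
 *     dlmallopt(M_MMAP_THRESHOLD, 1 << 20);   // route requests >= 1 MiB to mmap
 *
 * change_mparam() validates the parameter number and stores the new value
 * into the corresponding field of the static mparams object (malloc.c:2676). */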