Monday, January 14, 2013

Memory management

My college professors constantly encouraged us to "Go back to first principles."

Consider a computing task that runs for some amount of time and then halts. If the task dynamically allocates more memory than is available, it must re-use some (or crash!). This is true irrespective of the means of re-use, whether manual deallocation as in malloc/free or automatic deallocation with a garbage collector.

The amount of allocated memory at any point in the program is the difference between the total amount allocated up to that point and the total amount deallocated up to that same point.

allocation_final = allocation_total - free_total

or

free_total = allocation_total - allocation_final
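
To make the bookkeeping concrete, here is a minimal Python sketch (my own, not from any paper) that tallies a toy allocation trace; the identity holds by construction:

    # A toy trace of allocations (positive bytes) and frees (negative).
    # The identity allocation_final = allocation_total - free_total is
    # pure bookkeeping: every byte is counted once on each side.
    trace = [+100, +50, -80, +30, -60]

    allocation_total = sum(n for n in trace if n > 0)   # 180
    free_total = sum(-n for n in trace if n < 0)        # 140
    allocation_final = allocation_total - free_total    # 40

    print(allocation_total, free_total, allocation_final)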

If we free memory in discrete amounts, then the average amount freed each time is simply the total amount freed divided by the number of times we freed memory (by definition). The average amount retained each time is easily computed by subtracting the average amount freed from the memory size. This is all simple arithmetic.

In particular,

free_total / deallocation_count = free_mean (by definition)

and at deallocation,

memory_size - free_mean = retained_mean (ditto)
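
As an illustration, here is a small Python model (a toy of my own, not MIT Scheme's collector) of a collector that frees memory whenever the heap fills; the two means then fall out of the definitions:

    memory_size = 1000        # heap size in bytes (made-up number)
    live = 200                # bytes surviving each collection

    free_amounts = []
    heap_used = 0
    for _ in range(10_000):   # allocate one byte at a time
        if heap_used == memory_size:
            free_amounts.append(memory_size - live)  # collect the garbage
            heap_used = live
        heap_used += 1

    deallocation_count = len(free_amounts)
    free_mean = sum(free_amounts) / deallocation_count   # 800.0
    retained_mean = memory_size - free_mean              # 200.0
    print(deallocation_count, free_mean, retained_mean)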

The total amount of memory that a task uses might vary when the task runs at different times and different memory settings. A task could use reflection to adjust memory consumption according to resources, or the task might change resource consumption because of external circumstances such as time or memory alignment. But we expect that simple deterministic tasks will consume resources in a repeatable manner. If that is true, then the total allocation and the amount of reachable storage at any time should not depend upon the amount of memory. If you reduce the amount of memory, you'll just need to recycle it that much more to make up the difference.

For a fixed total amount of freed memory, the product of the deallocation count and free_mean is constant. So for a task that frees a certain amount of memory, a plot of the deallocation count against free_mean will lie on a hyperbola. A plot of the deallocation count against the memory size will not lie exactly on a hyperbola, but it will be pretty close, especially if the memory size is quite a bit larger than retained_mean. Again, this is simply a consequence of arithmetic. Whether we use a garbage collector to deallocate or some other memory management technique doesn't matter.
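
The arithmetic fits in a few lines of Python (the numbers are made up, chosen only for illustration). The first product is exactly constant; the second is close to constant whenever the heap dwarfs retained_mean:

    free_total = 10_000_000       # bytes the task must recycle
    retained_mean = 1_000         # live bytes at each collection

    for memory_size in (2_000, 4_000, 8_000, 16_000, 32_000):
        free_mean = memory_size - retained_mean
        count = free_total / free_mean          # deallocation count
        print(memory_size,
              round(count),
              count * free_mean,                # exactly free_total
              round(count * memory_size))       # nearly constant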

In The Economics of Garbage Collection, Singer and Jones investigate garbage collection from a microeconomics perspective. They introduce what they call the allocation curve of a program: a benchmark program is run with several different heap sizes, and the number of garbage collections is counted for each run. The red curves in their figure show the results: we see the expected pseudo-hyperbolas. (The blue lines are elasticity, which is not relevant here.)

Singer and Jones created these charts by empirically measuring the execution of benchmark programs. But it is pointless to "measure" an arithmetic relationship. The positions of the points on the charts are deterministically located where the deallocation count times the amount freed is constant. If a point does not fall on the expected curve, this can only be because the total allocation or retained_mean has changed. One of the desiderata of a good GC benchmark is that the total allocation and retained_mean be invariant across different heap sizes. The benchmarks used by Singer and Jones are not invariant across different heap sizes (or the pseudo-hyperbolas would be perfect), but the variations are generally small, so the charts are pretty close.

If we partition our allocation into "small" and "large" allocations, then we will have a pseudo-hyperbola for each case. Singer and Jones's figure 5 illustrates this.

Singer and Jones note that some benchmarks have a pronounced "knee" in the curve. In this blog post I show that the appearance of a "knee" is an artifact of presentation, and not a property of the data.
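
A quick way to see this is to draw the same hyperbola twice, once on linear axes and once on log-log axes. The sketch below (made-up numbers, using matplotlib) shows the "knee" appear and vanish purely with the change of scale:

    import numpy as np
    import matplotlib.pyplot as plt

    c = 10_000_000                         # count * freed, a constant
    x = np.linspace(2_000, 64_000, 200)    # heap sizes
    y = c / x                              # GC counts

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
    ax1.plot(x, y)
    ax1.set_title("linear axes: apparent knee")
    ax2.loglog(x, y)
    ax2.set_title("log-log axes: straight line")
    for ax in (ax1, ax2):
        ax.set_xlabel("heap size")
        ax.set_ylabel("GC count")
    plt.tight_layout()
    plt.show()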

Singer and Jones note that the upper extreme of the curve is the point at which the amount of memory a program has available is equal to what the program needs; no deallocation need be done at all. The lower extreme is the point where the amount of memory given is only barely above the maximum amount of live storage needed. Singer and Jones note that the curve approaches an asymptote, but they do not identify the asymptote as the value of retained_mean. (Of course the curve does not reach the asymptote unless the peak memory usage is the same as the mean.)

In this post, I plot the GC count vs. the memory size for various runs of MIT Scheme. The plot is in log-log space, so the product of the GC count and memory size falls on a line rather than a hyperbola. The axes are swapped from the convention of Singer and Jones, so the asymptote is vertical rather than horizontal. A non-zero value of retained_mean displaces the hyperbola to the right and makes the upper end approach a vertical asymptote rather than intersect the axis. The blue "unadjusted" line is simply a plot of GC count * memory size. The green "adjusted" line is GC count * (memory size - retained_mean).
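
For readers who want to reproduce the shape of the plot, here is a sketch of the same construction in Python (my reconstruction with made-up numbers, not the actual MIT Scheme data):

    import numpy as np
    import matplotlib.pyplot as plt

    free_total = 10_000_000
    retained_mean = 1_000
    memory_size = np.linspace(1_100, 64_000, 400)
    gc_count = free_total / (memory_size - retained_mean)

    # Unadjusted: GC count against memory size.  The curve bends
    # toward a vertical asymptote at memory_size = retained_mean.
    plt.loglog(memory_size, gc_count, "b", label="unadjusted")
    # Adjusted: subtracting retained_mean straightens it into a line.
    plt.loglog(memory_size - retained_mean, gc_count, "g", label="adjusted")
    plt.xlabel("memory size")
    plt.ylabel("GC count")
    plt.legend()
    plt.show()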

