Massive memory overhead: Numbers in Python and how NumPy helps

Let’s say you want to store a list of integers in Python:

list_of_numbers = []
for i in range(1000000):
    list_of_numbers.append(i)

Each of those numbers easily fits in a 64-bit integer, so one would hope Python would store those million integers in no more than ~8MB: a million 8-byte values.

In fact, Python uses more like 35MB of RAM to store these numbers. Why? Because Python integers are objects, and objects have a lot of memory overhead.
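
If you want a rough cross-check of that number without a full memory profiler, the standard library's tracemalloc module can measure it. Here's a minimal sketch; the exact total varies a bit across CPython versions and platforms:

import tracemalloc

tracemalloc.start()
list_of_numbers = []
for i in range(1000000):
    list_of_numbers.append(i)
current, peak = tracemalloc.get_traced_memory()
# Roughly 35MB on a 64-bit CPython; the exact number varies by version.
print(f"{current / 1e6:.1f} MB")
tracemalloc.stop()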

Let’s see what’s going on under the hood, and then how using NumPy can get rid of this overhead.

Measuring memory usage

If we profile the code snippet above with a memory profiler, the allocations break down as follows:

  • ~8MB was allocated for the list.
  • ~28MB was allocated for the integers.

Side note: You would get the same memory usage if you did list(range(1000000)), but I structured the code this way to make it clearer where each chunk of memory usage came from.

The list taking that much memory isn’t surprising–a Python list is essentially an array of pointers to arbitrary Python objects. Our list has a million entries, and pointers on modern 64-bit machines take 8 bytes each, so we’re back to 8MB of RAM.

But why are the integers themselves taking 28MB?

What makes an integer in Python

We can measure the memory usage of Python objects in bytes using sys.getsizeof():

>>> import sys
>>> sys.getsizeof(123)
28
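
Summing sys.getsizeof() over the whole list gives a rough cross-check of the profiler’s numbers. A quick sketch; exact totals vary slightly by CPython version, and because append() over-allocates the list:

import sys

list_of_numbers = []
for i in range(1000000):
    list_of_numbers.append(i)

# The list object itself: a million 8-byte pointers, plus some over-allocation.
print(sys.getsizeof(list_of_numbers))                  # ~8MB
# The integer objects the list points to:
print(sum(sys.getsizeof(n) for n in list_of_numbers))  # ~28MB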

Since a small integer uses 28 bytes, now we know why a million integers take 28MB of RAM. But why do Python integers take so much memory?

Every object in the default Python implementation, CPython, is represented in memory by a PyObject C struct or one of its variants. Here’s what PyObject looks like:

typedef struct _object {
    _PyObject_HEAD_EXTRA    /* empty in normal release builds */
    Py_ssize_t ob_refcnt;   /* the object's reference count */
    PyTypeObject *ob_type;  /* pointer to the object's type */
} PyObject;

So any Python object includes at a minimum:

  • A reference count (the ob_refcnt field), used to track when the object can be freed.
  • A pointer to the object’s type (the ob_type field).

On a 64-bit operating system–the default these days–those two fields mean a minimum of 16 additional bytes of overhead for every single object.

Even if your data would fit into a single byte, that doesn’t matter: you still suffer from that overhead.
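
You can see both the fixed overhead and the variable-sized payload by measuring integers of different magnitudes. A quick sketch; the byte counts below are from a typical 64-bit CPython and shift slightly between versions:

import sys

# The ~16 bytes of PyObject overhead are always there; the rest of the
# 28 bytes is the integer-specific part: bookkeeping about how many
# digits the number has, plus the digits holding the actual value.
# Bigger values need more digits:
print(sys.getsizeof(123))     # 28
print(sys.getsizeof(2**30))   # 32
print(sys.getsizeof(2**100))  # 40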

Note: Whether or not any particular tool or technique will help depends on where the actual memory bottlenecks are in your software.


Switching to NumPy

To save you that overhead, NumPy arrays of numbers don’t store references to Python objects the way a normal Python list does. Instead, a NumPy array stores just the numbers themselves.

That means you don’t have to pay that 16+ byte overhead for every single number in the array.

For example, if we profile the memory usage for this snippet of code:

import numpy as np

arr = np.zeros((1000000,), dtype=np.uint64)
for i in range(1000000):
    arr[i] = i

We can see that the memory usage for creating the array was just 8MB, as we expected, plus the memory overhead of importing NumPy itself.


Side note: This isn’t how you would write idiomatic or efficient NumPy code; I’m just structuring it this way for educational purposes.
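
For reference, here’s a more idiomatic way to build the same array, along with a direct look at how much memory its data takes. A sketch, using only arr.nbytes (the size of the raw data buffer, not counting the array object’s small fixed header):

import numpy as np

# Idiomatic equivalent of the loop above: build the array in one call.
arr = np.arange(1000000, dtype=np.uint64)

# The data buffer is exactly 8 bytes per number, with no per-number
# object overhead:
print(arr.nbytes)  # 8000000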

NumPy to the rescue

Going from 8MB to 35MB is probably something you can live with, but going from 8GB to 35GB might be more memory than you can spare. So while a lot of the benefit of using NumPy is the CPU performance improvements you can get for numeric operations, another reason it’s so useful is the reduced memory overhead.

If you’re processing large lists of numbers in memory, make sure you’re using NumPy arrays. And if memory usage is still too high, you can start looking at ways of reducing memory usage even more, like in-memory compression.

Learn even more techniques for reducing memory usage—read the rest of the Larger-than-memory datasets guide for Python.