Caching leads to important memory leak #933

@mstimberg

Description

Simulations that repeatedly recreate all objects from scratch show a significant memory leak with the new caching system: the cache prevents the objects from being garbage-collected. This type of simulation is often used for parameter explorations or optimization problems, since it can be easier to write than using the store/restore mechanism.
Here's a simple script that repeatedly runs the CUBA example:

from brian2 import *

def run_sim():
    taum = 20 * ms; taue = 5 * ms; taui = 10 * ms
    Vt = -50 * mV; Vr = -60 * mV; El = -49 * mV
    eqs = '''
    dv/dt  = (ge+gi-(v-El))/taum : volt (unless refractory)
    dge/dt = -ge/taue : volt
    dgi/dt = -gi/taui : volt
    '''
    P = NeuronGroup(4000, eqs, threshold='v>Vt', reset='v = Vr',
                    refractory=5 * ms, method='linear')
    P.v = 'Vr + rand() * (Vt - Vr)'
    Ce = Synapses(P, P, on_pre='ge += (60 * 0.27 / 10) * mV')
    Ci = Synapses(P, P, on_pre='gi += (-20 * 4.5 / 10) * mV')
    Ce.connect('i<3200', p=0.02)
    Ci.connect('i>=3200', p=0.02)
    s_mon = SpikeMonitor(P)
    run(1 * second)
    return s_mon.num_spikes


for _ in range(100):
    spikes = run_sim()
    print('Total number of spikes: %d' % spikes)

Plotting the memory usage (via memory_profiler) shows a quite obvious problem (though the cached version at least runs faster ;) ):
(Figure: mem_comparison — memory usage over repeated runs, with and without caching)

I've looked into this a bit and I think I've fixed the main part of the issue, but I'll test a bit more before opening the PR.
