
FIX: close mem leak for repeated draw #11972


Merged
merged 1 commit into from
Sep 6, 2018

Conversation

jklymak
Member

@jklymak jklymak commented Aug 29, 2018

PR Summary

Closes #11956

See the test in #11956: repeated drawing kept growing transform._parents without bound with dead weak refs. These aren't big individually, but they add up if you run for quite a while...
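
For illustration, a minimal standalone sketch of the failure mode (toy code with an assumed Node class, not matplotlib internals): plain weakrefs stored in a dict are never removed when their referents die, so the dict keeps growing. The fix below registers a callback on each weakref so the corresponding entry is popped when the parent is collected.

import weakref

class Node:
    pass

parents = {}                                # stands in for child._parents
live = [Node() for _ in range(1000)]
for p in live:
    parents[id(p)] = weakref.ref(p)         # same pattern as set_children
del live                                    # all the "parents" are collected...
dead = sum(ref() is None for ref in parents.values())
print(len(parents), dead)                   # 1000 entries, all of them dead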

PR Checklist

  • Has Pytest style unit tests
  • Code is Flake 8 compliant
  • New features are documented, with examples if plot related
  • Documentation is sphinx and numpydoc compliant
  • Added an entry to doc/users/next_whats_new/ if major new feature (follow instructions in README.rst there)
  • Documented in doc/api/api_changes.rst if API changed in a backward-incompatible way

@anntzer
Contributor

anntzer commented Aug 29, 2018

May be worth checking whether this helps #9141 (unlikely, but also a matter of dead weakref leakage)...

@QuLogic
Member

QuLogic commented Aug 29, 2018

I believe WeakValueDictionary was explicitly removed for performance reasons (#5664); I'm not sure whether this is still a concern.

@anntzer
Contributor

anntzer commented Aug 29, 2018

I can reproduce the performance issue:

from io import BytesIO
import time
import matplotlib; matplotlib.use("agg"); matplotlib.rcdefaults()
from matplotlib import pyplot as plt
import numpy as np

dts = []
for _ in range(10):
    start = time.perf_counter()
    fig, ax = plt.subplots(8, 8)
    fig.savefig(BytesIO())
    dt = time.perf_counter() - start
    dts.append(dt)
    print("dt", dt)
    plt.close("all")
print("median", np.median(dts))

goes from ~1.6s to ~2.0s (apparently we spend a lot of time fiddling with transforms...), so that's quite significant.

Perhaps we can use the callback arg to weakref.ref (https://docs.python.org/3/library/weakref.html#weakref.ref) to manually prune the dict ourselves when the weakref is about to be deleted (you need to key on the id() of the weakref, not of the object itself, as the object is not available anymore when the callback is called).
Of course then the question (if that is indeed faster) is where the overhead of WeakValueDictionary comes from...

@jklymak
Member Author

jklymak commented Aug 29, 2018

Ha, figures.

I spent a bit of time trying to figure out why there were so many unique transforms being created, but gave up. That whole machinery is opaque, at least to me.

@anntzer
Contributor

anntzer commented Aug 29, 2018

I have a grand plan to rewrite the whole thing in C++ at some point :p

@tacaswell tacaswell added this to the v3.1 milestone Aug 29, 2018
@jklymak
Member Author

jklymak commented Aug 29, 2018

So if I do

for child in children:
    for key in child._parents.keys():
        if child._parents[key] is None:
            child._parents.pop(key)
    child._parents[id(self)] = weakref.ref(self)

Then I get median dt = 0.814 s

If I do it without, I get the memory leak and dt = 0.764 s, so only a 7.5% increase in execution time, versus the WeakValueDictionary approach, which has a 30% increase (I got 1.012 s).

I think it's pretty crazy that a bit of transform bookkeeping can add so much to the draw time.

But is the above modest increase in bookkeeping time OK?

@WeatherGod
Member

WeatherGod commented Aug 29, 2018 via email

@anntzer
Contributor

anntzer commented Aug 29, 2018

My proposal above was to do something along the lines of (untested):

ref = weakref.ref(self, lambda ref: child._parents.pop(id(ref)))
child._parents[id(ref)] = ref

which should auto-remove the dead weakrefs.
On the other hand, this means there's a reference cycle from child back to itself (via the closure in the lambda), so we're just dependent on the GC at that point.

@jklymak
Member Author

jklymak commented Aug 29, 2018

@WeatherGod Actually, the code above still has the memory leak in it. Needs to be longer:

        for child in children:
            badkeys = []
            for key in child._parents.keys():
                if child._parents[key]() is None:
                    badkeys += [key]
            for key in badkeys:
                child._parents.pop(key)
            child._parents[id(self)] = weakref.ref(self)

That gives a mildly longer run time: 0.8348 s

@anntzer, your solution just gives a KeyError, and I don't follow it well enough to know how to fix it:

ref = weakref.ref(self, lambda ref: child._parents.pop(id(ref)))
KeyError: (4581245960,)

@tacaswell
Member

How about:

ref = weakref.ref(self, lambda ref, sid=id(self), target=child._parents: target.pop(sid))
child._parents[id(self)] = ref
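
(Binding sid=id(self) and target=child._parents as default arguments evaluates them when the lambda is created, so the callback does not close over self or child, and it pops the entry under the same id(self) key it was stored with.)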

@jklymak
Member Author

jklymak commented Aug 29, 2018

@tacaswell that seems to work, and has dt=0.782, so only imperceptibly slower than without that change.

@jklymak jklymak force-pushed the fix-mem-leak-repeated=draw branch 3 times, most recently from 3882397 to 2f0a96c Compare August 29, 2018 23:03
@jklymak
Member Author

jklymak commented Aug 29, 2018

This doesn't fix #9141 unfortunately, but maybe someone had a similar misapprehension about what happens to weakrefs that are put into a dictionary...

@efiring
Member

efiring commented Sep 3, 2018

This seems reasonable, but I think it leaves pickling/unpickling incomplete, because the new deletion behavior is lost. This would not be the case if a WeakValueDictionary were used throughout. WeakValueDictionary lookups are indeed slow, though. Here is a comparison between a WeakValueDictionary, yy, and a normal dictionary, xx, each with a single entry:

In [42]: %timeit yy['a']
479 ns ± 37.6 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

In [43]: %timeit xx['a']
45.7 ns ± 2.08 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)

In [44]: xx == yy
True

The numbers are essentially unchanged for 100-entry dictionaries.
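
For reference, a rough standalone version of that comparison (a sketch; the names xx/yy match the comment above, a trivial Obj class stands in for a weak-referenceable value, and exact numbers will vary by machine):

import timeit
import weakref

class Obj:
    pass

val = Obj()                           # WeakValueDictionary values must be weak-referenceable
xx = {'a': val}                       # normal dict
yy = weakref.WeakValueDictionary(xx)  # weak-value dict with the same single entry

print("dict lookup:               ", timeit.timeit(lambda: xx['a'], number=1_000_000))
print("WeakValueDictionary lookup:", timeit.timeit(lambda: yy['a'], number=1_000_000))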

@efiring
Member

efiring commented Sep 3, 2018

Looking at the code for WeakValueDictionary gives me a mild suspicion that doing without it involves a risk of extremely rare errors. Here is an excerpt from the __init__:

        def remove(wr, selfref=ref(self), _atomic_removal=_remove_dead_weakref):
            self = selfref()
            if self is not None:
                if self._iterating:
                    self._pending_removals.append(wr.key)
                else:
                    # Atomic removal is necessary since this function
                    # can be called asynchronously by the GC
                    _atomic_removal(d, wr.key)
        self._remove = remove

@jklymak
Member Author

jklymak commented Sep 4, 2018

For my edification, though, why do we keep getting so many child/parent pairs for these transforms? It seems like something we could readily deal with explicitly when the axes is cleared or destroyed. But maybe I'm being naive...

@efiring
Member

efiring commented Sep 4, 2018

I've never understood the transform framework well and I still don't, but here is the way it looks to me right now. First, the "parent-child" terminology is misleading; "parents" are transforms that depend on their "children", so that if a "child" is modified, the immediate parents, and their parents, etc. must be marked invalid, because the net result of the chain to that point will be modified and must be recalculated. Second, in the chain of transform steps taking a line in data coordinates to its representation in display coordinates, there is a series of links that does not need to change for each new line. The Axes box and the canvas dimensions aren't changing, for example. Therefore, there are links in the transform chain that are reused, and one end of the reused chain becomes a "child" that gets a new "parent" (starting a fresh sequence of links) each time a new line is added or an old line is replaced. Unless the old, no-longer-used "parents" are deleted from the _parents dictionary of that "child", the dictionary keeps growing.

Although I have described the transform sequence as a chain, it can be more complicated. In particular, two chains, one for x and one for y, can be combined into a blended transform (a "parent"). Each of the respective "child" ends of the x and y chains then has a _parents dictionary with an entry pointing to the same parent, the node representing the merger of the two chains.
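
A small sketch of that dependency direction (the variable names are made up, and _parents is a private attribute, so this relies on the current implementation); blended_transform_factory is the public way to build the blended "parent" described above:

import matplotlib.transforms as mtransforms

x_child = mtransforms.Affine2D().scale(2.0)           # end of the x chain
y_child = mtransforms.Affine2D().translate(0.0, 1.0)  # end of the y chain
blended = mtransforms.blended_transform_factory(x_child, y_child)

# The blended "parent" registered itself with each "child" it depends on,
# so invalidating either child propagates up to the blended transform.
print(id(blended) in x_child._parents)   # True
print(id(blended) in y_child._parents)   # True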

@jklymak
Member Author

jklymak commented Sep 4, 2018

I still think there is a root cause here that I don't understand.

If I check the transform for the line, which is the only thing that is changing each draw in the test code, then I get the same transform each time. It's not being invalidated, and it's not changing, so its children never get set.

So some other transform keeps resetting its own children in this case (or making a new "parent"). My strong suspicion is that it is the data cursor (or whatever we call the data display in the bottom left corner). I'm not sure where to get its transform info, but it doesn't seem like it should be calling transforms.set_children all the time either... I suspect somewhere it keeps needlessly making new versions of its transform information.

@efiring
Member

efiring commented Sep 4, 2018

What is the test code that you are using?

@efiring
Member

efiring commented Sep 4, 2018

I suspect the problem will turn out to be pervasive, not restricted to one little plot element. I can trigger massive transform node generation, and get a glimpse of one source, with the following procedure.

First, on master, edit transforms.py, inserting 2 lines after line 170 so that the body of set_children is

        for child in children:
            if len(child._parents) > 99:
                raise Exception
            child._parents[id(self)] = weakref.ref(self)

Next, in ipython, execute

import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.cla()
ax.cla()

That should be enough to raise the exception. Then you can see from the long traceback the chain of function calls that is involved in the transform node generation, and you can use the %debug magic to see the child at which the Exception was raised, count the number of live references in _parents (16 in this case), etc.

The number "99" is obviously arbitrary. If you use a number like 50, you only need one ax.cla(). Presumably one could go through a long sequence of numbers to find all of the call chains that lead to large numbers of parents.

This is not just a rabbit hole, it is a rabbit metropolis.

@anntzer
Contributor

anntzer commented Sep 4, 2018

Re: #11972 (comment) I think you mean the use of _atomic_removal? Looks like it went in in python/cpython@e10ca3a re: one thread's removing dead weakrefs incorrectly killing a new entry set by another thread.
I don't think we have ever made any guarantees regarding the multithreading safety of matplotlib (even outside of the event loop integration, let's say pure agg)(?) so while we should keep an eye on the issue I think this PR is still an improvement over the current situation.

Member

@efiring efiring left a comment


Almost ready, but I think the _parents construction in __setstate__ needs a similar treatment to include the callback.

# pass a weak reference. The second arg is a callback for
# when the weak ref is garbage collected to also
# remove the dictionary element, otherwise child._parents
# keeps growing for multiple draws.

Alternative comment: "Use weak references so this dictionary won't keep obsolete nodes alive; the callback deletes the dictionary entry. This is a performance improvement over using WeakValueDictionary."

Member Author


WRT __setstate__, no prob - I'll do it tonight...

@jklymak
Member Author

jklymak commented Sep 4, 2018

@efiring agreed that it's a bit of a warren.

I understand why cla might make a bunch of new children. What I don't understand is why

import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
l = ax.plot(np.arange(10))


plt.show(block=False)

for i in range(1, 5000):
    l[0].set_ydata(np.arange(0, 10 * i, i))
    plt.pause(0.0001)

does, which is what the OP was doing.

It's something with plt.pause, because I can't get the error if I just call fig.canvas.draw_idle() (but then I can't get the figure to display in nbagg or qt5agg either). So while I think the fix here is appropriate, maybe the other thing to say in #11956 is that plt.pause() is not meant for "industrial-grade" plots that are supposed to run for many days. Though exactly what is better to use is not at the tip of my brain.
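
For what it's worth, one commonly suggested alternative for long-running updates (a sketch only, not something settled in this thread) is to drive the canvas directly with draw_idle()/flush_events() instead of spinning up plt.pause each frame:

import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
(line,) = ax.plot(np.arange(10))
plt.show(block=False)

for i in range(1, 5000):
    line.set_ydata(np.arange(0, 10 * i, i))
    fig.canvas.draw_idle()       # schedule a redraw
    fig.canvas.flush_events()    # let the GUI event loop process it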

@efiring
Member

efiring commented Sep 4, 2018

Right, without the pause it won't draw. I agree that logically, in the example you give above, there should be no need for transform generation with each loop, because the axes limits, ticks, etc. are all constant. But everything is redrawn with each loop. Axes.draw is called. There is a lot of getting of transforms and transform parts, and evidently new transforms are being generated each time. It would be interesting to track down where and why this is happening.

@tacaswell
Member

If we can track down why repeated drawing creates new transforms, that may yield some nice performance improvements as well...

@jklymak
Member Author

jklymak commented Sep 4, 2018

... is there somewhere that all the stale stuff is documented? I don't see that most draw methods care whether the artist is stale or not. I modified some of them to not draw if stale=False, but that just meant that on the next draw those artists disappeared. So I assume there is a cla that gets called on each draw, but frustratingly I can't readily find it. It seems that we don't want to cla on every pause, but rather just update elements that are stale, but again, I'm probably not understanding what stale is supposed to do.

@jklymak
Member Author

jklymak commented Sep 4, 2018

.. actually, sorry, I see now - unless you use blitting and animation, of course the whole image needs to be recomposed on subsequent draws because otherwise it doesn't know what the "background" looks like and you can't erase the "old" state cleanly.

Still not sure why that means new transforms need to be made each draw, so that is still a mystery...

@tacaswell
Member

Stale keeps track of whether there has been a change to the artist that would require re-rendering the whole figure. In principle it could be used for 'auto-blitting' (which requires tracking which bounding boxes are stale, which artists overlap with them (making the stale box bigger), figuring out where the updated artists will land, and then blanking just those regions and re-drawing just the things that changed), but currently it is just used to decide if we should call draw_idle on the figure.

cla nukes our internal data structures, so if you call it you lose all of the artists. The line in Agg that clears the canvas at the start of each draw is in

def draw(self):
    """
    Draw the figure using the renderer.
    """
    self.renderer = self.get_renderer(cleared=True)
    # acquire a lock on the shared font cache
    RendererAgg.lock.acquire()
    toolbar = self.toolbar
    try:
        self.figure.draw(self.renderer)
        # A GUI class may be need to update a window using this draw, so
        # don't forget to call the superclass.
        super().draw()
    finally:
        RendererAgg.lock.release()

@jklymak jklymak force-pushed the fix-mem-leak-repeated=draw branch from 2f0a96c to f23d891 Compare September 5, 2018 22:01
@jklymak jklymak force-pushed the fix-mem-leak-repeated=draw branch from f23d891 to 3325bde Compare September 5, 2018 22:03
@jklymak
Member Author

jklymak commented Sep 5, 2018

@efiring, I think I fixed the __setstate__ properly. OTOH, I don't have a lot invested in squashing this bug for folks who pickle their plotting environment (a feature I think is over the top in requiring us to jump through hoops).

@tacaswell
Member

a feature I think is over the top in requiring us to jump through hoops

It is something we support and cannot break. The biggest use case is using multiprocessing to build figures, where generating (but not drawing) the artists is very expensive.
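
A minimal sketch of that supported workflow (build the figure once, possibly in a worker process, then pickle it and draw elsewhere); unpickling is what exercises the transforms' __setstate__ discussed above, and the output file name is just for illustration:

import pickle
import numpy as np
import matplotlib
matplotlib.use("agg")
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot(np.arange(10))

payload = pickle.dumps(fig)    # e.g. what a worker process would send back
fig2 = pickle.loads(payload)   # rebuilds the transform tree via __setstate__
fig2.savefig("roundtrip.png")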

Why is set_children being called so many times?

@jklymak
Member Author

jklymak commented Sep 5, 2018

Why is set_children being called so many times?

It's still unclear to me!

@jklymak
Member Author

jklymak commented Sep 5, 2018

    def set_children(self, *children):
        """
        Set the children of the transform, to let the invalidation
        system know which transforms can invalidate this transform.
        Should be called from the constructor of any transforms that
        depend on other transforms.
        """
        # Parents are stored as weak references, so that if the
        # parents are destroyed, references from the children won't
        # keep them alive.
        print('set children!', id(self))
        for child in children:
            # Use weak references so this dictionary won't keep obsolete nodes
            # alive; the callback deletes the dictionary entry. This is a
            # performance improvement over using WeakValueDictionary.
            ref = weakref.ref(self, lambda ref, sid=id(self),
                                        target=child._parents: target.pop(sid))
            child._parents[id(self)] = ref

and then run:

import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
l = ax.plot(np.arange(10))
plt.show()

and wiggle the cursor around - set_children gets called continuously... (this doesn't leak memory because it just keeps setting the same parents)...

@jklymak
Member Author

jklymak commented Sep 5, 2018

More digging around - if you __add__ transforms, it creates a new transform. I suspect we have lots of transA + transB running around, and that calls set_children on both transA and transB each time it's called.
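
A quick way to see that in isolation (a sketch that relies on the private _parents attribute and the current implementation):

import matplotlib.transforms as mtransforms

transA = mtransforms.Affine2D()
transB = mtransforms.Affine2D()

# Each __add__ builds a fresh composite transform whose constructor calls
# set_children(transA, transB), so both operands gain a new parent entry.
composites = [transA + transB for _ in range(5)]
print(len(transA._parents), len(transB._parents))   # 5 and 5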

@efiring efiring merged commit 4754b58 into matplotlib:master Sep 6, 2018
@Bibushka

Bibushka commented Sep 7, 2018

@jklymak sorry to barge in. Have you tried the fix you gave in the commit? I still get a 0.1-0.3 MB increase in memory use with every call to draw. I made the changes yesterday, left the app running for 15 hours, and memory usage still went up by 2.2 GB (redraw every 30 seconds). Is the fix still in progress?

@jklymak
Member Author

jklymak commented Sep 7, 2018

@Bibushka I tested with the code below and get results like those below for as long as I want. If you are testing with different code, or have a different setup, perhaps there is another memory leak, or there is something wrong with the test?

Total allocated size: 1246.4 KiB
Total allocated size: 1314.6 KiB
Total allocated size: 1382.5 KiB
Total allocated size: 1452.1 KiB
Total allocated size: 1523.4 KiB
Total allocated size: 1592.6 KiB
Total allocated size: 1662.1 KiB
Total allocated size: 1245.8 KiB
Total allocated size: 1311.5 KiB
Total allocated size: 1379.6 KiB
Total allocated size: 1448.9 KiB
Total allocated size: 1518.5 KiB
Total allocated size: 1587.7 KiB
Total allocated size: 1657.2 KiB
Total allocated size: 1315.6 KiB
Total allocated size: 1381.1 KiB
Total allocated size: 1380.8 KiB
Total allocated size: 1450.2 KiB
Total allocated size: 1521.5 KiB
Total allocated size: 1573.0 KiB
Total allocated size: 1661.8 KiB
Total allocated size: 1250.8 KiB
Total allocated size: 1316.2 KiB
Total allocated size: 1384.3 KiB
Total allocated size: 1459.2 KiB
Total allocated size: 1512.2 KiB
Total allocated size: 1601.0 KiB
Total allocated size: 1662.2 KiB
Total allocated size: 1019.5 KiB
Total allocated size: 1055.0 KiB
Total allocated size: 1088.9 KiB
Total allocated size: 1123.7 KiB
Total allocated size: 1159.1 KiB
Total allocated size: 1194.6 KiB
Total allocated size: 1230.2 KiB
Total allocated size: 1265.9 KiB
Total allocated size: 1301.3 KiB
Total allocated size: 1339.3 KiB
Total allocated size: 989.7 KiB
Total allocated size: 1021.2 KiB
import matplotlib.pyplot as plt
import numpy as np
import os
import linecache
import sys
import tracemalloc
import time


def display_top(snapshot, key_type='lineno', limit=2):
    '''
    function for pretty printing tracemalloc output
    '''
    snapshot = snapshot.filter_traces((
        tracemalloc.Filter(False, "<frozen importlib._bootstrap>"),
        tracemalloc.Filter(False, "<unknown>"),
    ))
    top_stats = snapshot.statistics(key_type)
    total = sum(stat.size for stat in top_stats)
    print("Total allocated size: %.1f KiB" % (total / 1024))

tracemalloc.start()

y = np.random.rand(100)
x = range(len(y))


fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x,y,'b-')
plt.show(block=False)

t0 = time.time()

while True:
    try:
        ax.clear()
        ax.plot(x,np.random.rand(100),'b-')
        plt.pause(0.0001)
        snapshot = tracemalloc.take_snapshot()
        display_top(snapshot)
        time.sleep(0.05)
    except KeyboardInterrupt:
        break

@Bibushka

Bibushka commented Sep 10, 2018

I'm kind of a noob and my setup is not as fancy. I use memory_profiler's profile to track the memory changes; see results below:

from PyQt5 import QtCore, QtWidgets
from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.backends.backend_qt4agg import NavigationToolbar2QT
from matplotlib.figure import Figure
import numpy
import random
import sys
from gc import collect
from memory_profiler import profile


x = []
y = []


class Ui_main_window(QtWidgets.QMainWindow):
    def __init__(self):
        super(Ui_main_window, self).__init__()
        self.setObjectName("main_window")
        self.centralwidget = QtWidgets.QWidget(self)
        self.centralwidget.setObjectName("centralwidget")
        self.gridLayout = QtWidgets.QGridLayout(self.centralwidget)
        self.gridLayout.setObjectName("gridLayout")
        self.chart_canvas = MyCanvas(self.centralwidget, width=6, height=3, dpi=100)
        self.gridLayout.addWidget(self.chart_canvas, 3, 0, 2, 6)
        self.toolbar = NavigationToolbar2QT(self.chart_canvas, self.centralwidget)
        self.toolbar.update()
        self.gridLayout.addWidget(self.toolbar, 2, 0, 1, 4)
        self.setCentralWidget(self.centralwidget)
        self.menubar = QtWidgets.QMenuBar(self)
        self.menubar.setGeometry(QtCore.QRect(0, 0, 539, 21))
        self.menubar.setObjectName("menubar")
        self.retranslateUi(self)
        QtCore.QMetaObject.connectSlotsByName(self)
        self.show()

    def retranslateUi(self, main_window):
        _translate = QtCore.QCoreApplication.translate
        main_window.setWindowTitle(_translate("main_window", "Main Window"))


class MyCanvas(FigureCanvas):
    def __init__(self, parent=None, width=6, height=3, dpi=100):
        self.fig = Figure(figsize=(width, height), dpi=dpi)
        self.axes = self.fig.add_subplot(111, frame_on=True)
        FigureCanvas.__init__(self, self.fig)
        self.setParent(parent)

        self.lines = []
        self.labels = []
        timer = QtCore.QTimer(self)
        timer.timeout.connect(self.update_figure)
        timer.start(3000)

    @profile()
    def update_figure(self):
        print("update_figure")
        (processed_time_values, processed_numeric_values) = self.value_processing()
        self.axes.cla()
        self.axes.plot(numpy.array(processed_time_values), numpy.array(processed_numeric_values))
        self.draw()

    def value_processing(self):
        global x, y
        x.append(random.randint(0, 20))
        y.append(random.randint(0, 20))
        return x, y


app = QtWidgets.QApplication(sys.argv)
GUI_main_window = QtWidgets.QMainWindow()
main_window = Ui_main_window()
app.exec_()

Results:

update_figure

Line # Mem usage Increment Line Contents
55 65.2 MiB 65.2 MiB @Profile()
56 def update_figure(self):
57 65.2 MiB 0.0 MiB print("update_figure")
58 65.2 MiB 0.0 MiB (processed_time_values, processed_numeric_values) = self.value_processing()
59 65.2 MiB 0.0 MiB self.axes.cla()
60 65.2 MiB 0.0 MiB self.axes.plot(numpy.array(processed_time_values), numpy.array(processed_numeric_values))
61 65.3 MiB 0.1 MiB self.draw()

update_figure

Line # Mem usage Increment Line Contents
55 65.3 MiB 65.3 MiB @Profile()
56 def update_figure(self):
57 65.3 MiB 0.0 MiB print("update_figure")
58 65.3 MiB 0.0 MiB (processed_time_values, processed_numeric_values) = self.value_processing()
59 65.3 MiB 0.0 MiB self.axes.cla()
60 65.3 MiB 0.0 MiB self.axes.plot(numpy.array(processed_time_values), numpy.array(processed_numeric_values))
61 65.4 MiB 0.1 MiB self.draw()

update_figure

Line # Mem usage Increment Line Contents
55 65.4 MiB 65.4 MiB @Profile()
56 def update_figure(self):
57 65.4 MiB 0.0 MiB print("update_figure")
58 65.4 MiB 0.0 MiB (processed_time_values, processed_numeric_values) = self.value_processing()
59 65.4 MiB 0.0 MiB self.axes.cla()
60 65.4 MiB 0.0 MiB self.axes.plot(numpy.array(processed_time_values), numpy.array(processed_numeric_values))
61 65.5 MiB 0.1 MiB self.draw()

update_figure

Line # Mem usage Increment Line Contents
55 65.5 MiB 65.5 MiB @Profile()
56 def update_figure(self):
57 65.5 MiB 0.0 MiB print("update_figure")
58 65.5 MiB 0.0 MiB (processed_time_values, processed_numeric_values) = self.value_processing()
59 65.5 MiB 0.0 MiB self.axes.cla()
60 65.5 MiB 0.0 MiB self.axes.plot(numpy.array(processed_time_values), numpy.array(processed_numeric_values))
61 65.5 MiB 0.0 MiB self.draw()

update_figure

Line # Mem usage Increment Line Contents
55 65.5 MiB 65.5 MiB @Profile()
56 def update_figure(self):
57 65.5 MiB 0.0 MiB print("update_figure")
58 65.5 MiB 0.0 MiB (processed_time_values, processed_numeric_values) = self.value_processing()
59 65.5 MiB 0.0 MiB self.axes.cla()
60 65.5 MiB 0.0 MiB self.axes.plot(numpy.array(processed_time_values), numpy.array(processed_numeric_values))
61 65.5 MiB 0.0 MiB self.draw()

update_figure

Line # Mem usage Increment Line Contents
55 65.5 MiB 65.5 MiB @Profile()
56 def update_figure(self):
57 65.5 MiB 0.0 MiB print("update_figure")
58 65.5 MiB 0.0 MiB (processed_time_values, processed_numeric_values) = self.value_processing()
59 65.5 MiB 0.0 MiB self.axes.cla()
60 65.5 MiB 0.0 MiB self.axes.plot(numpy.array(processed_time_values), numpy.array(processed_numeric_values))
61 65.6 MiB 0.1 MiB self.draw()

update_figure

Line # Mem usage Increment Line Contents
55 65.6 MiB 65.6 MiB @Profile()
56 def update_figure(self):
57 65.6 MiB 0.0 MiB print("update_figure")
58 65.6 MiB 0.0 MiB (processed_time_values, processed_numeric_values) = self.value_processing()
59 65.6 MiB 0.0 MiB self.axes.cla()
60 65.6 MiB 0.0 MiB self.axes.plot(numpy.array(processed_time_values), numpy.array(processed_numeric_values))
61 65.6 MiB 0.0 MiB self.draw()

update_figure

Line # Mem usage Increment Line Contents
55 65.6 MiB 65.6 MiB @Profile()
56 def update_figure(self):
57 65.6 MiB 0.0 MiB print("update_figure")
58 65.6 MiB 0.0 MiB (processed_time_values, processed_numeric_values) = self.value_processing()
59 65.6 MiB 0.0 MiB self.axes.cla()
60 65.6 MiB 0.0 MiB self.axes.plot(numpy.array(processed_time_values), numpy.array(processed_numeric_values))
61 65.6 MiB 0.0 MiB self.draw()

update_figure

Line # Mem usage Increment Line Contents
55 65.6 MiB 65.6 MiB @Profile()
56 def update_figure(self):
57 65.6 MiB 0.0 MiB print("update_figure")
58 65.6 MiB 0.0 MiB (processed_time_values, processed_numeric_values) = self.value_processing()
59 65.6 MiB 0.0 MiB self.axes.cla()
60 65.6 MiB 0.0 MiB self.axes.plot(numpy.array(processed_time_values), numpy.array(processed_numeric_values))
61 65.8 MiB 0.1 MiB self.draw()

update_figure

Line # Mem usage Increment Line Contents
55 65.8 MiB 65.8 MiB @Profile()
56 def update_figure(self):
57 65.8 MiB 0.0 MiB print("update_figure")
58 65.8 MiB 0.0 MiB (processed_time_values, processed_numeric_values) = self.value_processing()
59 65.8 MiB 0.0 MiB self.axes.cla()
60 65.8 MiB 0.0 MiB self.axes.plot(numpy.array(processed_time_values), numpy.array(processed_numeric_values))
61 65.8 MiB 0.0 MiB self.draw()

@tacaswell
Member

@Bibushka The x and y lists are growing without bound in your example. While your example is closer to your actual use case, the Qt GUI and timer make it much more complex. Can you break up creating the numpy arrays and calling plot into two lines?

@jklymak Are you sure that loop is actually causing re-draws? I think you need a draw_idle or draw call in there.

@jklymak
Member Author

jklymak commented Sep 10, 2018

@tacaswell plt.pause does the redraw, I think. It definitely updates the plot ;-)

@tacaswell
Member

Ah, 🐑 nvm

@Bibushka

@tacaswell I don't think it's the timer, nor the lists, that cause the problem. The problem is that one extra point in the plot shouldn't make the memory jump by 0.1 MB with each call of self.draw(), and the fact that this memory isn't cleared when I use self.axes.cla().

@Bibushka

I have managed to switch to PyQt5 instead of PySide2 and it seems to have stopped the memory leak. I never would've imagined a library could cause such a huge problem.

@tacaswell
Member

See #12089 and the follow-on PRs.

This should be fixed in 3.0 and will be fixed in 2.2.4

Successfully merging this pull request may close these issues.

apparent memory leak with live plotting