Thanks for your insights, Ian! 

A somewhat slower trifinder that requires less memory might even be faster in 
the end, since creating the trifinder itself takes a lot of time (almost a 
minute in our case). 

Regards Hartmut
---------------
http://boost-spirit.com
http://stellar.cct.lsu.edu

> -----Original Message-----
> From: Ian Thomas [mailto:ianthoma...@gmail.com]
> Sent: Tuesday, August 12, 2014 4:35 AM
> To: Hartmut Kaiser
> Cc: Andrew Dawson; Carola Kaiser; matplotlib-users
> Subject: Re: [Matplotlib-users] Crash when using
> matplotlib.tri.LinearTriInterpolator
> 
> Here are the results of my investigation.  There is probably more
> information here than anyone else wants, but it will be useful for future
> improvements.
> 
> Most of the RAM is taken up by a trifinder object, which is at the heart of
> a triinterpolator and is used to find the triangles of a Triangulation in
> which (x, y) points lie.  The code
>     interp = tri.LinearTriInterpolator(triang, data)
> is equivalent to
>     trifinder = tri.TrapezoidMapTriFinder(triang)
>     interp = tri.LinearTriInterpolator(triang, data, trifinder=trifinder)
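> 
> For anyone who wants to experiment, here is a self-contained toy version of
> the explicit form (made-up random data, not the dataset from this thread):
> 
>     import numpy as np
>     import matplotlib.tri as tri
> 
>     # Small random point set; Triangulation computes a Delaunay triangulation.
>     x = np.random.rand(100)
>     y = np.random.rand(100)
>     data = np.sin(x) * np.cos(y)
>     triang = tri.Triangulation(x, y)
> 
>     # Build the trifinder explicitly so it can be profiled or reused.
>     trifinder = tri.TrapezoidMapTriFinder(triang)
>     interp = tri.LinearTriInterpolator(triang, data, trifinder=trifinder)
>     print(interp(0.5, 0.5))   # masked where the point is outside the hull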
> 
> Using the latter with memory_profiler
> (https://pypi.python.org/pypi/memory_profiler) indicates that this is
> where most of the RAM is being used.  Here are some figures for trifinder
> RAM usage as a function of ntri, the number of triangles in the
> triangulation:
> 
> ntri   trifinder MB
> ----   ------------
> 1000         26
> 10000        33
> 100000      116
> 1000000     912
> 2140255    1936
> 
> The RAM usage is less than linear in ntri, but clearly too much for large
> triangulations unless you have a lot of RAM.
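> 
> In case anyone wants to reproduce that kind of measurement, memory_profiler's
> memory_usage helper can be pointed at the trifinder construction.  A rough
> sketch (with triang being an existing Triangulation):
> 
>     from memory_profiler import memory_usage
>     import matplotlib.tri as tri
> 
>     def build_trifinder():
>         return tri.TrapezoidMapTriFinder(triang)
> 
>     # memory_usage runs the callable and samples the process RSS; the spread
>     # between min and max approximates the RAM the trifinder costs.
>     samples = memory_usage((build_trifinder, (), {}), interval=0.1)
>     print('trifinder RAM: %.0f MB' % (max(samples) - min(samples)))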
> 
> The trifinder precomputes a tree of nodes to make looking up triangles
> quick.  Searching through 2 million triangles in an ad-hoc manner would be
> very slow; the trifinder is very fast in comparison.  Here are some stats
> for the tree that trifinder uses (the columns are number of nodes in the
> tree, maximum node depth, and mean node depth):
>    ntri      nodes  max depth  mean depth
> -------  ---------  ---------  ----------
>    1000     179097      37        23.24
>   10000    3271933      53        30.74
>  100000   36971309      69        37.15
> 1000000  853117229      87        48.66
> 
> The mean depth is the mean number of nodes that have to be traversed to
> find a triangle, and the max depth is the worst case.  The search time is
> therefore O(log ntri).
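> 
> The tree is hidden behind a simple interface: the trifinder is callable and
> returns, for each query point, the index of the triangle containing it, or
> -1 for points outside the triangulation:
> 
>     import numpy as np
> 
>     xq = np.array([0.2, 0.5, 5.0])
>     yq = np.array([0.3, 0.5, 5.0])
>     # Array of triangle indices with the same shape as the inputs; -1 where
>     # a point falls outside the triangulation.
>     print(trifinder(xq, yq))
> 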
> The triangle interpolator code is structured in such a way that it is easy
> to plug in a different trifinder if the default one isn't appropriate.  At
> the moment there is only the one available however
> (TrapezoidMapTriFinder).  For the problem at hand, a trifinder that is
> slower but consumes less RAM would be preferable.  There are various
> possibilities; they just have to be implemented!  I will take a look at it
> sometime, but it probably will not be soon.
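> 
> For illustration only, a minimal brute-force finder along those lines (a
> hypothetical BruteForceTriFinder, not part of matplotlib) could subclass
> tri.TriFinder, use essentially no extra memory, and pay O(ntri) work per
> query point:
> 
>     import numpy as np
>     import matplotlib.tri as tri
> 
>     class BruteForceTriFinder(tri.TriFinder):
>         """Slow but memory-light: test every triangle for every point."""
>         def __init__(self, triangulation):
>             tri.TriFinder.__init__(self, triangulation)
>             self._tri = triangulation
> 
>         def __call__(self, x, y):
>             x = np.atleast_1d(np.asarray(x, dtype=np.float64))
>             y = np.atleast_1d(np.asarray(y, dtype=np.float64))
>             px, py = self._tri.x, self._tri.y
>             a, b, c = self._tri.triangles.T   # vertex indices per triangle
>             # Denominators of the barycentric coordinates, one per triangle.
>             d = ((py[b] - py[c]) * (px[a] - px[c])
>                  + (px[c] - px[b]) * (py[a] - py[c]))
>             out = np.full(x.shape, -1, dtype=np.intp)
>             for k in np.ndindex(x.shape):
>                 # Barycentric coordinates of this point in every triangle.
>                 l1 = ((py[b] - py[c]) * (x[k] - px[c])
>                       + (px[c] - px[b]) * (y[k] - py[c])) / d
>                 l2 = ((py[c] - py[a]) * (x[k] - px[c])
>                       + (px[a] - px[c]) * (y[k] - py[c])) / d
>                 l3 = 1.0 - l1 - l2
>                 inside = np.nonzero((l1 >= 0) & (l2 >= 0) & (l3 >= 0))[0]
>                 if inside.size:
>                     out[k] = inside[0]
>             return out
> 
>     interp = tri.LinearTriInterpolator(triang, data,
>                                        trifinder=BruteForceTriFinder(triang))
> 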
> Ian Thomas


