optimize memory usage when creating log.html? #4739
First of all, have you tested using The log.html is highly optimized in terms of file size, but I'm certain there is room for memory and performance optimization. The problem is that we are planning to rewrite the whole thing in the somewhat near future (#4304), so trying to enhance the current log file doesn't make much sense. If someone has knowledge about profiling this kind of JavaScript and HTML code, it would be great if they could take a quick look to see whether there are some easy wins. Anyone interested? If not, I believe it's best to close this issue and concentrate on the new tech instead.
Thanks, @pekkaklarck
I see. I thought "rendering" referred to viewing the log file in a browser. Creating the log file has been profiled quite a bit and I'm afraid there are no easy wins. That said, would you be interested in profiling where the memory goes in your exact case? I've used https://pypi.org/project/filprofiler a few times and it has worked great.
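The Fil profiler mentioned above needs to be installed separately. As a rough, zero-install sketch of the same idea, Python's standard-library `tracemalloc` can also show which source lines account for the most allocated memory. The workload below is a made-up stand-in, not Robot's actual log-generation code; in a real investigation you would start tracing before invoking rebot/log creation instead.

```python
import tracemalloc

# Illustrative stand-in workload; the function name and data shape are
# invented for this sketch. Real profiling would wrap the rebot call.
def build_structures():
    return [{"name": f"kw-{i}", "children": []} for i in range(50_000)]

tracemalloc.start()
data = build_structures()
snapshot = tracemalloc.take_snapshot()
tracemalloc.stop()

# Print the source lines responsible for the most allocated memory.
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)
```

Unlike Fil, `tracemalloc` only tracks allocations made through Python's allocator, so it can miss memory held by C extensions, but it is often enough to spot which data structures dominate.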
Thanks, let me share the results unicast via Slack.
Have you investigated this further, @oboehmer? Just the peak memory usage doesn't tell much; we would need to know where the memory is spent. The Fil profiler I mentioned above could help with that. Typically huge memory usage is related to having a lot of keywords, loop iterations, or other such constructs in output.xml. Handling them can be optimized in Robot, but typically a better solution is having fewer constructs like that. This can mean moving logic from resource files to libraries, making sure looping constructs (e.g. WHILE) don't run unnecessary iterations (or moving them to libraries), and so on. On the Robot data level, something that can work really well is "flattening".
We have a newer issue about profiling memory usage (#5371). I'll close this as its duplicate. |
I noticed robot/rebot can use multiple gigabytes of memory when rendering log.html from a complex/long output.xml. We noticed this in a container environment where we cap memory consumption to 4 GB, and saw some Robot suite executions being killed by the OS. (The output.xml in question contained more than 1.8 million keyword executions!)
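A back-of-envelope estimate shows why 1.8 million keyword entries alone can reach this scale. The class below is a hypothetical, heavily simplified stand-in for one keyword-result entry, not Robot Framework's actual result model, which carries far more state (names, arguments, timestamps, log messages), so real per-entry cost is considerably higher than this lower bound.

```python
import sys

# Hypothetical minimal keyword-result object (NOT Robot's real model).
class KeywordResult:
    def __init__(self, name, status):
        self.name = name
        self.status = status
        self.messages = []

one = KeywordResult("Example Keyword", "PASS")
# Lower-bound bytes per entry: instance + attribute dict + empty list.
per_entry = (
    sys.getsizeof(one)
    + sys.getsizeof(one.__dict__)
    + sys.getsizeof(one.messages)
)
total_mb = per_entry * 1_800_000 / (1024 * 1024)
print(f"~{per_entry} bytes/entry, ~{total_mb:.0f} MB for 1.8M entries")
```

Even this stripped-down model lands in the hundreds of megabytes for 1.8 million entries, before any strings, messages, or parsing overhead, which is consistent with multi-gigabyte peaks in practice.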
I have no idea about the feasibility/effort of this ask, but it would be great if we could explore options to reduce the memory footprint of this task. We have already recommended keyword flattening to our user base.
It is not a high-priority item, but it would be good to review optimization areas when it comes to memory footprint.