Discussion:
LabVIEW leaks memory when used with TestStand to log results to a database over 3 days, eventually crashing the PC
David2004
21 years ago
LabVIEW leaks memory when used with TestStand to log results to a
database over 3 days, eventually crashing the PC

Hi,

We are attempting to set up an endurance test rig that will eventually
run for over 70 days non-stop, using TestStand 3.0 and LabVIEW 7.1, and
after only 3 days we are already seeing LabVIEW soak up over 300 MB of
system resources, to the point where it crashes the PC.

The PC is a 3.4 GHz Dell with 256 MB of RAM running Windows XP, with no
other software running besides TestStand and LabVIEW (i.e. plenty of
resources/power!).

The problem originally arose when we had 'on-the-fly' HTML report
generation turned on in TestStand: after running an endurance test for
over 16 hours the PC was found to be using over 300 MB of memory, the
test ground to a halt, and eventually the PC crashed.
Andy Long (Cyth Systems) and I assumed that this was related to having
a large and rapidly growing HTML file open for the 'on-the-fly' report
generation (confirmed when we tried to open the resulting 30 MB HTML
file on another unrelated PC, which also ground to a halt, using
hundreds of MB to open the file).
We thought we had solved this problem when Andy set us up with database
logging of results and turned off the HTML report generation, and for a
while everything seemed well, but now we find that after running this
endurance test for 3 days we have the same large memory usage and the
PC ultimately runs out of memory and crashes.
It looks as though the changeover from live HTML report generation to
database logging of results has bought us a little extra time, but the
original problem is still there. It may be that the problem is in fact
unrelated to the method of logging the results (a red herring?).

This endurance rig needs to run for over 70 days for some tests so the
fact that the PC runs out of resources after only a few days is a
major concern and a tricky problem for us to pinpoint.

The additional hardware is:
- NI PCI GPIB card
- NI PCI CAN 2-port card
- USB to 8-port serial adaptor

Various error messages are generated, all as by-products of the rapidly
diminishing available memory and the PC grinding to a halt. The main
fault is the Windows 'Virtual Memory Low' message. Other faults include
serial COM port failures and GPIB comms failures, most definitely due
to hardware/software time-outs occurring as the LabVIEW software starts
to run more and more slowly.

Any help you can offer is much appreciated.

Regards,
David
Ray Farmer
21 years ago
Hi David,

A couple of questions:

1. Are you using the SeqEditor to execute your sequence(s)?

2. Where is the loop, in the MainSequence or in the ProcessModel?

Regards
Ray Farmer
Ray Farmer
21 years ago
Hi David,

One point: although you have switched off the on-the-fly logging, you
are still storing results, so after you have logged your results in
your database you will need to empty the ResultList array. Otherwise it
is going to grow with each iteration.

Regards
Ray Farmer
David2004
21 years ago
Hi,

That would explain why we only saw a small improvement as we had only
reduced the amount of data logged.
Is there an easy way of emptying the ResultList array?

Thanks,
David
Ray Farmer
21 years ago
Hi David,

Using an expression, use
RemoveElements(Locals.ResultList, 0, GetNumElements(Locals.ResultList)).

Another thing: do you have tracing on? The LabVIEW status window is
going to stack up those results every iteration, so you will want to
switch off tracing to minimize storing all that text within the OI.
That doesn't get cleared until you start a new execution.
If that's the problem, you will have to modify the operator interface.

Regards
Ray Farmer
Ray Farmer
21 years ago
Hi David,

The format for the index value is probably "0". You can confirm this
by using the browse button when you edit the expression step. With the
Browse dialog open, select the Functions tab and you will find the
function as part of the Array functions. The format of each of the
parameters will be shown in the help window.

The Global Tracing options are under the menu item Configure | Station
Options.

Regards
Ray Farmer
Ray Farmer
21 years ago
Hi David,

I thought I had posted an example, but it looks like it didn't get
sent.
The actual syntax for the second parameter is "[0]".
I'll post the example later, as it is not on my current PC.

Regards
Ray Farmer
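
(Putting Ray's two posts together, the complete cleanup expression would
presumably read as follows; the exact argument format is worth confirming
against the Array functions listed in the expression browser described
above.)

RemoveElements(Locals.ResultList, "[0]", GetNumElements(Locals.ResultList))

A common place to evaluate it is a step's post-expression, or a Statement
step, at the end of each loop iteration once the results for that pass have
been logged, so Locals.ResultList is emptied before the next pass adds to it.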
David2004
21 years ago
Hi,

Thanks, that fixed the new function.
I placed it at what appears to be the logical place in the main
sequence (just before the test moves on to the next UUT), but
unfortunately it didn't help with the memory leak, though that may be
because I haven't placed it at the best point in the sequence.

I've now been trying the other suggestions.
From Task Manager I can see that the LabVIEW process is where the
memory leak is occurring, at a rate of around 100 KB every few seconds.

I'm now turning off various options, and this seems to be helping me
find the leak.

- disabled tracing in TestStand
- disabled database logging
- disabled report generation

Then I found the option in the Model Options and selected
- discard results when not required by model

So far this appears to have plugged the leak, as the LabVIEW process
stays at 119,508 KB.

As I need database logging and tracing turned on, I then re-enabled
these options, and again the memory use by the LabVIEW process seems to
be increasing, only this time at around 100 KB every 10 seconds,
although I need to run it for a while to see if this levels off or
continues to increase as before.

I will then try each of the options in turn to identify exactly which
of these options is the source of the leak.

Regards,
David
Scott Richardson (NI)
21 years ago
David -
The purpose of this option is that the process model needs to know
whether the results are still needed after the testing of a UUT is
complete. If a customer adds a custom step or component that also
processes the results at the end, and this option automatically
discarded results, the user might not know why no results come back and
it would be difficult to track down the reason.

On the flip side, if a user does not know about the option, they might
not understand why memory is not released while testing a UUT for a
long time.

I am sorry that we did not resolve this for you more quickly.

Scott Richardson (NI)

David2004
21 years ago
Hi,

1. No, I'm using a LabVIEW VI to start a TestStand Operator Interface
and I run the sequences from there.

2. The loop is in the MainSequence, the ProcessModel being unchanged.

Regards
David
Scott Richardson (NI)
21 years ago
David -
To help figure out where the memory is being used up you might want to
do the following tests. When doing these tests, use the OS Task
Manager and look at the values for the LabVIEW process for columns
Memory Usage, Threads, GDI and User Objects, and Handles to see if
they climb over time. You will have to add some of these columns to
the Task Manager because some are off by default.

1) Turn report generation off, turn database logging off, and turn off
result collection in the Station Options. Run your UUT test to see
whether the tests/sequences and the OI themselves are using up memory.

2) Add back on-the-fly database logging, but make sure that the Model
Option to conserve memory is enabled. This instructs the on-the-fly
process model callbacks to discard results after they have been
processed, because they are not needed later when the UUT completes its
long testing run.

What database are you using?

Also, the OI does not hold onto any results; it should only display
the state of the executing sequence at a single instant in time.

Scott Richardson (NI)
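
(As an aside, the Task Manager bookkeeping Scott describes can be automated
so that growth over several days is easy to see afterwards. The following is
a minimal sketch, assuming Python with the psutil package is available on the
test PC and that the operator interface runs in a process named LabVIEW.exe;
the process name, log file path and sampling interval are placeholders to
adjust. GDI and USER object counts are not exposed by psutil, so those two
columns still have to be watched in Task Manager.)

# monitor_labview.py - log memory, thread and handle counts for the LabVIEW
# process to a CSV file at a fixed interval (minimal sketch, see note above).
import csv
import time
import psutil

PROCESS_NAME = "LabVIEW.exe"   # assumed process name of the operator interface
INTERVAL_S = 60                # sample once a minute

def find_process(name):
    # Return the first running process whose name matches, or None.
    for proc in psutil.process_iter(["name"]):
        if (proc.info["name"] or "").lower() == name.lower():
            return proc
    return None

def main():
    with open("labview_memory_log.csv", "w", newline="") as logfile:
        writer = csv.writer(logfile)
        writer.writerow(["time", "rss_kb", "vms_kb", "threads", "handles"])
        while True:
            proc = find_process(PROCESS_NAME)
            if proc is not None:
                try:
                    mem = proc.memory_info()
                    writer.writerow([
                        time.strftime("%Y-%m-%d %H:%M:%S"),
                        mem.rss // 1024,          # working set
                        mem.vms // 1024,          # virtual size
                        proc.num_threads(),
                        proc.num_handles(),       # Windows-only psutil call
                    ])
                    logfile.flush()
                except psutil.NoSuchProcess:
                    pass                          # process exited mid-sample
            time.sleep(INTERVAL_S)

if __name__ == "__main__":
    main()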
David2004
21 years ago
Hi Scott,

Thanks for the advice, I'll give this a try.
I did already do a quick check with Task Manager, though, and if I
remember correctly it was the LabVIEW process that was using 300+ MB of
system resources. I'll try this again and take a closer look.

I'm not using any additional database software on the rig; I'm just
running LabVIEW under TestStand to generate the standard built-in
TestStand results database. I then copy the database to another machine
to analyse it later using a query set up in Excel.

I'm also looking at adding some database querying into the OI so that
we can see some real-time results for a few key steps. Do you think
that this is something that is likely to use up excessive amounts of
system memory (considering that the database is already open in memory
as we are saving the results to it)?

Regards,
David
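
(For reference, a query against the results database, whether run from
Excel, a small script, or eventually the OI, would have roughly the shape
shown below. This is a minimal sketch, assuming Python with pyodbc and the
classic Jet/Access ODBC driver; the database path is a placeholder, and the
UUT_RESULT table and its column names are assumptions based on the default
TestStand schema, so check them against the actual .mdb file before relying
on anything like this.)

# query_results.py - read the most recent UUT results from the TestStand
# results database (table/column names assumed, see note above).
import pyodbc

CONN_STR = (
    r"Driver={Microsoft Access Driver (*.mdb)};"
    r"DBQ=C:\TestStand\Results\TestStandResults.mdb;"   # placeholder path
)

def latest_uut_results(limit=10):
    # Keep the connection short-lived and read-only so it does not compete
    # with TestStand's own connection for Jet/Access locks while logging.
    conn = pyodbc.connect(CONN_STR, readonly=True)
    try:
        cursor = conn.cursor()
        # Access SQL uses TOP rather than LIMIT.
        cursor.execute(
            "SELECT TOP {} UUT_SERIAL_NUMBER, UUT_STATUS, START_DATE_TIME "
            "FROM UUT_RESULT ORDER BY START_DATE_TIME DESC".format(int(limit))
        )
        return cursor.fetchall()
    finally:
        conn.close()

if __name__ == "__main__":
    for row in latest_uut_results():
        print(row)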
Scott Richardson (NI)
21 years ago
David -
Accessing a database from the OI typically should not cause a memory
issue, only a potential performance issue, because you are using up CPU
to do a query while logging data. Also, MS Access has some locking
behaviors that can be problematic when more than one connection is open
to a database, because its locking mechanism is not as good as that of
full database servers like SQL Server or Oracle.

Since the memory use appears to be increasing and not peaking, I assume
that this is not the problem, but I know that MS Access does have an
internal memory cache. Its maximum value is based on the amount of
physical memory that the system has. There is a registry key,
MaxBufferSize, that can be used to cap that memory use. For more
information see
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/bapp2000/html/acbachap07.asp
and go to the section entitled "Adjusting Windows Registry Settings to
Improve Performance".

The key issue is to narrow down when the memory increases and more
importantly when it does not. By seeing when it does and does not,
you can figure out what specifically is causing the increase. It may
be just a use issue or a configuration issue and sometimes it may be a
problem that needs to be fixed.

Scott Richardson (NI)
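
(If capping the Jet cache ever becomes necessary, the registry change Scott
mentions can be scripted. The sketch below is a minimal example, assuming the
Jet 4.0 engine, whose settings are normally found under
HKLM\SOFTWARE\Microsoft\Jet\4.0\Engines\Jet 4.0, and an example cap of
8192 KB; the MSDN page linked above is the authoritative reference for the
key location and units, and the registry should be backed up before changing
anything.)

# set_jet_maxbuffersize.py - cap the Jet database engine's internal cache.
# Minimal sketch; key path assumes Jet 4.0 (see note above). Run as admin.
import winreg

JET_KEY = r"SOFTWARE\Microsoft\Jet\4.0\Engines\Jet 4.0"  # assumed Jet 4.0 path
MAX_BUFFER_KB = 8192  # example cap in KB; 0 lets Jet size the cache itself

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, JET_KEY, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "MaxBufferSize", 0, winreg.REG_DWORD, MAX_BUFFER_KB)

print("MaxBufferSize set to", MAX_BUFFER_KB, "KB")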
Scott Richardson (NI)
21 years ago
David -
This is independent of the database logging feature, but there are
some known problems with Microsoft's Internet Explorer control (used by
OIs and the sequence editor) where it incorrectly holds onto memory. I
know that TestStand 3.1 has made some changes in how it uses the
control to limit the control's memory growth, but TestStand 3.0 did not
work around these problems as well.

Scott Richardson (NI)