Hello everyone! I need some help with the data logging feature. I have set up a data log to monitor a series of faults and will be using it to trend machine downtime. I would like to limit the data to just the times when faults occur. I've tried using a log trigger, but it seems to take a while before the log displays the information on the web server. Is this normal?
Yes, the built-in logging utilizes a caching feature to minimize the number of writes to the SD card and improve its longevity.
If you "roll your own" logging, the caching does not take effect.
If the fault is cleared out too fast, the data is not logged either; it takes about 10 seconds before the log will recognize it and record it. I would also like to store all the data in one CSV file for this, but I have not been able to set that up yet. Do the update rate, samples, and files settings matter since I am only doing a trigger snapshot? Will decreasing the number of data tags in Crimson help?
Hardware: MicroLogix 1400
Graphite HMI
Data logging setup:
1 sec update
3600 samples
168 files
I have forgotten the details, but when you are using a trigger type of log, the number of samples and the update time still affect when a new file is created (I might be misremembering this...)
So those values are still useful. I remember suggesting to Red Lion that they alter the text next to each field depending on whether you are doing interval- or trigger-based logging, but that didn't get changed.
In any case, the total number of tags you are monitoring with the logger, and with the application as a whole, is probably contributing to the missed triggers.
If the tag is not staying on long enough to be captured by the communication cycle, it isn't going to trigger any action, no matter how you set up the logging. To optimize comms efficiency, it is always best to pack all the PLC data into as few files as possible, in contiguous blocks of elements.
If all your bits and integers can be packed into N23:0 through N23:100 and all your reals fit into F24:0 through F24:54, then theoretically, the HMI can read all that data in two requests. If you have data scattered across 12 different files, then obviously that is going to slow things down.
You could also try doing your own log file creation. You can set up a trigger on your tags that calls a program that opens a file, appends a line to it, and closes it.
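The open/append/close program might look something like the sketch below. This is written from memory in Crimson's C-like script, so treat it as an outline only: verify the exact file-function names, signatures, and the OpenFile mode argument against the Crimson reference manual before using it, and note that the file path and the FaultWord tag are made-up examples.

```
// Sketch of a Crimson program fired by a tag trigger.
// Appends one CSV row per fault event to a file on the SD card.
// NOTE: function names and the append-mode value are assumptions --
// check the Crimson reference before relying on this.

int     hFile;
cstring sLine;

hFile = OpenFile("\\LOGS\\FAULTS.CSV", 2);   // 2 = append (verify!)

if( hFile ) {
	// Timestamp plus the fault word, comma separated.
	// FaultWord is a hypothetical tag mapped to your PLC fault register.
	sLine = IntToText(GetNow(), 10, 10) + "," + IntToText(FaultWord, 10, 5);
	WriteFileLine(hFile, sLine);
	CloseFile(hFile);
}
```

Keeping the file open only for the duration of each write avoids holding a handle across power cycles, at the cost of more frequent SD card writes (see the caching caveat below).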
As a matter of fact, I would suggest adding trigger events to the tags, if for no other reason than to increment a dummy counter and perhaps record a timestamp, just to prove the tag change is being picked up.
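A diagnostic trigger event along these lines would do it (again a sketch; DebugCount and DebugLast are internal tags you would create just for this test):

```
// Trigger event sketch: prove the fault bit's edge is being seen by the HMI.
DebugCount = DebugCount + 1;   // bump a dummy counter on every trigger
DebugLast  = GetNow();         // record the time of the last trigger
```

If DebugCount stops incrementing while the PLC is still toggling the bit, the comms cycle is missing the edge, and no logging setup will fix that.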
If you do your own data logging, there is no behind-the-scenes caching, so you will be beating up the SD card more often... keep this in mind.
I recently had some major headaches with Graphite logging where I ended up replacing the SD card to get it working right. Mine would log great for 4 to 8 hours, then quit... I was using Transcend 2GB cards that I had bought off Newegg. I ended up getting some 16GB SanDisk Ultra cards off the shelf at a big box store, which solved my problem. So the quality of the SD card can have an impact, and one that is not easy to pin down. Also, you cannot go above 16GB, since that is the largest size that can be reformatted to FAT16 and partitioned down to 2GB.