Data Collection

Hi guys!

/Anecdote Start/

About a year ago, I set up a small data acquisition system with one machine, using a $200 OPC software package over the plant intranet and an Access database. The front end was Excel, and I did a pretty fair job of presenting things like cycle times, output rates, fault tracking, etc.

Management thought it was nice, used some of the data to validate other work being done on the machine, and soon after the whole thing was dropped and forgotten.

Flash forward to present...

My mechanical counterpart and I just completed a machine rebuild, and the old political "is it really running as fast as it should be?" question surfaced. This is particularly important to us, since we have new management in our facility, and as the first finished project of the year, we want to present it as a complete success.

So, I pull out my dusty data collection notes and set up a system similar to the one I built a year ago on our new machine (Unitronics V280 with database and Ethernet). I added some refinements, set up a new front end, printed out a few shifts' worth of data and went into our regular meeting with the bosses.

Now, I should point out that the data proved our machine was running at rate. I point that out because, as soon as the managers took one look at our data, our particular machine was completely forgotten.

Question #1: "How did you get this data?"
Question #2: "How fast can you set this up on every other machine in the plant?"

/Anecdote End/

Anyhow, my mostly brushed-off SCADA OPC project of '03 has now become the new company savior of '05, and I'm calling integrators and digging into this while trying to keep from smirking.

All of which brings me to the question part of this essay:

Obviously, we want to collect all the basics -

production rate
cycle time
uptime
downtime
fault time
cycle counts
parts counts
fault counts
fault history (sampled on faults)
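
If it helps to picture the storage side, here is a minimal sketch of how those basics could land in a database. The table and column names are my own placeholders, not from any particular package.

[CODE]
-- Rough sketch only: a per-cycle log plus a fault log. Rates, uptime/downtime,
-- and the various counts can all be rolled up from these two tables by time range.
CREATE TABLE cycle_log (
    machine_id   varchar(20) NOT NULL,
    cycle_end    datetime    NOT NULL,
    cycle_time_s real        NOT NULL,   -- measured cycle time, seconds
    parts_made   int         NOT NULL    -- parts produced on this cycle
);

CREATE TABLE fault_log (
    machine_id   varchar(20) NOT NULL,
    fault_code   int         NOT NULL,   -- sampled on faults, per the list above
    fault_start  datetime    NOT NULL,
    fault_end    datetime    NULL        -- NULL while the fault is still active
);
[/CODE]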

1. Are there any other types of data you consider "basic"? If you were designing a SCADA system, what would you want to see?

2. We're taking a hard look at RSView (me, an AB product!) Any comments, for or against?

3. Any particular pitfalls you'd suggest we avoid? Stories of your experiences with this sort of thing are very, very welcome :D


Thanks a lot, and I look forward to your suggestions.

TM
 
Timothy -
I'm currently working on a very similar system.

Quite a bit of our production is along assembly lines consisting of various roller and belt conveyors, with multiple 180° turns and chain transfers. Three belt conveyors make up the main "front", "middle" and "back" sections of these lines. In addition to your 'basic' list, our production folks also want to see when, how long, and how often these sections are "backed up" due to slow work in the following section.

Line stops have also resurfaced as a hot issue, so I'll be collecting the when, where, how long, how often, etc. for each line stop.

I'm using RSSql to perform all of the data collection and deposit it into a SQL database. All the PLCs involved are either AB or Modicon. The IS guys are going to write queries against the database. Some of the users involved are savvy enough to use Excel to "roll their own". I've already incorporated some of the data into a few RSView applications that are sprinkled around the shop floor.
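
To give a feel for the kind of thing the IS guys will write (purely illustrative; I'm assuming the line stops land in a table called line_stop with a section name and start/end times), a per-day rollup of how often and how long each section was stopped might look like:

[CODE]
-- Hypothetical rollup: line stops per section per day, with total stopped minutes.
SELECT
    CONVERT(varchar(10), stop_start, 120)        AS stop_date,
    section,
    COUNT(*)                                     AS stop_count,
    SUM(DATEDIFF(minute, stop_start, stop_end))  AS stopped_minutes
FROM line_stop
WHERE stop_end IS NOT NULL
GROUP BY CONVERT(varchar(10), stop_start, 120), section
ORDER BY stop_date, section;
[/CODE]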

Mark
 
I have used RSView and it works well, plus you can do your own VB scripting, etc. But the cost for more tags adds up, and then there are the annual support costs and all the built-in features that you never have time to learn (Trending, Messenger, SPC, etc.). So unless you have a block of time set aside to learn it, don't promise anything. You can talk to many different processes using the built-in OPC server, and with an RSLinx gateway multiple computers can be tied in.
=============================================================
Personally,
I like the basics: just pull the data into an Excel spreadsheet and manipulate it from there. Send shipping (raw usage, # pallets, inventories, etc.), send engineering (timing, rates, up/down time, etc.), send production (# pieces actual by part no., predicted pieces by part number, etc.), send the CEO (number of dollars projected and actual), send scheduling (rates by part number),
send maintenance (uptime, maintenance schedules with built-ins: grease, replacement parts, etc.), and send your boss all of it. It turns into a living document after it catches on, and you get to enjoy some heroics.

==============================================================

Just remember it is not a supervisor replacement (an "E-supervisor").
It is a production enhancement of real-time numbers. Fight the temptation to use it as "BIG BROTHER is watching".
It always amazes me what the numbers turn up, and how efficient the system can become with subtle process tweaks (timers, counters, bit chasing, etc.).
===========================================================
 
Tim,

Each section in my main process is controlled either automatically by the PLC, or in a semi-automatic mode (the operators initiate the actions but the PLC controls them), or, in some cases, manually by the operators (really manual). The sensors never "sleep", so the process always knows what is going on... sometimes.

That is, at any given time, some sections can be running in automatic, while others are running in semi-automatic, and still others are running in manual. It's a big process; 4 stories tall and 2 blocks long.

There are many operators that are real quick to lay the heat on the PLC (meaning, the Process Developer) for any production problems.

When I present my reports regarding Down-Time caused by Control Process Faults, the report lists the faults according to the particular mode that the particular section was in. That is, the specific fault followed by the particular operating mode.

The result of the report is a "Come to Jesus" sorta thing.

The report clearly shows those faults which are attributable to the idiot Process Developer, while, at the same time, showing those faults that are operator-caused.

In either case, I, as the Process Developer, know that I have a problem to solve and the particular problem is known... or, at least, I have a good idea of where the problem area is.

The job of "idiot-proofing" a process is never done. My primary effort is directed at being sure that the process isn't being developed by an idiot! As in... "Who the Hell wrote this ****?" (as if I didn't know)

As far as Ken's comment about Big-Brother...

From my perspective, my report is not intended to be a "bird-dog". It only forces a bit of "truth in advertising". If a guy "falls asleep at the switch" then he had damned well better stand up like a man and take his medicine. That applies to that idiot Process Developer as well! In fact, even if it turns out that an operator caused a problem, there is still a chance that the problem was more process-caused (according to the program developed by that idiot Process Developer) than operator caused.

The hardest thing to account for in a control process is... "What could an operator do, what might he do, that I haven't considered?"

This problem is most apparent in those processes where there is the possibility of mixed operating modes (auto, man, & semi-auto).

So, sometimes, depending on the number of possibilities, it is simply a case of handling what you have considered and then "trapping" what you haven't. At least that way, you can be made aware of, and then address, unhandled problem areas in the process as they become apparent.

This is good for everybody... and the bottom-line.
 
Have a good look at Citect. One HUGE advantage is they have drivers for almost everything, so if you run into something strange, or Modbus, or whatever, they will probably have a driver for it.

Trending is the best I have seen. www.citect.com
 
If you ever want to do OEE calculations (which front-office types LOVE), you also need to include some fields showing whether a machine is SCHEDULED for production or not, and the TARGET RATE (SPEED, parts per whatever).
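
For reference, OEE is usually figured as Availability x Performance x Quality, which is exactly why the scheduled flag and target rate fields matter. A rough query along those lines (my own table and column names, with the numeric columns stored as decimals so the division works out; not anyone's actual schema):

[CODE]
-- OEE = Availability * Performance * Quality, from an assumed per-shift summary table.
-- scheduled_min and run_min are minutes; target_rate_ppm is parts per minute.
SELECT
    shift_date,
    machine_id,
    run_min    / NULLIF(scheduled_min, 0)              AS availability,
    parts_made / NULLIF(run_min * target_rate_ppm, 0)  AS performance,
    good_parts / NULLIF(parts_made, 0)                 AS quality,
    (run_min    / NULLIF(scheduled_min, 0))
  * (parts_made / NULLIF(run_min * target_rate_ppm, 0))
  * (good_parts / NULLIF(parts_made, 0))               AS oee
FROM shift_summary
WHERE is_scheduled = 1;   -- only count shifts the machine was scheduled to run
[/CODE]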

To make the system useful in terms of planning and bottleneck determination, you need to have downtime reason codes (actual pre-defined codes or bar-codes work best) so a particular downtime can be related to base reasons:

001 - Machine Mechanical/Electrical Failure
002 - Quality Control Shutdown
003 - Waiting on raw materials
004 - Waiting for removal of finished materials
005 - Operator Break
..
..
..
020 - Preventative Maintenance.

If you are (as is best) writing events to a database (honestly, SQL is the best choice for this, over Access), you can feel free to include a free-text field or two for more detailed descriptions of downtime events (including parts replacements, problem resolutions, etc.); but make sure you have a pre-defined list of root causes first.
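
A bare-bones sketch of what that could look like in the database (names are mine, adapt as needed): a reason-code lookup table plus a downtime event table that references it, with the free-text column tacked on at the end.

[CODE]
-- Pre-defined root causes live in a lookup table; events reference them by code.
CREATE TABLE downtime_reason (
    reason_code char(3)     NOT NULL PRIMARY KEY,  -- '001', '002', ... '020'
    description varchar(80) NOT NULL               -- 'Machine Mechanical/Electrical Failure', etc.
);

CREATE TABLE downtime_event (
    event_id    int IDENTITY(1,1) PRIMARY KEY,
    machine_id  varchar(20)  NOT NULL,
    start_time  datetime     NOT NULL,
    end_time    datetime     NULL,                 -- NULL while the stop is still open
    reason_code char(3)      NOT NULL REFERENCES downtime_reason (reason_code),
    notes       varchar(255) NULL                  -- free text: parts replaced, resolution, etc.
);
[/CODE]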
 
I think you should also include a system to validate your data connections - a monitoring system to check that everything is ok.

The level of this validation will vary depending on the importance of the data, of course. A process parameter is more important to keep than a cycle-time reading, so if you don't care about losing data, this isn't needed.

What we have done to ensure we always keep the links alive is to send a "heartbeat" from each PLC to the SQL server. Each PLC sends a request to the acquisition system ("link ok?"), and the acquisition system checks the SQL history to see if it contains any stored "heartbeats" for this PLC. If it does, the system replies "ok"; if not, it replies "not ok".

I think this is a good way of catching PLC / communication / server errors "on the fly".
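
To make the idea concrete, the "link ok?" check can be as simple as a query like this against the heartbeat history (table, column and PLC names are just placeholders for the scheme described above; pick whatever time window suits your scan rate):

[CODE]
-- Reply 'ok' only if this PLC has logged a heartbeat within the last minute.
SELECT CASE
           WHEN MAX(beat_time) >= DATEADD(minute, -1, GETDATE()) THEN 'ok'
           ELSE 'not ok'
       END AS link_status
FROM plc_heartbeat
WHERE plc_name = 'Line3_V280';   -- hypothetical PLC name
[/CODE]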

Regards
Borte
 
Make sure you don't use just a SCADA package for storing data. They don't work well for large data storage/manipulation. RSHistorian or INSQL from WW.
 
Tim,

rdrast's post reminded me of a couple of items on my daily production report.

It includes the following information on a per-shift basis...
  • Percent of Shift Production in terms of Shift Quota Target
  • Percent of Shift Production in terms of Up-Time available
  • Percent of Shift Up-Time in terms of real-time.

The report also carries forward a running total on a Shift-Month-to-Date, Shift-Quarter-to-Date and Shift-Year-to-Date basis. It also includes a running total of all shifts combined for each of the categories mentioned (i.e., a Daily Month-to-Date, etc.).
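
Those three per-shift percentages reduce to simple ratios, so for anyone building something similar, the per-shift query might look roughly like this (the table and column names are placeholders, not the actual report):

[CODE]
-- Per-shift ratios from an assumed shift_summary table (numeric columns as decimals).
SELECT
    shift_date,
    shift_no,
    100.0 * parts_made / NULLIF(quota_target, 0)                  AS pct_of_quota_target,
    100.0 * parts_made / NULLIF(uptime_min * target_rate_ppm, 0)  AS pct_of_uptime_available,
    100.0 * uptime_min / NULLIF(shift_length_min, 0)              AS pct_uptime_of_realtime
FROM shift_summary
ORDER BY shift_date, shift_no;
[/CODE]

The Month-to-Date, Quarter-to-Date and Year-to-Date running totals are then just the same ratios computed over SUMs grouped by the period.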

The numbers translate very easily and nicely into any kind of graphs that you might want for evaluating production trends.

All too often the name of the game is "The Blame-Game". I had to create this part of the report because of the on-going battle between Production and Maintenance as to who was responsible for what.

Sometimes these numbers vindicate those that are being accused.
Other times, these numbers give credence to those making the accusation. In any case, the truth is laid out for all to see.

Management is very happy with the "Percent of Shift Production in terms of Shift Quota Target". That immediately gives them cause to either pass on an "attaboy" or begin the "finger-pointing-game".

The Production Department is especially happy with the "Percent of Shift Production in terms of Up-Time available" because, if they're on the ball, it shows that they did the best they could with the time available. Of course, if they've been on the dole, it shows that as well.

The Maintenance Department is especially happy with the "Percent of Shift Up-Time in terms of real-time" because, as long as the process is available, Maintenance can show that the cause of any slow-down in production is in the hands of Production. Of course, if available Up-Time is down, and Production can show that they did the best they could with the time they had, well, in that case, the issue lies squarely on the shoulders of the Maintenance Department.

The point is that I've created the report to expose all of the data for all to see. The report is printed off in the production area. They can see what the numbers are. They know when they need to do something before someone lays the hammer on them.

The Maintenance Department chooses to review the data only at the Weekly Production Meeting even though the data is available on a daily basis.
 
Beryl, I do not agree. I have several large Citect systems handling large amounts of trended data, alarm files, event files, etc. One of these runs a power station, and trending on many points takes place every 20 milliseconds. The longest "look" is every minute. Huge amounts of data, to the extent that they were able to put some trended data together on a graph and watch an engine failure take place over a period of several minutes.

I also do all the required reporting from within Citect, using the PLCs to collect and store the data until called by Citect.

Unless you and I are referring to different types of data of course.
 
BobB, I'm not knocking Citect, or any SCADA system, but I totally agree with Beryl on this. Even though a SCADA may be able to store data and report on it, it is still in the SCADA system, which is not designed as a data repository.

By pumping off production and machine info (even alarms and sometimes historical trended data) to an actual database, you make that information vastly more manageable, usable, and valuable.

Manageable: Let the company's IT department maintain the database server (they already do). That leverages "someone else" to back up the information and maintain the hardware it resides on. It also allows the company to determine and/or set permissions and access rights. Beyond that, it is incredibly easy to add storage, or span data across servers enterprise-wide if needed, without changing a single thing in your SCADA/DAQ system. With data translation services, even the database can be changed from SQL to Oracle to Sybase to a custom module in SAP without changing your application.

Usable: Keeping gigabytes of data is silly if it's not easily accessible, or is only accessible in a specific format. Dropping the exact same data into SQL lets anybody (with rights) extract whatever they need, and only what they need, at any time. User-level front ends like Crystal Reports are pretty easy to learn, which relieves you (and me, and other SCADA developers) from having to write and tweak production reports. Let the end users create their own queries and reports. Let them do regression analysis. Let them decide how long to keep data live.

Even if they are only comfortable with using Excel, it isn't all that difficult to use the query builder in Excel to pull in data from SQL on demand.

Valuable: The ease of interfacing to a database, and the multiple ways of extracting, protecting, and distributing that information using standard NON-Control-Systems-Level tools (web pages, emails, printed reports, etc.), makes the information that you provide to the database much more noticeable to the management ladder in companies, which makes your services in providing that data more important.

------------------
Don't forget, actual production/quality/error datalogging is a heck of a lot more involved than just monitoring speeds or temperatures. It involves time-based, event-based, and user-initiated data transactions. But the most important thing in any datalogger is the ability to extract and use relevant information right now. Trends don't do that.

Dumping all your time-stamped speed samples into Oracle, though, lets the Process Engineer ask "What speed was the machine running when we had this bad sample at 0425?", while the Production Supervisor can ask the exact same data "What was the average run speed during the third shift last Monday?". Oh, and the line trend is still available :)
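
For instance, assuming a time-stamped speed_log table (the machine name, dates and shift window below are made up, and the syntax shown is SQL Server flavor), both questions are a couple of lines each:

[CODE]
-- "What speed was the machine running when we had this bad sample at 0425?"
SELECT TOP 1 speed
FROM speed_log
WHERE machine_id = 'Line3'
  AND sample_time <= '2005-01-17 04:25:00'
ORDER BY sample_time DESC;        -- most recent sample at or before 04:25

-- "What was the average run speed during the third shift last Monday?"
SELECT AVG(speed) AS avg_run_speed
FROM speed_log
WHERE machine_id = 'Line3'
  AND sample_time >= '2005-01-16 23:00:00'   -- assumed shift start
  AND sample_time <  '2005-01-17 07:00:00';  -- assumed shift end
[/CODE]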
 
Update

Howdy Y'all! (I'm feeling Texish today)

I spoke to our Rockwell rep and he demo'd the Bizware Plantmetric software for me, and I was deeply impressed. Stuff looks great, all the logging we need, and a basic software installation with enough horsepower to connect to five machines is a paltry $3500.

Now, $3500 is not so paltry I want to throw it in the shredder by rushing, so I downloaded the demo of CitectSCADA...

I must confess, I'm less than impressed. OPC setup does not support browsing, so everything has to be ID'd manually. In fact, the Knowledgebase recommends using a separate browsing OPC client to get this information (WTF?). It seems like an obvious thing to have in a data transaction system, but perhaps not...

Also, RDRast and Beryl were dead-on - Citect has no built-in database to compare to RSSql in the Bizware product. Not every PLC I have to connect has the ability to generate a database (some do, some don't), but I can't rely on my data being stored elsewhere for Citect's sake.

I'm meeting the Citect guys Friday afternoon, to see if this is an accurate assessment. They may have a sister product more suited to what I'm trying to accomplish, so I'm still open to ideas.

But at this point, Rockwell is looking better and better.


TM
 
Tim,

Wonderware's DTAnalyst product is worth a look. I wrote my own downtime tracking system about 5 years ago in Intouch (still in use today) and am seriously considering replacing it with DTA. OEE calculations, Web-based reports, and just about any different way of looking at data that your IE can possibly think of. Feel free to email me personally if you have specific questions, as I have it installed and running as a test on several machines in the plant.

Jeff
 
