Scan Times

format C:

Our program scan times are normally around 23ms on a SLC500. There are no inputs that are so high speed that a scan time of 23ms causes a problem and the programs run OK!

I was talking with a programmer who was shocked at our 23ms scan time and pointed out that he writes code to try to keep scan times below 10ms.

What is the general thinking here on scan times?



Cheers,

F
 
format C: said:
Our program scan times are normally around 23ms on a SLC500. There are no inputs that are so high speed that a scan time of 23ms causes a problem and the programs run OK!

Scan time is a result of what you need to do in the logic. Although there are some ways to screw it up badly, there is a limit to what you can do to reduce it. You will often make the program harder to troubleshoot for the ordinary kind of guy if you use functions that reduce the number of steps but work in an abstract way (like indexed addressing).

I always go for clarity. If I need something fast, guess what: that's what high-speed inputs are for.


format C: said:
...he writes code to try to keep scan times below 10ms.

You either give him a raise because he is sooo smart, or a Krispy Kreme doughnut so he doesn't talk while his mouth is full.
 
23 ms isn't so bad.

An example where it could be worth it to minimize scan time is a production machine that has to go through a number of steps to finish a piece. Each step has to "see" a transition in the I/O that triggers the next step. If there are 10 steps in a sequence, and the minimal production time for a piece is 10 seconds (not counting time lost to the program processing the steps), then a saving of 10 ms in cycle time would yield an increase in production:

(10 x 0.01 sec)/10 sec = 0.01 = 1%.

And that could equal 1% more money in the bank !
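The arithmetic above can be checked with a few lines of Python (the numbers are the hypothetical ones from the example: 10 steps, a 10 ms saving felt at each transition, a 10 second piece time):

```python
# Hypothetical numbers from the example above: 10 sequence steps,
# each transition delayed by one scan, and a 10 ms scan-time saving.
steps = 10
scan_saving_s = 0.010           # 10 ms less delay per step transition
cycle_time_s = 10.0             # minimum production time per piece

time_saved_per_piece = steps * scan_saving_s           # 0.1 s per piece
throughput_gain = time_saved_per_piece / cycle_time_s  # fraction gained

print(f"Time saved per piece: {time_saved_per_piece * 1000:.0f} ms")
print(f"Throughput gain: {throughput_gain:.1%}")   # 1.0%
```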
 
Let's see. This programmer is saying that because HIS applications allow scan times below 10 msec, ALL applications should have scan times below 10 msec. Therefore, because a Porsche 911 can go 0 - 100 kph in 5 seconds, a VW Jetta GL should be able to do the same thing.
Blanket statements often involve apples to oranges comparisons. Scan times are very application specific and, as a practical matter, only need to be as low as the application requires. Throwing 1,000 man-hours at a program to cut 10 msec of scan time when you don't need to is bad economics.

Keith
 
If your process is running fine at a 23 ms scan time, then there is no compelling reason to try to reduce it. There are only a few things a programmer can do to reduce PLC scan time. If you're doing a lot of floating point math, you could look at whether some of it could be done with integers. Integer math executes significantly faster than floating point.

Another thing that programmers sometimes do to reduce scan time is to break the program into subroutines and only execute some of the subroutines when necessary. This introduces more variation into the scan time, but can reduce the average scan time.

If you're tempted to try to reduce scan time by breaking the program into subroutines, consider this scenario. You start with a program running at a 23 ms scan. You divide it into a main program and three subroutine calls, with each subroutine called every third scan: in scan 1, call subroutine 1; in scan 2, call subroutine 2; in scan 3, call subroutine 3; in scan 4, call subroutine 1 again; and so on. By doing this, you get the average scan time down to 10 ms. Now, for any given output coil, the rung of logic controlling its operation is solved every third scan, or once every 30 ms. By reducing the average PLC scan time, you increased the time between updates of that point.
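A small simulation makes the trade-off concrete. The timing figures here are assumptions chosen to mimic the scenario (main logic ~2 ms, each subroutine ~7 ms, so the monolithic scan would be 2 + 3×7 = 23 ms and every coil would be solved every 23 ms):

```python
# Round-robin subroutine scheme: main logic every scan, one of three
# subroutines per scan. Timings are assumed for illustration.
MAIN_MS, SUB_MS, NUM_SUBS = 2, 7, 3

scan_times = []
coil_updates = []   # times at which subroutine 1 (holding our coil) runs
elapsed = 0.0
for scan in range(9):
    active_sub = scan % NUM_SUBS      # round-robin: one subroutine per scan
    scan_ms = MAIN_MS + SUB_MS        # only one subroutine executes now
    if active_sub == 0:
        coil_updates.append(elapsed)  # coil's rung solved this scan
    elapsed += scan_ms
    scan_times.append(scan_ms)

avg_scan = sum(scan_times) / len(scan_times)        # 9.0 ms average scan
update_interval = coil_updates[1] - coil_updates[0] # 27.0 ms between solves
print(avg_scan, update_interval)
```

The average scan drops from 23 ms to 9 ms, but the coil is now solved only every 27 ms instead of every 23 ms, which is exactly the caution being made.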
 
I'd hire JesperMP. I like his attitude.

JesperMP said:
23 ms isn't so bad.

No, but Format C should calculate how much a 23 versus 10 or even 5 millisecond scan is costing him.

JesperMP said:
23 ms isn't so bad.
An example where it could be worth it to minimize scan time is a production machine that has to go through a number of steps to finish a piece. Each step has to "see" a transition in the I/O that triggers the next step. If there are 10 steps in a sequence, and the minimal production time for a piece is 10 seconds (not counting time lost to the program processing the steps), then a saving of 10 ms in cycle time would yield an increase in production:

(10 x 0.01 sec)/10 sec = 0.01 = 1%.

And that could equal 1% more money in the bank !

The rest of you are too complacent or misguided.

If a PLC has a scan time of 10 milliseconds, there will be an average delay of 5 milliseconds for each input. Now let's say this 1% applies, as in JesperMP's example, to $1,000,000 of production each month. That is $10,000, and it is definitely worth the effort to shorten the scan times. Note, we can't possibly save the whole $10,000, since scan time can't be reduced to zero (yet). Since we have determined that it is worthwhile to improve the system, the next question is how. Rewriting code takes time/$$$$ and can cause downtime/$$$$, so consider buying faster hardware.

One of the things I like about the S7 is the JL instruction that allows one to jump directly to the code that is executing, thereby skipping the need to scan non-running code. This allows one to build very fast and efficient state machines.
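The jump-to-active-state idea can be sketched in Python with a table of state handlers, where each scan runs only the active state's code and skips everything else. The state names, I/O bits, and transition conditions below are invented for illustration:

```python
# Rough analogue of a jump-list state machine: each scan, only the
# active state's code executes; all other states' logic is skipped.
# State names and I/O bits are made up for illustration.

def idle(io):
    return "clamping" if io.get("start_pb") else "idle"

def clamping(io):
    io["clamp_sv"] = True
    return "drilling" if io.get("clamp_closed") else "clamping"

def drilling(io):
    io["drill_mtr"] = True
    return "idle" if io.get("depth_reached") else "drilling"

STATES = {"idle": idle, "clamping": clamping, "drilling": drilling}

state = "idle"
io = {"start_pb": True, "clamp_closed": True, "depth_reached": False}
for _ in range(3):                 # three scans
    state = STATES[state](io)      # jump directly to the active state's code
print(state)                       # after 3 scans: "drilling"
```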

Steve Baily said:

If you're tempted to try to reduce scan time by breaking the program into subroutines, consider this scenario. You start with a program running at a 23 ms scan. You divide it into a main program and three subroutine calls, with each subroutine called every third scan: in scan 1, call subroutine 1; in scan 2, call subroutine 2; in scan 3, call subroutine 3; in scan 4, call subroutine 1 again; and so on. By doing this, you get the average scan time down to 10 ms. Now, for any given output coil, the rung of logic controlling its operation is solved every third scan, or once every 30 ms. By reducing the average PLC scan time, you increased the time between updates of that point.


This is misguided. I agree that it is good to break the program down into subroutines, but to execute one just because it is 'its turn' is not right. Again, look at the example in Hugh Jack's book. In the example the state bits are just turning a few coils off or on, but in a real program there can be quite a bit of code in one state. The code for an individual state should be in a subroutine, and if it gets very big, it should be executed/scanned only when that state is active.

In the future you will all be programming FPGAs, and scan times will not be a problem, as all rungs will execute in parallel. All that needs to happen is for someone to write the programming software.
 
Peter-
It sounds like you are discounting the mechanical system in this situation. This may not be typical but it has been my experience that machine speeds are not limited by processing times but by mechanical limits. Either there is not enough power available to move faster or adding more power will compromise the mechanical integrity of the machine. Adding processing power is something that is done up front. So if you have a case where the physical machine can go faster but the processor can't keep up, by all means start cutting scan time. But scan time is not the first place I would look to gain performance.

Keith
 
Wow! Again Peter comes with a sea of words in a desert of facts.

You mention something like a coil. What is the make-and-break time of a coil? A LOT MORE THAN YOUR 10 ms.

This is not to diminish the idea of having the lowest REASONABLE scan time.

99% of processes will never even "see" the difference between a 23 and a 10 ms scan time.

The ones that really need this have usually either been very poorly programmed OR are using the wrong hardware. They may need to be fitted with a PC instead of a PLC.

(Don't get mad, Peter, I'm just teasing you)

Having a programmer pop up the pretentious number of 10 ms instead of 23 is just plain silly.
 
Peter,

My illustration was intended as a caution. The point you seem to have missed is that the clever programmer who found a way to reduce PLC scan time could very well have decreased overall performance.

It should also be noted that cutting 10 milliseconds from a 10 second sequence is a reduction of 0.1 percent, not 1 percent.

Edit:
Rereading Jesper's post, I now understand what he's saying. If the ten second sequence has ten events where an input triggers a transition, then a ten millisecond improvement in throughput for each transition results in a 100 millisecond cumulative improvement which is 1 percent.
 
Scan times by themselves are typically meaningless. Pierre makes a good point: it should be about throughput.

Scanning faster than I/O can update can cause as many problems as it solves. Solving processes (i.e. PID and status monitoring) faster than the real world can react often results in "spaghetti" code being added during start-up.
 
Scan time trials

I just completed a project using a Unitronics Vision 280, the flagship of the line. I got the premium model with all the bells and whistles.

Now, one weakness of this product line is that it uses the same processor for scanning the HMI portion of the code as the PLC portion. This means scan times can get pretty hefty, averaging 60 milliseconds in my case.

I have over 80 subroutines in this program, covering 5 stations and the overhead system logic. Because I use integer state logic, only 20 of these are scanned at any given time. In the longest single routine, I used jumps to bypass the nets that aren't needed for that machine state.

My point is, I started out with an empty controller at 60-70 ms, and now with a completed program I'm at 60-70 ms, so most of that value is screen overhead.

Now, the bottom line - this was a retrofit job, replacing an obsolete Superior Electric motion controller. Now that it's finished, the machine is running at the same production output rate as when it was delivered, about 7 or 8 years ago.

I personally think scan time is important only in dedicated servo control applications and certain very-high-speed situations. Ultimately, the machine is constrained by its physical limits far more than by the speed the controller is running at.

Incidentally, if I'd used the CANbus option and networked some M90s into my stations, using the V280 for system overhead, I'd have had a much faster average scan time, methinks :) So there are always ways to improve it.

TM
 
Do I need to explain ROI to you?

Steve, I know you know better. I just don't like to see even bad examples posted because some rookie that doesn't read carefully may use it.

Pierre, I am not mad. My post was meant to be provocative and to make people think.

I am just disappointed that no one mentioned calculating the ROI, although JesperMP came close. This should always be done, whether one is upgrading the PLC, software, or mechanics, and obviously one should upgrade where the ROI is highest.

Tim provides an excellent case. Where is the ROI? Why upgrade? It could be just insurance in case the old motion controller failed, but there must have been a reason for the upgrade.

To just say that Format C's 23 millisecond scan is OK is not right.
The question that someone before me should have asked is: what is the 23 millisecond scan costing him? If it is nothing, then nothing needs to be done. It is that simple, but none of you asked that question.
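A back-of-the-envelope ROI check along these lines takes only a few lines. The production value is taken from JesperMP's example; the rework cost is an invented figure for illustration:

```python
# Hypothetical ROI check for a scan-time rewrite.
monthly_production_value = 1_000_000   # $/month, from the example above
throughput_gain = 0.01                 # 1% more pieces if scan time drops
monthly_saving = monthly_production_value * throughput_gain  # $10,000/month

rework_cost = 40_000                   # assumed engineering + downtime cost
payback_months = rework_cost / monthly_saving
print(payback_months)                  # 4.0 months to pay back the rewrite
```

If the payback period comes out short, the work is worth doing; if the process isn't losing anything to the scan time, the numerator of the saving is zero and no payback ever arrives, which is the point being made.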
 
In my experience, there is no "correct" scan time. I've worked on machines in the late 80's that had scan times of 90ms and they worked just fine, and I've also worked on machines where 20ms would have been unacceptable. In any case, though, I always programmed as efficiently as possible to keep the scan to a minimum (I've never seen a case where the scan could be "too short", although a couple of people mentioned it here).

The first thing I do on a new project is look at the entire process and try to find the "weak link" in the I/O. For instance, it could be some sort of cam that is sensed by a prox switch. Then, I look at the machine spec and check the minimum machine cycle time. With this info, I can determine what my maximum acceptable scan time can possibly be. For instance, if the prox is on for only 30ms at full machine speed, I would need a scan time of about 15ms to guarantee that I catch every pulse (assuming that nothing can be done mechanically to change this). From my experience, and based on the CPUs I use, my scans range from 6 to 12 milliseconds, so I would be all set. But if it turned out that the prox was on for only 10ms, then I would have to investigate a special interrupt card or something like that.
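That "weak link" check can be written down as a quick calculation. The 30 ms pulse width follows the example above; the input filter delay and the roughly 2:1 safety margin (the "about half" rule of thumb) are assumptions:

```python
# Worst-case check: to guarantee the input image catches a pulse, the
# pulse must outlast one full scan plus the input filter delay.
# Pulse width follows the example above; filter delay is assumed.
pulse_ms = 30.0      # prox on-time at full machine speed
filter_ms = 3.0      # assumed input filter delay
margin = 2.0         # safety factor (the "about half" rule of thumb)

max_scan_ms = (pulse_ms - filter_ms) / margin
print(max_scan_ms)   # 13.5 ms -> a 6-12 ms scan comfortably catches it
```

With a 10 ms pulse the same formula gives 3.5 ms, which is why that case pushes you toward an interrupt or high-speed input card instead.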

As for return on investment, the only way I can see that it's worth it to reprogram a machine is if you could actually run the machine faster (and make more parts) if the scan was shorter. Usually though, a machine's cycle time is limited by the actual machine setup and the process that it has to perform. If it physically takes 2 seconds for a cylinder to extend and rotate a part, then does it matter that I scanned it 150 times vs 200 times while it did this? Not likely. I suppose it could be argued that I could retract the cylinder a few milliseconds sooner after hitting the limit switch if the scan time were a bit less, but I could do the same thing by tweaking the air throttle a tiny bit. To me, if I use Format's example, if the machine is meeting the spec at 23ms, then there is nothing to be gained with a 10ms scan.
 
Re: Do I need to explain ROI to you?

Peter Nachtwey said:
Pierre, I am not mad. My post was meant to be provocative and to make people think.
(y)

I have an application with 2 servos with encoders AND tachometers. The tachs go directly to these old servos, and the 2 encoders go into the PLC.

Unfortunately, the positioning of the servos is directly proportional to the scan time, because the speed AND direction are commanded through analog modules.

It works fine and my client is pretty happy about it, BUT we had an issue where the width of the product would vary after some of my visits.

Bizarre!

I found that whenever I made ANY online changes, the PLC scan time would jump from 18 ms to 85 ms.

Yikes, that one was hard to find. I just did not look at scan time first.
 
I have had complex controls for 10 stations plus monitoring and 2-axis linear robots; these kinds of applications usually ran 25-35 ms scans. I have had to control two stations with minor controls in less than 10 ms of scan time. It all depends on needs and size.

I have had instances where spotting a slot in a rotating disk to stop a rotary arm, or opening a robot gripper to a certain position, was an issue with longer scans. Putting the minimum necessary controls for those in a separate routine, with specific input reading at the start and output writing at the end, allowed calling the routine more than once per scan, giving greater precision where needed for the price of a slightly longer overall scan. It's all in what you need.
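The multiple-calls-per-scan technique above can be sketched as follows. The hardware-access functions are stand-ins for immediate I/O instructions (on a SLC, IIM/IOM serve this role); the tag names are invented:

```python
# Sketch: a small time-critical routine that reads its inputs and writes
# its outputs immediately, interleaved several times through the main scan.
# Hardware-access functions and tag names are stand-ins for illustration.

hw_inputs = {"slot_sensor": True}
hw_outputs = {}

def read_slot_sensor():
    # stand-in for an immediate input read (e.g. IIM on a SLC)
    return hw_inputs["slot_sensor"]

def write_stop_output(value):
    # stand-in for an immediate output write (e.g. IOM on a SLC)
    hw_outputs["rotary_stop"] = value

def fast_routine():
    # react within this call, not on the next scan's I/O update
    write_stop_output(read_slot_sensor())

def main_scan(bulk_logic_chunks):
    for chunk in bulk_logic_chunks:
        fast_routine()   # interleave the critical routine between chunks
        chunk()          # run a slice of the ordinary program

main_scan([lambda: None, lambda: None, lambda: None])
print(hw_outputs["rotary_stop"])   # True: output tracked the input mid-scan
```

The cost is a slightly longer overall scan (the fast routine runs several times), in exchange for a much shorter worst-case reaction time on the one signal that needs it.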
 
