Hi,
I am looking at setting up some new Ethernet comms originating from an SLC 5/05 to four new, identical CompactLogix L24ERs (using SLC register mapping), both read and write (yes, I know reading at both ends would be more efficient, but I currently have no control over the software in the L24ERs).
The SLC will initiate:
1 INT write to each L24
1 FLOAT write to each L24
1 INT read from each L24
1 FLOAT read from each L24
So 16 MSG instructions in total
(If I were designing the L24 software, I would have packaged my data as contiguous INT arrays and manipulated it at the other end for decimal places, halving the MSG requirements, but there we are...)
In order of timeliness (i.e. how fast I need them to update):
1. Float writes (fast)
2. Int reads (fast)
3. Float reads & Int writes (slow-ish)
My question is:
What is the most efficient way to schedule the messages? Is it:
a) Two separate incrementing indexes, each looping 0 - 3 (one for fast and one for slow), that space the messaging equally so that all reading and writing to a given device is executed together (fast on one loop, slow on the other).
i.e. Fast Index = 0 = Device A Fast Write & Fast Read
Slow Index = 0 = Device A Slow Write & Slow Read
Fast Index = 1 = Device B Fast Write & Fast Read
Slow Index = 1 = Device B Slow Write & Slow Read
and so on...
If I did this, would I also need to include an XIO on the .EN for each message as a rung condition, to prevent the possibility of adding it to the queue multiple times if the index value comes back around before it has a chance to become .DN?
There is of course no guarantee that the Fast and Slow indexes won't sometimes cross over, causing them to try to fire at the same time.
b) Have each MSG's .DN or .ER bit (as an XIC) trigger the next MSG, and scrap the whole fast/slow idea: just 16 MSGs, with only one firing at any one time, as a cascade. I can see why this is a nice approach, but if a device is out of action, I'm concerned about waiting for the timeout before moving on to the next MSG. Is there a way to ensure the timing is not altered if a device is not responding? If so, this seems the best approach, as you are guaranteeing maximum speed at all times without the risk of overloading.
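For clarity, this is the cascade behaviour I mean, again as a rough Python sketch (names are illustrative, and the success/error outcome is faked): each MSG's completion, whether .DN or .ER, enables the next, so a dead device costs only its own timeout rather than stalling the whole chain.

```python
# Sketch of option (b): a daisy-chain of 16 MSGs where .DN OR .ER of
# each one triggers the next.
class Msg:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy  # faked: a dead device would time out

    def execute(self):
        # Returns "dn" on success, "er" on error/timeout; either result
        # hands off to the next MSG in the chain.
        return "dn" if self.healthy else "er"

# Four MSGs with one unresponsive device (index 2) for illustration.
chain = [Msg(f"msg{i}", healthy=(i != 2)) for i in range(4)]

def run_cycle(chain):
    """One full pass of the cascade: each MSG runs to completion
    (.DN or .ER) before the next is enabled."""
    return [msg.execute() for msg in chain]
```

The sketch shows the chain advancing past the errored MSG; what it can't capture is the real-world wait for the .ER, which is the timing concern I'm asking about.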
c) Just not worry so much and let all MSG blocks re-fire as soon as they are .DN. Is all of the above now a legacy concern, and is a properly designed Ethernet network more than capable of handling anything a PLC can throw at it?
Thanks