Hello Everyone,
I have a customer location that has a 1769-L33ER processor. There are 2 remote I/O cabinets that use POINT I/O Ethernet adapters and 1 remote rack that uses the 1769-AENTR.
Today I got a call that the PLC was down. I got into the network and found the PLC was faulted. The fault was caused by the processor losing communication with the Slot 1 module (1769-IF4) in the 1769-AENTR rack. I cleared the fault and everything went on like normal.
That got me wondering what could have caused that fault. What methodology does a processor use to determine a fault like this? Is there a certain number of retries? A certain amount of time it will wait for a response? Does the RPI have any effect on that? Could I have the RPI set too tight? I have them set to the standard times that come up when you add the module. Given that the remote racks are on fiber 1,000' from the processor, maybe the communication delay is causing some issues? Any thoughts on this?
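To put rough numbers on the RPI question, here is a minimal sketch of the commonly cited rule of thumb that a CIP I/O connection is declared lost after roughly RPI × timeout multiplier with no packets received. The multiplier of 4 used here is an assumed default, not something confirmed for this particular hardware:

```python
# Rough sketch: CIP I/O connection timeout is often described as
# RPI x timeout multiplier. The multiplier default of 4 is an
# ASSUMPTION (commonly cited), not read from this hardware.

def cip_timeout_ms(rpi_ms: float, multiplier: int = 4) -> float:
    """Approximate time of silence before the connection faults."""
    return rpi_ms * multiplier

# With a typical 20 ms RPI, silence of roughly 80 ms would
# fault the connection under this rule of thumb:
print(cip_timeout_ms(20))  # prints 80

# For scale: light in fiber travels ~1000 ft in about 1.5 microseconds,
# so cable propagation delay is negligible next to that budget.
```

This is only a back-of-the-envelope check, but it suggests the fiber run itself is not the issue; a too-tight RPI matters mainly because it shrinks the silence window the connection can tolerate.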
If you have any ideas I would love to hear them. Thanks, everyone!!