bjkallmeyer
Member
Hello. I have a weird one, or at least I think it's a weird one. It all started about three weeks ago...
I have a PowerFlex 40 with a 22-COMM-D module on a DeviceNet network of 20 drives. It is node 16, near the middle of the network. It suddenly started dropping out, which caused a running-feedback fault on the PLC. What's strange is that sometimes it shows a fault 81 and sometimes it doesn't. When it doesn't actually trip the drive, the operator can ack the fault and keep running; when the drive actually trips on fault 81, maintenance has to be called to reset it. The way our PLC is set up, pretty much as soon as we lose the running feedback from the drive, it triggers a PLC/HMI fault for the fan it was controlling. The drive's comm-loss timeout is at the default of 5 seconds, so my thinking was we were losing that running-feedback bit for less than 5 seconds. I've added a fault delay to the program so the fans keep running unless the running feedback stays low for more than 5 seconds.
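To show what I mean by the fault delay, here's a minimal sketch of the debounce idea in Python. In the real program it's just an on-delay timer (TON) in the ladder; the class and names here are hypothetical and only illustrate the ride-through behavior:

```python
# Sketch of the fault-delay (debounce) added to the PLC logic, in Python for
# illustration only. Real implementation is a TON timer; names are made up.

class RunningFeedbackDebounce:
    """Raise a fan fault only if running feedback stays low past delay_s."""

    def __init__(self, delay_s=5.0):
        self.delay_s = delay_s
        self.low_since = None  # time the feedback first dropped, or None

    def update(self, run_command, running_feedback, now):
        """Return True if the fault should be active at time `now` (seconds)."""
        if not run_command or running_feedback:
            self.low_since = None          # healthy (or fan not commanded): reset
            return False
        if self.low_since is None:
            self.low_since = now           # feedback just dropped: start timing
        return (now - self.low_since) >= self.delay_s
```

With this, a 3-second comm blip rides through without annunciating, while a real loss of feedback for more than 5 seconds still faults the fan.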
The event happens completely at random; it can be two hours between faults or up to a week. I started by replacing the 22-COMM-D. No go. I then disconnected the HIM module that's mounted on the front of the bucket. No go. Went ahead and replaced the HIM module as well as its base and cable. Still no go. I then replaced the drive. Nope. I replaced the DeviceNet cables going to/from the neighboring nodes, and upgraded firmware on the DNET card as well as the PLC (it needed it anyway). And... nope. I replaced the terminating resistors on the DeviceNet network and added the recommended grounding to the I/O common (per the Rockwell technote). Still no better. This is the only drive on a network of 20 that is experiencing any issues.
Now I've added some historian points to try to catch this gremlin, but even that's a little funky. I can see the fault status on the trend, but I have a running-feedback tag that never goes low when the fault happens; I'm thinking maybe the tag goes stale when this node has issues. I've just recently added some of the status tags from the 1756-DNB (DeviceFailureRegister, DeviceIdleRegister, ActiveNodeRegister, ScannerStatus, ScrollingDeviceAddress, ScrollingDevicesStatus, and DeviceStatus[16]). On the last fault we experienced, where it actually tripped the drive on fault 81, my new historian data didn't show this node as having any issues.
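For anyone following along, this is how I'm reading those scanner registers. Assuming the usual layout where DeviceFailureRegister and friends are bitmasks with one bit per MAC ID, a node's health is just a bit test; the register values below are made-up examples, not real captures:

```python
# Sketch of decoding per-node bits from 1756-DNB style status registers,
# assuming one bit per MAC ID. Values here are invented for illustration;
# in practice they come from the scanner's status tags via the historian.

NODE = 16  # the troublesome PowerFlex 40

def node_bit(register_value, node=NODE):
    """Return True if `node`'s bit is set in a per-node status bitmask."""
    return bool((register_value >> node) & 1)

# Example snapshot: only node 16 flagged as failed, nothing idle
failure_register = 1 << 16
idle_register = 0

if node_bit(failure_register):
    print(f"Node {NODE} flagged in DeviceFailureRegister")
if node_bit(idle_register):
    print(f"Node {NODE} flagged in DeviceIdleRegister")
```

The frustrating part is that on the last fault 81, none of these bits ever showed node 16 as unhealthy in the historian.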
So that's where I'm at. All these attempted fixes (drive, HIM, cables, resistors, and firmware) came from me and from Rockwell tech support, and I am now at a loss. It appears some type of interference is still present. The next step from Rockwell is to add a motor choke on the output of the drive, which is currently on order.
Sorry for the long story, but I figure too much information is better than too little. Let me know if anyone can be of assistance.