A-B CompactLogix Ethernet Lockout

JaqLondres

I have two 1769-L35E PLCs talking to a Red Lion HMI and to each other using their built-in Ethernet ports. Sometimes the Ethernet port on a processor stops responding (it has happened on both of them, but never at the same time). The PLC keeps scanning the program and there is no indication of any errors. Disconnecting the PLC from the network for a few seconds clears the problem. I cannot use the serial port for diagnostics because it is in use for Modbus communication. The diagnostic page on the Ethernet interface does not suggest any errors - no repeats, no discards. I changed the method of communication between the PLCs from produced/consumed tags to MSG instructions, but it did not help.
Any suggestions?
 
Welcome to the Forum !

My first guess is that there's a connection capacity issue, but you're still going to have to troubleshoot starting at the Ethernet side.

What state are the network LEDs in on the controller that stops communicating ?

When a controller has stopped communicating, you need to test Ethernet connectivity in several ways: ordinary PING, HTTP (type the address into a Web browser), TCPing, and an RSLinx Classic/RSWho browse.
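If you want to script that checklist from a laptop on the same subnet, here's a rough Python sketch; the controller address is a placeholder and the ping flag shown is the Linux/macOS form. The RSLinx Classic/RSWho browse still has to be done by hand.

```python
import socket
import subprocess
import urllib.request

PLC_IP = "192.168.1.10"   # placeholder - substitute your controller's address

def icmp_ping(ip):
    """One ICMP echo via the OS ping command ('-c 1' on Linux/macOS, '-n 1' on Windows)."""
    return subprocess.run(["ping", "-c", "1", ip],
                          stdout=subprocess.DEVNULL,
                          stderr=subprocess.DEVNULL).returncode == 0

def http_ok(ip):
    """Fetch the controller's embedded web page, as a browser would."""
    try:
        with urllib.request.urlopen(f"http://{ip}/", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def tcp_44818(ip):
    """Plain TCP connect to the EtherNet/IP port - roughly what a 'TCPing' does."""
    try:
        with socket.create_connection((ip, 44818), timeout=5):
            return True
    except OSError:
        return False

print("ICMP ping :", icmp_ping(PLC_IP))
print("HTTP      :", http_ok(PLC_IP))
print("TCP 44818 :", tcp_44818(PLC_IP))
```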

Before the controller stops communicating, do an inventory of both TCP and CIP Connection counts. The embedded diagnostics web pages are the best place to do so.

I don't have the TCP and CIP Connection limits for the 1769-L35E in front of me at the moment, but it wouldn't be the first time a CompactLogix has been pushed beyond its communication capacity.
 
CompactLogix Ethernet Lockout

I agree, the way it behaves points to communication overload. There are 5 connections altogether, way below the limit. The network LED is blinking and the PLC is still scanning the program; there is just no communication - I cannot ping the card or go online with it in any way. As I mentioned before, the embedded diagnostic website does not indicate any errors, but to view the diagnostics I first need to reset the connection. I tried to trap errors using error handlers, but I cannot reset a module the PLC is not communicating with.
 
In my experience, a controller that has run out of CIP Connections can still respond to ICMP PING, HTTP, and often TCPing on Port 44818.

A controller that's run out of TCP connections (less likely) can still respond to ordinary ICMP PING.

What you're describing may be a port failure at the Ethernet level, rather than at the TCP or CIP level.
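If you do end up scripting checks, one more probe worth adding is an EtherNet/IP ListIdentity request over UDP 44818, which uses no TCP or CIP connection at all; a controller whose EtherNet/IP stack is still alive will normally answer it. A minimal sketch under those assumptions (placeholder address; the 24-byte encapsulation header and command code 0x0063 are per the EtherNet/IP encapsulation spec, as I recall it):

```python
import socket
import struct

def list_identity(ip, timeout=2.0):
    """Send an EtherNet/IP ListIdentity request (command 0x0063) over UDP 44818
    and report whether anything answers.  No session or CIP connection is used."""
    # Encapsulation header: command, length, session handle, status,
    # sender context (8 bytes), options - 24 bytes total, little-endian.
    request = struct.pack("<HHII8sI", 0x0063, 0, 0, 0, b"\x00" * 8, 0)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request, (ip, 44818))
        try:
            reply, _ = sock.recvfrom(4096)
            return len(reply) >= 24
        except socket.timeout:
            return False

print(list_identity("192.168.1.10"))   # placeholder controller address
```

If this and ICMP both get no reply, that supports a failure low in the interface rather than connection exhaustion.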

Do you have a managed switch that can tell you something about the link conditions, or if packets are still being exchanged on the port ?

Do you have a spare output you could program to flash or otherwise indicate when MSG instructions are failing ?
 
A-B CompactLogix Ethernet Lockout

I have just installed a managed switch, but the lockout is still happening about twice a week, usually when I am not around. I have witnessed it twice myself, so it is real. The operator knows when it fails because the HMI shows "dashes" instead of displaying numbers. I tried to query the status periodically using GSV instructions and tried to reset the module, but this did not work.
 
I had a similar issue on a ML1400. Under the PLC selection of the network settings, I put a transaction timeout of 4000ms and a Comms delay of 150ms.

You can also try to auto-reset the comms:

Set up a tag driven by IsDeviceOnline(1), then give it two triggers:
Active Off, Delay 0 ms, Action: DisableDevice(1)
Active Off, Delay 1000 ms, Action: EnableDevice(1)

Note, I was using a wireless connection, so you may be able to decrease the time some.
 
It might be the HMI broadcast traffic after all...
Too bad you do not have a modular system, which would allow segregating the data-transfer networks between "time critical" and "less than critical" information.
Back to the number of connections supported by the 1769-L35E: this controller supports only 32 CIP connections over EtherNet/IP, and the usable number decreases significantly when lower RPIs are requested (a 16 ms RPI allows about 18 connections, an 8 ms RPI about 10, etc.). Online software will use up connections as well!
I really think you are pushing the limits of the system, even with a managed, IGMP-snooping-enabled switch.
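As a quick sanity check on that budget, a throwaway tally like the one below can help; every per-item count is a placeholder to replace with your actual configuration, and the 32-connection ceiling is just the figure quoted above (verify it against Rockwell's capacity documentation for the 1769-L35E).

```python
# Rough CIP connection budget for a 1769-L35E (32-connection ceiling quoted above).
# Every count below is a placeholder - substitute your real system's numbers.
CIP_LIMIT = 32

usage = {
    "Red Lion HMI":                       1,  # placeholder
    "Produced/consumed tags (other PLC)": 2,  # placeholder: roughly one per tag
    "Cached MSG instructions":            2,  # placeholder: each cached MSG can hold a connection
    "RSLinx / programming terminal":      1,  # online software uses connections too
}

total = sum(usage.values())
for item, count in usage.items():
    print(f"{item:36s} {count}")
print(f"{'Total':36s} {total} of {CIP_LIMIT}")
```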
 
I rechecked the embedded diagnostic page of the Ethernet interface - it did lock out again today - and the only indication of trouble I found is a small number of FCS errors. The Ethernet card's processor is utilized at about 28%, with 2 CIP and 6 TCP connections (one of them being me online). Since all devices are at full duplex and all the hardware is new (N-Tron switches, CompactLogix PLC, Red Lion HMI), that may point towards cabling. I'm going to check the cables.
 
That's very useful information, thank you for the followup !

I can't say with authority that this "lockup" behavior is consistent with increasing FCS error counts (those are Ethernet checksum errors), but any system with failing Ethernet cables can exhibit "flaky" behavior.
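Since the lockouts mostly happen when nobody is watching, it may be worth letting a PC on the same subnet snapshot the embedded diagnostics page on a timer, so you can see whether the FCS error count was climbing right before an event. A minimal sketch; the address is a placeholder and the page path may differ on your 1769-L35E, so point it at whatever page shows the counters.

```python
import time
import urllib.request

DIAG_URL = "http://192.168.1.10/"   # placeholder - use the page that shows the port counters
LOG_FILE = "plc_diag.log"

def snapshot():
    """Append a timestamped copy of the diagnostics page (or the failure) to the log."""
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    try:
        with urllib.request.urlopen(DIAG_URL, timeout=5) as resp:
            body = resp.read().decode(errors="replace")
        entry = f"--- {stamp} ---\n{body}\n"
    except OSError as exc:
        entry = f"--- {stamp} --- UNREACHABLE: {exc}\n"
    with open(LOG_FILE, "a") as log:
        log.write(entry)

while True:
    snapshot()
    time.sleep(60)   # the counters change slowly, so once a minute is plenty
```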

I recently had a bad Ethernet cable on a ControlLogix system that gave me a lot of head-scratching. The pinout was fine, and the link LED worked all the time. But we found that we had switched pairs: one conductor from each of two different twisted pairs made up an actual Ethernet link pair.

Remember also that "Autonegotiate" is an all-or-nothing configuration: both the switch and the device need to be set for Auto-Negotiate, or both set for a fixed Speed/Duplex. If the switch is not configurable but features Auto-Negotiation, you absolutely need to set the devices for Auto-Negotiate.
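If the managed switch speaks SNMP, you can also confirm what each port actually negotiated instead of trusting the configured setting. A minimal sketch, assuming the net-snmp command-line tools are installed and the switch exposes the standard EtherLike-MIB duplex object; the address, community string, and interface index are placeholders.

```python
import subprocess

SWITCH_IP = "192.168.1.2"   # placeholder managed-switch address
COMMUNITY = "public"        # placeholder read community string
IF_INDEX  = 3               # placeholder: the switch port feeding the PLC

# dot3StatsDuplexStatus from the standard EtherLike-MIB:
# 1 = unknown, 2 = halfDuplex, 3 = fullDuplex
OID = f"1.3.6.1.2.1.10.7.2.1.19.{IF_INDEX}"

result = subprocess.run(
    ["snmpget", "-v2c", "-c", COMMUNITY, SWITCH_IP, OID],
    capture_output=True, text=True,
)
print(result.stdout.strip() or result.stderr.strip())
```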
 
Auto Negotiate

I agree with Ken Roach.

We had an installation with two CompactLogix controllers and a mix of third-party devices. The system worked great at start-up in SE Michigan, and again at start-up on site in the mid-USA.

Later, on site, the PLCs were connected to the plant network for critical manufacturing build-recipe requirements, and then the **** hit the fan.

Ethernet lock-ups became intermittent but frequent, and we could not diagnose them even via an alternate DF1 serial connection. Power cycles were required about every other day.

The on-line Ethernet diagnostics showed a never-ending stream of FCS errors.

On site, we tried using managed switches with diagnostic capabilities to find the source of the FCS errors, but were still stymied.

The customer was running at limited volume, so this was inconvenient, but a resolution was required. Then, one day, after much screaming by us at our RA vendor, we had a ROOM FULL of RA distributor engineers AND RA networking engineers. We were off-site in our home state; we made a VPN connection to the problem site and uploaded and saved the PLC program and settings.
We examined the PLC network settings and observed that the check box for Auto-Negotiate was un-checked. Hmmm? We checked the box for Auto-Negotiate, had the remote customer stop the equipment, performed a download with the new auto-negotiate setting, and then restarted the processor.
Using the diagnostics we saw ZERO FCS errors, and the lock-up NEVER happened again.
As an integrator, we were totally dressed down and embarrassed that, after weeks of trying this and trying that, and spending money on managed switches, a simple "check box" in the PLC configuration solved ALL of the Ethernet lock-up problems.
 
Ken Roach said:
Remember also that "Autonegotiate" is an all-or-nothing configuration: both the switch and the device need to be set for Auto-Negotiate, or both set for a fixed Speed/Duplex. If the switch is not configurable but features Auto-Negotiation, you absolutely need to set the devices for Auto-Negotiate.

What about mixing and matching using managed switches?

For a while we had two SLC 5/05s on our controls LAN fixed at 10 Mb, so we configured the associated ports that way, but most other ports on multiple other switches were set to a fixed 100 Mb/full per recommendations from RA while trying to solve FTView SE 4.0 (Site Edition) issues. They said to set all PCs and affected switch ports for the SE devices to a fixed 100 Mb, because auto-negotiation is not standardized and some PCs and other devices will frequently break from the standard, causing little bottlenecks and errors.

Is it okay that we have one FTView SE 5.1 standalone station set up on a PC with its LAN card set to auto (or fixed at 100 Mb/full) that may poll devices across switch ports set to 10 Mb?

We still have occasional problems, but far fewer since IT fixed some basic problems on their side of the world (gateway, DNS, router... gosh I miss DH+).
 
We still have occasional problems, but far fewer since IT fixed some basic problems on their side of the world (gateway, DNS, router... gosh I miss DH+).

This is most likely the source of your issues. You need IT on a separate LAN. Those services can be very difficult to get and keep working on the same wire as EtherNet/IP and other industrial protocols.
 
I asked if our controls LAN was physically isolated or just a VLAN, and the reply I got was "Yes, both", but I am still waiting for a drawing of it... you should see the cabling, the switches, and the total physical mess on "their side". I think there is a router/gateway separating the two.

The few I/O nodes we have isolated behind managed ports work just fine; we started out with IGMP set up correctly before adding those machines to the network. Later, though, a switch reverted its configuration and fouled up the IGMP setup, which did cause some minor "slow-downs" on our HMI traffic, but the I/O was fine at the local machine level.
 
If your situation is anything like mine, and it sounds like it is, you may want to consider maintaining a drawing of the plant network yourself and sharing it with them if need be. IMO IT is not organized enough and does not keep good enough drawings to help us troubleshoot, so I do it myself.

Your stuff should all be on a physically separate network, and the best way I have found is to create a DMZ that holds all the assets that are needed by both the plant network and the corporate network, such as a historian, recipe manager, batch manager, etc.

What assets do you have on your plant network that the corporate network needs access to? I have found it best to have a firewall that you control for the plant network that only lets data flow to and from those predefined shared assets, and when problems arise the DMZ can be taken out of the picture quickly and easily. The firewall you control will keep any IT snafu from bothering you. Here we control everything on our side of the DMZ and it is locked down with no IT access.

I have to spend 20-30% of my time helping IT with their issues because in most cases they can't figure it out. Keeping IT off my network in a complete way was the best move I ever made to ensure uptime.
 
If your situation is anything like mine, and it sounds like it is, you may want to consider maintaining a drawing of the plant network yourself and sharing it with them if need be. IMO IT is not organized enough and does not keep good enough drawings to help us troubleshoot, so I do it myself.

I started that process... but... well... following the patch cables in one data center alone could take several weeks if I did nothing else.

We have decent documentation of where all the machinery connects to fiber converters, and a good map of the fiber "backbone" along with all the Hirschmann switches we keep up with. My counterpart is attending a Rockwell EtherNet/IP planning course this week; I am taking the same class next month to help us.

The PLC Kid said:
Your stuff should all be on a physically separate network, and the best way I have found is to create a DMZ that holds all the assets that are needed by both the plant network and the corporate network, such as a historian, recipe manager, batch manager, etc.

What assets do you have on your plant network that the corporate network needs access to?
...Nothing, really, that I can think of. We do the data collection (Ignition/RSSQL) and send that to them. I thought what they're doing is sharing our fiber backbone to get to some wireless access points for fork-truck-mounted barcode readers and some network printers. But they did say the other day that we are physically separated, so I am not sure... We do rely on IT for access to our subnet through the gateway, and for remote access over the Internet via a Cisco VPN client.

The PLC Kid said:
I have found it best to have a firewall that you control for the plant network that only lets data flow to and from those predefined shared assets, and when problems arise the DMZ can be taken out of the picture quickly and easily. The firewall you control will keep any IT snafu from bothering you. Here we control everything on our side of the DMZ and it is locked down with no IT access.

I have to spend 20-30% of my time helping IT with their issues because in most cases they can't figure it out. Keeping IT off my network in a complete way was the best move I ever made to ensure uptime.

I think we are close to being educated enough now that we could do exactly as you recommend and have a line in the sand of distinct ownership and responsibility.
 
