Advice on Network/Switch Configuration

scarince

I have an assembly line that I've been adding to and upgrading over the last year or so. I am starting to have intermittent Ethernet communication errors in my Numatics G3 I/O on two different PLCs. It seems to happen when I start up FactoryTalk View Studio and run the HMI application on my laptop. I've been fine up to this point, but I have more I/O to add and I'm starting to worry.

The old network was DH+ on SLC 5/04s, and everything new is Ethernet. I've attached a diagram to try to represent the system. There are 11 PLCs sharing data in one way or another. The SLCs are talking to the various CLX PLCs via MSGs through the DHRIO bridge. The CLX processors are using produced/consumed tags to talk to each other.

I have the Stratix 6000s (1783-EMS08) installed at the "machine" or "station" level, and they all run back to a Stratix 5700, but I have not enabled any management functions in any of the switches other than IGMP snooping.

My questions:

1) If I implement QoS in the 1783-EMS08, will the CIP traffic be tagged with a priority tag so that I can use the 802.1p Priority control?

2) I've tried to think through whether I need VLANs and how I might implement them, but I don't even know if they're necessary. Does anyone have experience with determining whether you need a VLAN, and then how you deploy it in an application like this?

3) Does anyone have any general advice on what good practice would be for an application like this? Anything you think I should consider / avoid / investigate?

Thank you.
 
1) I'm not familiar with that specific switch, but that sounds plausible.

2) Putting everything on one VLAN is often necessary just to use 802.1Q. I'm not sure splitting things into multiple VLANs would help. They are great if A talks to B, C talks to D, and there is no cross-traffic. It sounds like you are in a situation where A talks to B, but C also talks to B; VLANs don't work well there.

3) If you want to limit traffic between the sub-cells, additional NAT routers could be used, but that might not be necessary. I'm always an advocate of understanding the root cause of the problem before trying to fix it. I'd use Wireshark or another network analysis tool to find out what is really going on.
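If it helps, something like the sketch below is the kind of first pass I mean: summarize a capture and see how much of the traffic is multicast and who the top talkers are. It assumes a capture saved from a SPAN/mirror port and the Python scapy library; the filename is just a placeholder.

```python
from collections import Counter
from scapy.all import rdpcap, IP

# "plant_floor.pcap" is a placeholder name for a capture taken from a mirror port.
packets = rdpcap("plant_floor.pcap")

talkers = Counter()
multicast = 0
unicast = 0

for pkt in packets:
    if IP not in pkt:
        continue
    talkers[pkt[IP].src] += 1
    # IPv4 multicast destinations fall in 224.0.0.0/4 (first octet 224-239)
    if 224 <= int(pkt[IP].dst.split(".")[0]) <= 239:
        multicast += 1
    else:
        unicast += 1

print(f"multicast packets: {multicast}, unicast packets: {unicast}")
print("top talkers:", talkers.most_common(5))
```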

According to the document linked below, it sounds like ODVA recommends the use of DSCP at layer 3 instead of 802.1Q at layer 2.

https://memberplace.odva.org/Portals/0/Library/CIPConf_AGM2009/2009_CIP_Networks_Conference_Technical_Track_QoS.pdf
 
You may want to look into whether or not you are using multicast. Multicast on a flat network with no IGMP querier will flood every port with every bit of that traffic. If your switch is not capable of IGMP querying, then make sure you use unicast.
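For a sense of scale, here is a rough sketch (all numbers made up) of why flooding hurts: without a working querier, snooping can't learn group membership, so every port sees every producer's packets instead of just the connections that device asked for.

```python
# Hypothetical numbers -- per-port multicast load with and without a working querier.
producers = 20                 # multicast I/O producers on the segment
pps_per_producer = 100         # each sending ~100 packets/s

groups_a_device_wants = 2      # groups a typical device actually consumes
with_querier = groups_a_device_wants * pps_per_producer   # ~200 pps at that port
without_querier = producers * pps_per_producer            # ~2000 pps at EVERY port

print("with querier:", with_querier, "pps; without:", without_querier, "pps")
```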
 
Scott,

I started checking into how some of the I/O is configured on some of the stations, and you are correct... the unicast option is not selected in many cases.

I also now notice lots of RPIs down at 10 ms when 100 ms would probably be fast enough.
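Doing some back-of-the-envelope math on what those RPIs cost (a rough sketch; it assumes roughly two packets per connection per RPI, one in each direction, and the connection counts are made up):

```python
# Approximate packet rate generated by cyclic I/O connections.
# Assumes ~2 packets per connection per RPI (one each direction); counts are illustrative.
def packets_per_second(connections, rpi_ms):
    return connections * 2 * (1000 / rpi_ms)

print(packets_per_second(10, 10))    # 10 connections at 10 ms RPI  -> 2000.0 pps
print(packets_per_second(10, 100))   # same connections at 100 ms   ->  200.0 pps
```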

Thanks for the tip.
 
You may also want to take a look at the number of connections you are consuming and make sure you are not at your limits anywhere on the network. The EtherNet/IP capacity tool is free and can help you with that: http://www.rockwellautomation.com/g...egrated-architecture/tools/select.page?#/tab2
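Even before running the tool, a rough per-controller tally is worth doing. A minimal sketch of what that amounts to (the controller names, counts, and limit below are all made up; check each comms module's datasheet for its real connection limit):

```python
# Rough per-controller CIP connection tally -- a stand-in for the capacity tool.
# All names, counts, and the limit are hypothetical examples.
controllers = {
    "CLX_Cell1": {"io": 14, "produced_consumed": 9, "msg": 6, "hmi_prog": 4},
    "CLX_Cell2": {"io": 20, "produced_consumed": 12, "msg": 3, "hmi_prog": 2},
}
LIMIT = 64  # placeholder -- substitute the actual limit of your module

for name, usage in controllers.items():
    total = sum(usage.values())
    flag = "  <-- getting close" if total > 0.8 * LIMIT else ""
    print(f"{name}: {total} connections{flag}")
```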

I would not jump into VLANs and custom QoS settings right away; let's crawl before we walk.

I looked at your diagram, but I don't see a lot of I/O, so is this a complete diagram? If it's not a complete diagram with every device, you may want to consider updating it to reflect all devices.

Multicast traffic is a killer in networks of any decent size. In Logix programs below version 17, IIRC, the default was multicast, and since version 17 the default has been unicast, so check to see how much multicast traffic you have. There may also be a lot of multicast coming from non-Rockwell devices.

Enable unicast where you can and enable IGMP snooping ASAP. As Scott.Maddox said, you must have a querier, which is normally the default gateway of the subnet; in a physical device that is normally implemented on your core switch, which in your case would be your 5700.

The querier needs to have the lowest IP address in the switch group, so all your 6000 units should have a node address higher than your 5700.
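If you want to confirm that a querier is actually active before and after the change, a quick sniff for IGMP membership queries will show it. A sketch (assumes the Python scapy library with its contrib IGMP layer, run with capture privileges on a port that sees the traffic):

```python
from scapy.all import sniff
from scapy.contrib.igmp import IGMP

def show_query(pkt):
    # 0x11 is an IGMP membership query -- only the elected querier should send these.
    if IGMP in pkt and pkt[IGMP].type == 0x11:
        print("IGMP query from", pkt["IP"].src)

# Queriers typically send a general query every 60-125 seconds, so listen a while.
sniff(filter="igmp", prn=show_query, timeout=150)
```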

Also check connection usage, as running low on connections will bog down communications and cause lots of weird problems. You said you have a lot of messaging going on; well, messages chew up connections, as do produced/consumed tags.

Each produced/consumed tag takes up one connection regardless of whether you are producing/consuming a single BOOL or an entire array. Many people don't know this, don't pack their produced/consumed and/or messaging data, and end up chewing up almost all of their connections.

You can also enable QoS, but leave the default settings for now.

One point to keep in mind: the 6000 series switch has some Cisco integration but is not a true Cisco switch like the 5700 and 8000 series are, so you will not have the same level of performance or features. You may want to consider replacing those 6000 units with 5700s by attrition.
 
I want to close the loop on this and report back on the results of following everyone's recommendation:

IGMP:
First I reviewed the configuration of the IGMP snooping. My core switch (the 5700) did *not* have the lowest IP address, so the querying function was being handled by one of the branch switches (one of the 6000s). I don't know if this was necessarily hurting me, but I readdressed the 5700 to the lowest IP address and ensured that querying was turned on. I disabled querying on the 6000s and just have them snooping.

Unicast/Multicast:
There were multicast connections all over the place, at least six or seven. I reconfigured these for unicast.

RPI:
There was a lot of I/O configured for RPIs of 10 and 20 ms. My process will manage with 50-100 ms with no issues, so I set those higher.

I was looking at the diagnostics screen of my 1756-ENBT and I noticed all these I/O connections that didn't make sense. It turns out that as my integrators brought these new cells in and configured MSGs to the DH+, they put my ENBT into their I/O trees and set really low RPIs. I think they did it to simplify figuring out the message path in the MSG instruction. I removed my ENBT from three other PLCs, so those connections went away.

The net result is that the traffic my ENBT was seeing dropped from 2,000 packets/second to 200, and the CPU load dropped from 65% to 40%. The connection losses I was experiencing went away.

I suppose this is all common sense stuff, but the network kind of got away from me as we were adding things to it over the last year or so.

Thanks to everyone for your replies... like so many others, this community has helped me out time after time. I really appreciate it.

B.
 
Glad you got it fixed. Also, you may want to take a look at your I/O and make sure it's set for "Rack Optimized" where possible.

If it's not rack optimized, each remote I/O module can consume one connection over Ethernet, but with rack optimization on it will only consume one for the rack, with the exception of analog or specialty cards, which will still take one connection each.
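The math for a typical rack (a sketch with a made-up module mix) looks like this:

```python
# Hypothetical remote rack: rack-optimized collapses the digital modules into one
# rack connection, while analog/specialty modules still take a direct connection each.
digital_modules = 10
analog_modules = 2

direct_only = digital_modules + analog_modules   # 12 connections
rack_optimized = 1 + analog_modules              # 3 connections

print("direct connections:", direct_only)
print("rack optimized:    ", rack_optimized)
```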
 
