Industrial Ethernet Design

Hello, first-time poster.

I've been tasked with designing an industrial Ethernet network with about 200 nodes. I'm struggling to set up a proper topology for the network, as I'm not familiar with managed switches and the nodes need to be on DLR. Each node is either a VFD mounted directly to a motor or an ArmorBlock handling local I/O.

The design I'd like to implement breaks the nodes into 13 groups that make sense given the layout. Each group gets a local control box with a 1783-ETAP set as the ring supervisor, which takes care of the local DLR. All 13 ETAP connections then come back to a main PLC cabinet and plug into a Stratix 5400 20-port managed switch.

So my idea is a sort of hybrid DLR/star configuration. My issue is that most designs I see in Rockwell documentation don't show anything like this; the topology is shown as either a star configuration or DLR, not both at once. Because of this, I'm hesitant to move forward with the idea and am instead considering multiple 5400s, as they can handle 5 DLR rings.
 
Typically, I've seen the taps used for supervisory stuff, as you lose the redundancy if your tap is feeding the ultimate controller.

If I understand what you are trying to do, it *should* work... but if you lose a link between your tap and the main PLC panel, you will drop that entire group of I/O. So, on the face of it, that seems like it will defeat the whole point of using DLR.
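To make that failure mode concrete, here is a rough Python sketch of the proposed hybrid layout (the nodes-per-group count is a made-up placeholder; only the structure matters) showing which single link failures actually cost you I/O:

```python
# Hypothetical model of the hybrid star/DLR layout described above.
# Group sizes are invented for illustration.

# 13 groups, each a local DLR ring hanging off one ETAP star leg.
groups = {f"group_{i:02d}": 15 for i in range(1, 14)}  # ~15 nodes each

def nodes_lost(link_kind, group):
    """Nodes dropped when a single link fails.

    "uplink" = the ETAP-to-Stratix star leg (no redundancy)
    "ring"   = one segment inside the local DLR (ring heals around it)
    """
    return groups[group] if link_kind == "uplink" else 0

for g in sorted(groups):
    print(f"{g}: uplink break drops {nodes_lost('uplink', g)} nodes, "
          f"ring segment break drops {nodes_lost('ring', g)} nodes")
```

Every ETAP-to-Stratix leg shows up as a single point of failure for its whole group, which is exactly the concern here.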

Could you perhaps sketch something out?
 
Yes, redundancy would be lost per section of nodes; it would only be kept locally. Probably better to go with multiple 5400s.

[Attachment: Network.png]
 
Ahh yeah, I would probably try to make it a true ring if possible. Use fiber if you need to cover distance and/or deal w/ noise immunity.

Now, 200 nodes... that's a not insignificant amount of traffic for a single CLGX Ethernet card. I presume you've done the calculations and you won't overwhelm it?
 
I am in the middle of fixing a plant that went crazy with junk switches and massive star networks. I am following the Allen-Bradley/Panduit/Cisco guide, Deploying a Resilient Converged Plantwide Ethernet Architecture: https://literature.rockwellautomation.com/idc/groups/literature/documents/td/enet-td010_-en-p.pdf


We are using the redundant star setup. We also have several DLRs in MCC rooms for VFDs.


Only have a couple of sections in so far, but it's become so much better.


We are using the Panduit INZ boxes with either a Stratix 8000 or 5700. We also have 3 main points where we are using the Stratix 5410. All switches are connected with 1G fiber, and the three 5410s are using 10G fiber.


The only switch features we are using are VLANs, EtherChannel, and DLR for the VFDs.
 
If you are using DLR for resiliency, then you have single points of failure in the ETAPs and the Stratix. You should at least extend the DLRs to the Stratix, but with 200 nodes you will need more than 3 rings.
 
We just went "all in" with a network design at a customer site, using 2 main switches, each connected to 12 IE3400 (26-port) switches in the plant.
All connections are made with 2 x 8-core fiber.

Some might say this is overkill, but it is pretty sweet. I was not involved in setting up the 2 main switches, so I'm not totally on top of the software design.

We eliminated all hubs in the plant; everything is connected to one of the IE switches, no daisy chains anywhere.

We have a "hot" IE backup switch ready to swap in if we have any problems.
 
Hello:

Normally, a DLR ring should not have more than about 50 nodes.

Also, the Stratix 5400 supports up to three rings. I think the configuration below would be more stable and would allow you to have a backup supervisor.

[Attachment: 20200311_StratixQuestion_Reply.jpg]
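For sizing the rings, the arithmetic is simple enough to script. A minimal sketch, taking the ~50-nodes-per-ring guideline and the three-rings-per-5400 figure above at face value (verify both against current Rockwell documentation):

```python
from math import ceil

nodes = 200               # the OP's original count
max_nodes_per_ring = 50   # rule-of-thumb ceiling quoted above
rings_per_5400 = 3        # supervisor-capable rings per 5400, per this post

rings = ceil(nodes / max_nodes_per_ring)   # -> 4 rings
switches = ceil(rings / rings_per_5400)    # -> 2 switches

print(f"{nodes} nodes -> {rings} DLR rings -> {switches} Stratix 5400s")
```

So 200 nodes works out to at least four rings across two 5400s, which lines up with the earlier comment that three rings won't be enough.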
 
We just went "all in" with a network design at a customer site, using 2 main switches, each connected to 12 IE3400 (26-port) switches in the plant.
All connections are made with 2 x 8-core fiber.
[..]
We eliminated all hubs in the plant; everything is connected to one of the IE switches, no daisy chains anywhere.
You don't have any redundancy. Using good-quality components and fiber everywhere is a good way to minimize the risk of downtime, but it does not eliminate it.
For the OP with 200 nodes, a star topology with switches only would not be good enough.

We have a "hot" IE backup switch ready to swap in if we have any problems.
I have done something similar with CPU "redundancy". That was after the customer got the price for genuine CPU redundancy with automatic switchover; he then decided that 10 minutes of downtime would be acceptable.
 
Ahh yeah, I would probably try to make it a true ring if possible. Use fiber if you need to cover distance and/or deal w/ noise immunity.

Now, 200 nodes... that's a not insignificant amount of traffic for a single CLGX Ethernet card. I presume you've done the calculations and you won't overwhelm it?

My final count is 154 nodes. We've decided to split the devices between two L80 processors, each with its own 5400 and two DLR rings, so the devices will be distributed across 4 rings and 2 processors.

I'm not aware of how to do traffic calculations. Do you know of documentation I can look over for that?
 
Well, answer ID 474754 to start with.
If you are using L81Es, the 'node' max is 100 per Rockwell. Goes up from there.

Then, there's the connection bandwidth, which you can calculate using the EtherNet/IP capacity tool that's part of IAB.

I tossed in a mix of 30 PowerFlex 753s and 40 'racks' of ArmorPoint w/ like 4 modules each, and it calcs out to <10% I/O utilization. So probably no worries there. (Note: I assumed you would use the embedded port; run the tool w/ whatever your actual setup will be, as Contr_Conn mentioned.)
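For a back-of-the-napkin number before opening IAB: each Class 1 connection produces one packet in each direction per RPI, so the load in packets per second is roughly 2 x connections / RPI. A sketch along those lines, where every RPI and the capacity figure are assumptions for illustration; pull the real ratings for your card from IAB and the Rockwell capacity documents:

```python
# Rough EtherNet/IP Class 1 load estimate. The RPIs and the pps
# capacity below are assumed placeholders, not Rockwell ratings.

def pps(connections, rpi_ms):
    """Packets/s for Class 1 connections: one packet each way per RPI."""
    return connections * 2 * (1000.0 / rpi_ms)

load = (
    pps(30, 20.0)    # 30 PowerFlex 753 drives at an assumed 20 ms RPI
    + pps(40, 10.0)  # 40 ArmorPoint rack connections, assumed 10 ms RPI
)

capacity_pps = 100_000   # assumed packet rate for the Ethernet port
print(f"~{load:,.0f} pps = {100 * load / capacity_pps:.0f}% of capacity")
```

IAB does this same kind of math with the real per-module numbers, which is why its result is the one to trust.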
 
DLR can be expensive; those add-on cards for the VFDs get expensive quick.

My personal preference, and what I recommend to customers, is to take an IT approach to plant networking. Strategically place IDF cabinets throughout the facility; managed switches and patch panels go there, and cable runs go out to the control panels/field devices. Star topology, with the IDFs themselves in a ring.

From a cost standpoint, it's probably better to invest in a small number of high-capacity 48-port managed switches than it is to buy DLR components and low-capacity industrial switches like a Stratix. Ethernet cabling is cheap.

From a knowledge and configuration standpoint, IT-type switches and configuration will be easier to support. Especially when dealing with VLANs, routing, and overall setup, a limited number of 48-port switches of the same model are a breeze to track. In contrast, mix in Stratix switches, ETAPs, NATRs, and random unmanaged switches and you quickly have a mess of spaghetti.
 
