Sort of OT: Multicast

Steve Etter

I’m trying to get a better understanding of EtherNet/IP networking and how IGMP snooping works with multicast communications. Primarily, I’m trying to understand how to determine which devices will see a given multicast signal and which will not.

As I understand it, in a general sense, a managed switch that uses IGMP snooping controls where multicast signals are sent by first querying the devices connected to it and creating a table of which devices want to receive multicast signals. If a device is on the list, it receives them; if not, it doesn’t.

Ok. So assuming I’m not too far off base here, how does a device know it wants to be on a specific list? What is the base configuration? Is it the subnet mask or something else?

So, if I have two PLC processors connected to a common network with the same subnet mask, but each behind a separate IGMP-snooping-enabled managed switch, how can I be sure that multicast messages from a given remote I/O for one processor are not being seen by the other? How do the switches distinguish between the two?

I've read what I can find on the web but nothing I've found seems to explain what is used to define this differentiation.

I hope this makes sense.
 
I think your description is generally correct. I can show what happens at startup of an EIP Class 1 connection that uses multicast:

[Attachment: IGMP.jpg — Wireshark capture of the Forward_Open exchange and IGMP membership report]

In the CIP world, you have the Forward_Open in frame 22 and the response in frame 24. The response comes from the Target, which is the common device that will publish data over multicast. The Forward_Open response (Info: Success in Wireshark, frame 24) contains the multicast address the Target will use, to inform the Originator (i.e. the PLC) where to listen. The CIP spec gives specific rules for which multicast address to use, and this address triggers the IGMP join message in frame 25 (a membership report). Notice the multicast addresses match: once the Originator learns which multicast address it wants to listen to for this communication, it issues the IGMP join so switches that are snooping can add it to their forwarding tables. For example, on my switch,

[Attachment: IGMP_fdb.png — switch forwarding table showing the multicast MAC entry]

IGMP added a forwarding table entry for the multicast MAC address in use here, so the switch knows that port 7 (1.7 as shown) is to get this multicast data. There is a standard algorithm for mapping multicast IP addresses at layer 3 to MAC addresses at layer 2, which a switch needs to use. Recall that switches don't work at layer 3, so they do not forward based on IP (that's the job of a router).
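
If it helps to see that mapping concretely, here is a small Python sketch of the standard RFC 1112 rule (the fixed prefix 01:00:5E plus the low 23 bits of the group address); the group address is just an example:

    import socket
    import struct

    def multicast_mac(group_ip: str) -> str:
        # RFC 1112 mapping: prefix 01:00:5E + low 23 bits of the group address
        low23 = struct.unpack("!I", socket.inet_aton(group_ip))[0] & 0x7FFFFF
        mac = (0x01005E << 24) | low23
        return ":".join(f"{(mac >> shift) & 0xFF:02x}" for shift in range(40, -8, -8))

    print(multicast_mac("239.192.22.64"))  # -> 01:00:5e:40:16:40

Because only 23 of the 28 significant bits survive the mapping, 32 different IP multicast groups share each MAC address, so a layer-2 filter is slightly coarser than the layer-3 group.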

What if IGMP is not in use? Then the Originator will still issue the IGMP membership report as in frame 25, but no one is listening, or cares. So there is no forwarding table entry in the switch, and the multicast is treated as broadcast and sent everywhere... may not be so nice.
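
Incidentally, the join in frame 25 is nothing exotic from the host's side. Here is a minimal Python sketch (standard library only; the group address is hypothetical, since a real Originator learns it from the Forward_Open response) that makes the OS emit the same kind of membership report:

    import socket
    import struct

    GROUP = "239.192.22.64"  # hypothetical; learned from the Forward_Open response in practice
    PORT = 2222              # EtherNet/IP Class 1 I/O data travels on UDP 2222

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # ip_mreq: group address + local interface (0.0.0.0 = let the OS choose).
    # This setsockopt is what triggers the IGMP membership report on the wire.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    data, src = sock.recvfrom(1500)  # multicast I/O data now arrives here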
 
Robert - Thanks for the in-depth response. I make no claim to be particularly knowledgeable on this subject, but it seems we still haven't quite hit on what I'm looking for. Here is what I got from your response:

A PLC Processor will send out a Forward_Open call.

A common device on the network will respond with a Forward_Open response and within that response is the multicast address that the switch then uses to route future communications. Similarly, any common device on the network will respond in like fashion to this Forward_Open call.

Assuming that summary is correct, suppose I have several processors on a shared network, all on a common IP address and subnet scheme. As I understand it, each processor will issue its own Forward_Open call and any common device out there can respond and thereby establish a multicast address with multiple processors.

Is this right? If it is, are there techniques and settings for multicast devices to prevent this?

My goal here is to make sure that unwanted traffic from common devices never reaches processors that shouldn't see that traffic. At the moment it looks like the only way to do this is to physically isolate each machine-based LAN.

Steve
 
...
how does a device know it wants to be on a specific list? What is the base configuration? Is it the subnet mask or something else?

There must exist an IGMP querier on the network. I recommend one and only one querier, regardless of IGMP version. If one already exists on the network, you will see membership packets as shown above. If one does not exist, you can designate one of the managed switches to act as the querier. I strongly suggest matching the IGMP versions of the querier and of all switches with IGMP enabled.

The querier builds the member list.
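
As a mental model only (not any vendor's actual implementation), the member list a snooping switch keeps behaves roughly like this Python sketch, with entries aged out when a port stops answering the querier's periodic general queries:

    import time

    class SnoopingTable:
        """Toy model: multicast group -> ports that reported membership."""

        def __init__(self, timeout_s: float = 260.0):
            self.groups = {}           # group -> {port: time of last report}
            self.timeout_s = timeout_s # e.g. 2 x query interval + response time

        def saw_membership_report(self, group: str, port: int) -> None:
            self.groups.setdefault(group, {})[port] = time.monotonic()

        def ports_for(self, group: str, all_ports=range(1, 9)) -> set:
            now = time.monotonic()
            live = {p for p, t in self.groups.get(group, {}).items()
                    if now - t < self.timeout_s}
            # Unknown or expired group: many switches flood like broadcast,
            # which is exactly the "no IGMP" behavior described above.
            return live if live else set(all_ports)

    table = SnoopingTable()
    table.saw_membership_report("239.192.22.64", port=7)
    print(table.ports_for("239.192.22.64"))  # {7}: forward only to port 7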

So, if I have two PLC processors connected to a common network with the same subnet mask, but each behind a separate IGMP-snooping-enabled managed switch, how can I be sure that multicast messages from a given remote I/O for one processor are not being seen by the other?

My favorite way to prove it is with Wireshark, a laptop, and port mirroring. If you stop getting EIP packets when you enable IGMP snooping, you know it worked. I am a "show me the wires" kind of guy and still a novice with this stuff, but I have been down the IGMP road recently. We basically have one subnet with a growing number of nodes (over 200), some of which are EtherNet/IP I/O. With well-managed IGMP, we have no problems.
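
If you'd rather script that check than click through Wireshark, here's a sketch using scapy (assuming you have it installed and capture privileges on the laptop attached to the mirror port) that prints every IGMP packet seen:

    from scapy.all import sniff

    # "igmp" is a standard BPF capture filter, the same one Wireshark accepts.
    # After enabling snooping you should still see joins and queries, but no
    # more unsolicited EIP multicast on ports that didn't join.
    sniff(filter="igmp", prn=lambda pkt: print(pkt.summary()), store=False)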

Also, where possible, you can negate the need for IGMP if you update firmware to a level supporting unicast for I/O... Off the top of my head, I don't recall the revision number; might be 19.0?

I want to say that if IGMP gets disabled, the network will continue to filter until the next power cycle, at least with the Hirschmann switches we use, so make sure to save any configuration changes to non-volatile memory and back them up.
 
A PLC Processor will send out a Forward_Open call.

Only if you configure it. It won't do it automatically - you as the designer have to tell the system what communications you want. If you want EIP Class 1 that uses multicast, you will configure it, and the system will respond by issuing the Forward_Open to fulfill your requirements.

A common device on the network will respond with a Forward_Open response and within that response is the multicast address that the switch then uses to route future communications. Similarly, any common device on the network will respond in like fashion to this Forward_Open call.

True, it will respond if capable. Assuming it is...

Assuming that summary is correct, suppose I have several processors on a shared network, all on a common IP address and subnet scheme. As I understand it, each processor will issue its own Forward_Open call and any common device out there can respond and thereby establish a multicast address with multiple processors.

Is this right? If it is, are there techniques and settings for multicast devices to prevent this?

Each processor will only issue Forward_Opens to the devices it is attempting to communicate with. If you did not configure this, it won't do it. Also, the free use of "any" and "can respond" is problematic: if you configure the communications between the devices and they all support what you are trying to do, then they will respond; otherwise they will ignore it. So no multicast address, no IGMP join, no multicast traffic... If IGMP snooping is enabled on a switch, multicast will only go to the ports that have devices that requested it. I see no reason why devices on your network would issue IGMP joins for multicast groups they do not want, according to your configuration.

I usually configure unicast communications whenever I can to avoid this whole issue. IGMP can take many seconds to update, and it requires configuration - what to do with unknown multicast? Send it to querier ports, or all ports? How to manage redundancy with multicast? Lots of issues to deal with here.
 
After extensive "experimenting" with managed switching, machine-level segregation, and unicast-capable firmware implementation, our "unwanted traffic" mitigation combines all of the above; however, it adds one more component: device-functionality subnet segregation.
The "critical" (Priority #1) traffic is, of course, the I/O traffic, hence the CPUs and I/O devices (Remote I/O Comms Modules, VFDs Comms.Modules, etc.)were firmware upgraded as to permit Unicast functionality and "isolated" on "Level 0" (I/O Class) subnets.
The Produce/Consume CPUs were also segregated on "Level 1" (peer-to-peer) subnets, since multicast efficiency is ensured at the "producing" CPU level (the user specifies the number of consumers at the time of Produced tag configuration).
"Level 0 and 1" subnets are exclusively un-managed "switched" unless used within redundant applications.
The "broadcast" functionality devices or software such as HMIs or SCADA applications were completely "disconnected" from "Levels 0 and 1" and networked onto a "Level 3" subnet where managed switching enforces the most efficient data transfer.
I am aware that this approach is not always feasible, and there probably are more "elegant" solutions out there; however, it is up to the user to decide the intended topology attributes' priorities. For us, the main goals were data transfer efficiency and future development flexibility/capacity.
 
For the moment, I’m focused on CompactLogix processors, PowerFlex drives (communicating via COMM-E cards), Point I/O Ethernet comms modules, and PanelView Plus HMIs. I am then using the programming tools available with RSLogix5000 Professional, Connected Components Workbench, and FactoryTalk View Studio. All of these, of course, are Allen-Bradley products.

It’s certainly possible there are settings and programming features within these tools of which I am currently unaware, but at this time I don’t think I have much choice regarding what communications settings I get to use. The Ethernet protocol is EtherNet/IP, and about all I get to do from the processor side is set the IP address and subnet mask. I do, however, with the newer versions of the Logix5000 software, get to select unicast addressing for configured remote devices, but this is only within the processor itself. Once again, I don’t see (or understand) how this limits the common field devices so that they only respond to calls from a specific processor. As best I can determine, every like processor that is configured to use EtherNet/IP and needs to communicate with remote Ethernet devices is going to send out a generic Forward_Open call, and every remote device is going to try to respond. Sure, the processor knows what to pay attention to, but that’s not the point. It looks like the traffic is going to get back to it anyway and have to be rejected.

If that is not true, what are the specific differences, and how do the field devices know to respond to one call but not another? In other words, how can setting the I/O configuration in the processor (to unicast, for example) prevent the actual field device from responding to calls from other processors? It seems this would need to be a field-device setting. Nothing in many of these field devices (I believe) is processor-specific.

My main concern has to do with the PanelView HMIs. It’s my understanding that these tend to flood the network with lots of comms. If so, I really want to make sure none of that traffic reaches unintended processors. At this time, I am not aware of any setting in either the processor or the PanelView that will prevent this.

dmargineau - your subnet segregation is the sort of solution I am hoping to avoid, if at all possible, but it is also the only type of solution I am seeing.

I hope I’m not just being dense here.

Steve
 
Sorry OkiePC - I didn't see your response earlier.

I'm relatively satisfied that I know what to do with respect to IGMP snooping and querying. My questions have to do with what's actually happening and what we can do at the device level to prevent the traffic, rather than simply accepting that the managed switches are handling it. I figure there has to be a bottom line here; a specific setting from the remote device that defines "this message is intended for this processor only". If there isn't, what does the switch use to make this determination? From what I see so far, our processors send out non-specific requests for comms and then ignore those they don't need.
 
Steve,
EtherNet/IP Logix remote I/O (including COMM-E modules) is unicast-capable when the system CPU firmware is at revision 18 or higher; I believe the VFD communications modules have to be at rev. 4 or higher to allow for unicast connections.
As for individual I/O modules, a unicast-connected module (either Input or Output) could be "owned" by only one CPU; Input modules could be "listened" to by any subnet CPU; however, this is implemented via the RSL5K user application; the user "decides" if a CPU other than the "owner" is to "listen" to the respective Input module.
Once implemented, unicast data transfer (including "listening") is exactly what the "unicast" attribute implies: data transferred from Point A to Point B (nowhere else!)
Produced/Consumed (CPU-to-CPU) data could be either "unicast" or "multicast" (X to Y and X to Z and X to W, etc.); however, again, this is also decided by the user application.
Up until this point, therefore, the user application is the enforcer of the "unwanted data" mitigation (if you do not want useless data transfer, just unicast the I/O and do not Produce more than the available Consumers).
Now comes the FactoryTalk View "bandwidth hogger"; due to its very own nature, an HMI application is a "broadcast creature"; a relatively "light" HMI application with a properly configured Runtime Communications Path (absolutely no "Copy from Design to Runtime" "feature" implementation unless designed from within the subnet!) will not have any "major" impact when present within a properly implemented I/O subnet; I have personally encountered two or three HMI applications happily running on a perfectly functional EtherNet/IP I/O subnet.
However, bandwidth is band-width after all; you can "manage switching" only so far; development and system "feature" additions will eventually bog down communications on a single subnet; segregation will exponentially increase the system's data transfer capability.
Then again, this is the way I see it...:D
 
"the user "decides" if another CPU than the "owner" is to "listen" to the respective Input module...data [is] transfered from Point A to Point B (nowhere else!)
This is where I see the breakdown; the difference between "hearing" and "listening". Sure, I can see that only one processor may "listen" to the data, but it seems all processors will "hear" it and then have to ignore it. I'm trying to eliminate that extra traffic. Right now I don't believe that data is only pumped from Point A to Point B, without something more specific that tells me "look, this setting right here, which only matches that specific processor, is what the managed switch uses to route it there and only there".

Produced/Consumed (CPU-to-CPU) data could be either "unicast" or "multicast" (X to Y and X to Z and X to W, etc.); however, again, this is also decided by the user application.
It's easier to believe that this is truly an isolated conversation, since it is IP-address-specific. I don't know for sure if that is the only differentiation, but at least it makes sense that the switch would know not to route data intended for a specific IP address to any other on its network.

Now comes the FactoryTalk View "bandwidth hogger"; due to its very own nature, an HMI application is a "broadcast creature"; a relatively "light" HMI application with a properly configured Runtime Communications Path (absolutely no "Copy from Design to Runtime" "feature" implementation unless designed from within the subnet!) will not have any "major" impact when present within a properly implemented I/O subnet; I have personally encountered two or three HMI applications happily running on a perfectly functional EtherNet/IP I/O subnet.
This, I suspect, is the one area where there is nothing better - at least for the time being.

The long and short of all this is not necessarily to identify best practices but to understand how traffic is actually limited, especially traffic that has to be ignored by a device. I want to know what's going on under the hood rather than simply being dumb and happy while riding shotgun.

Steve
 
As best I can determine, every like processor that is configured to use EtherNet/IP and needs to communicate with remote Ethernet devices is going to send out a generic Forward_Open call, and every remote device is going to try to respond.

I guess the confusion here is the processor does NOT send out any generic EtherNet/IP Forward_Open messages. Suppose we have a simple network with a single processor and two IO blocks, say A and B (these can be drives, whatever).

Say you want to communicate with one of these, say block A, and NOT B. Just block A. You would configure in RSLogix5000 that you want to have IO communications (say, for example, EIP Class 1) with IO block A, by adding the EDS file in newer revs of RS5K, or in older versions maybe by adding a Generic Ethernet Device, or whatever. With that entry, the processor will then send a Forward_Open ONLY to block A. Not to block B. Not anywhere else. This is why I say you have control with the configuration, and this step is all unicast communications. If block A accepts this Forward_Open, then the process I described earlier will be implemented, per the Wireshark trace.
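
To make the point-to-point nature of this setup stage concrete, here is a minimal Python sketch (the target IP is hypothetical) of the first step of any such exchange: a plain TCP connection to one specific device on the standard EtherNet/IP port 44818, followed by a RegisterSession (encapsulation command 0x0065). The Forward_Open then travels over this same one-to-one session; nothing here is seen by block B:

    import socket
    import struct

    TARGET_IP = "192.168.1.50"  # hypothetical: this is "block A", and only block A

    # EtherNet/IP encapsulation header: command, length, session handle,
    # status, sender context (8 bytes), options
    header = struct.pack("<HHII8sI", 0x0065, 4, 0, 0, b"\x00" * 8, 0)
    payload = struct.pack("<HH", 1, 0)  # protocol version 1, option flags 0

    with socket.create_connection((TARGET_IP, 44818), timeout=2.0) as s:
        s.sendall(header + payload)
        reply = s.recv(28)                               # 24-byte header + 4-byte data
        session = struct.unpack_from("<I", reply, 4)[0]  # handle assigned by the target
        print(f"Registered session 0x{session:08X} with {TARGET_IP}")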

Block B would not be affected if we have IGMP querier and snooping enabled (i.e. a proper multicast filtering implementation). It would never see the Forward_Open, the response, or any of the unicast traffic on a switched network, and none of the multicast traffic with proper multicast management (for example, EtherNet/IP uses only IGMP, but there are others, such as GMRP, PIM, etc.)

If we do not have a proper multicast filtering implementation, then this is when the problem starts. Block B, up to now almost completely ignorant of the communications between the CPU and Block A, will now receive the multicast frames from the CPU<->Block A communications and have to deal with them. Since they are treated as broadcast, they consume CPU resources, which is the fundamental problem. They will be dropped, but it takes resources to do this, and too much of this traffic will consume all the CPU on the lowly little processor for IO Block B, and it could crash 🙃
 
Probably Ken is already preparing the correct answer for all (most of) your questions... 👨🏻‍🏫...:nodi:
I have personally spent more than enough time trying to answer questions that only the protocol/hardware rightful owners could truly answer... And they will probably never answer them anyway... That's why they're still in business after all...:D
 
Please don't think I'm trying to be difficult here; I'm not. I'm truly trying to understand, and somehow I still just don't get the multicast part.

I guess the confusion here is the processor does NOT send out any generic EtherNet/IP Forward_Open messages..... the processor will then send a Forward_Open ONLY to block A. Not to block B. Not anywhere else. This is why I say you have control with the configuration, and this step is all unicast communications.
This makes sense where unicast communications are concerned. If the processor initiates device-specific conversations rather than open, generic ones, I can begin to see how that can work.

But it still sounds like multicast comms go out broadly and then both Block A and Block B will see it, yes?
 
But it still sounds like multicast comms go out broadly and then both Block A and Block B will see it, yes?

Your questions are reasonable. The process of EtherNet/IP Class 1 or IO communications progresses in stages; the first stage is unicast, through an explicit message, the Forward_Open. Once that is complete, the connection progresses to data publishing. Many devices support unicast communications even for this phase; some do not.

But if we are using multicast, which is the default in many cases, you are correct: all devices will see the multicast communications if a suitable multicast management implementation (such as IGMP) is NOT in place. But that's the whole point of IGMP: stop the other devices from seeing the multicast traffic they do not want to see. The registration process shown earlier in the screen capture shows how a device chooses what it wants to listen to.
 
Ok then. It does indeed look like we have come full circle but I think I have learned something.

Here is what I see:

On my network there will be three kinds of devices with respect to a correctly configured switch with IGMP snooping:

First, there are those devices that have no interest in anything happening on any of my PLC sub-LANs. IGMP snooping will recognize this and prevent traffic from reaching those devices.

Second, there are those PLC-related field devices that have been configured for unicast connections with their associated processor. Like the devices in the first category, only the specific communications from the processor will be sent to the device, and then the specific responses from the device will be sent back to the processor. Thanks to IGMP snooping, none of these point-to-point communications will go anywhere else.

Finally, there are the multicast communications, where non-specific calls are made to all devices on the network. These calls, made by the processors, are seen and then responded to by any device out there that is not specifically configured to do otherwise. These responses are then "heard" only by the originating processor, but ignored if the processor is not specifically configured to "listen" to the response. IGMP snooping in this particular instance only limits the routing of device responses so they go only to the initiating processor. There is still a lot of undesired traffic (devices responding to multiple processors).
 
