Marc, when I saw your topic I knew I’d be typing for hours. Sorry for the long post. Grab a coffee…
I’ve never been able to get the Nodename, View, Tagname approach for remote access names to work. We reference the IO or Data server in the remote node rather than View.
I’m in the midst of doing much the same thing as you. We’re running IT v9.0 but will soon go to 9.5. We’ve also eliminated KT cards altogether and gone with CLX bridges consisting of ENBT and DHRIO modules (that may not have been a wise choice – more on that later). Our HMIs are in redundant pairs, that is, we run the same app on 2 nodes. We also run the ABCIP DAServer on both nodes with the identical configuration, but only one is being used at any one time.
Here’s how we do it. HMI1 acts as the master or server; HMI2 is the client. The access names are set with the node name blank, so the app looks to the local node for the DAServer. In the startup script we first check the node name; if it is HMI2, we use the IOSetAccessName function to change the node name in all of its access names to HMI1. This does not allow for any failover.
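In rough outline, the startup script looks like the sketch below. This is a simplified version from memory rather than our exact code: the ThisNode tag, the access names, and the IOSetAccessName argument order shown are illustrative, so verify the call against the InTouch function reference for your version.

{ Application "On Startup" script - simplified sketch, not our exact code }
{ ThisNode is a memory message tag identifying the local node (illustrative) }
IF ThisNode == "HMI2" THEN
   { Repoint each access name from the local DAServer to the one on HMI1. }
   { Assumed form: IOSetAccessName( "AccessName", "NodeName", "ApplicationName", "TopicName" ) }
   IOSetAccessName( "PLC5_Line1", "HMI1", "DASABCIP", "PLC5_Line1" );
   IOSetAccessName( "PLC5_Line2", "HMI1", "DASABCIP", "PLC5_Line2" );
ENDIF;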
I’ve done some tests of v9.5 failover. It works, but only as far as its connection to the DAServer. If the connection to the DAServer it’s talking to fails, it fails over; but if the connection between the DAServer and the PLC fails, the failover does nothing. After discovering this, on the advice of members here and with guidance from WW tech support, we decided to set up heartbeats for the PLCs in our future tests.
First I set up separate access names for the different paths the comms can take. Do not use your existing access names; you need independent data. In our test we set up Path1 (HMI1, DASABCIP, Path1) and Path2 (HMI2, DASABCIP, Path2). Path1 used the DAServer on HMI1, and Path2 used the one on HMI2. Each new access name got one tag only, reading the seconds value of the PLC5 real-time clock (S:23): PLCsecond1 and PLCsecond2.
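For reference, the two heartbeat tags are defined roughly like this (the S:23 item address is from memory, so double-check it against your DAServer’s PLC-5 item syntax):

Tag          Type          Access Name              Item
PLCsecond1   I/O Integer   Path1 (HMI1, DASABCIP)   S:23
PLCsecond2   I/O Integer   Path2 (HMI2, DASABCIP)   S:23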
I then wrote a data change script on each of the two tags. Each script toggles a memory discrete tag, so now I’ve got 2 discretes, PLCheartbeat1 and PLCheartbeat2, toggling on and off. You could avoid this step by creating the heartbeat in the PLC, but then you rely on the processor being in run mode for the heartbeat to work. With my method the heartbeat keeps working even if the processor is in program mode (I don’t want to fail over just because we’re in prog mode).
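The data change script itself is one line; here’s a minimal sketch (the second script is identical except it uses PLCsecond2 and PLCheartbeat2):

{ Data Change script, tagname: PLCsecond1 }
{ Fires every time the seconds value changes, i.e. about once a second while comms are good }
PLCheartbeat1 = NOT PLCheartbeat1;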
Next come 2 condition scripts, one with the condition PLCheartbeat1 and the other with PLCheartbeat2. The ON TRUE and ON FALSE scripts set (=1) a new pair of memory discrete tags, Path1OK and Path2OK. The WHILE TRUE and WHILE FALSE scripts (10 seconds) reset the tags to 0.
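Spelled out for the Path1 side (Path2 is identical with the 2s substituted), each of the four script bodies is a single assignment:

{ Condition: PLCheartbeat1 }
{ On True               : } Path1OK = 1;  { heartbeat edge seen, path is alive }
{ On False              : } Path1OK = 1;
{ While True, 10000 ms  : } Path1OK = 0;  { bit stuck for 10 seconds, path is dead }
{ While False, 10000 ms : } Path1OK = 0;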
Now another single condition script looks at the condition “NOT Path1OK AND Path2OK AND NOT FailOverTrigger”. WHILE TRUE (10 sec) we set another new memory discrete called FailOverTrigger. This script will only run once after a 10 second delay, since “NOT FailOverTrigger” is part of the condition.
Another single condition script uses the condition “Path1OK AND FailOverTrigger”. WHILE TRUE (10 sec) we reset (=0) FailOverTrigger. Again, this script will only run once after a 10 second delay, since “FailOverTrigger” is part of the condition.
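Those two scripts boil down to the following sketch (the 10 second interval lives in each script’s While True trigger setting, not in the body):

{ Condition script A, condition: NOT Path1OK AND Path2OK AND NOT FailOverTrigger }
{ While True, 10000 ms: }
FailOverTrigger = 1;

{ Condition script B, condition: Path1OK AND FailOverTrigger }
{ While True, 10000 ms: }
FailOverTrigger = 0;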
The FailOverTrigger tag is used as the access name’s failover condition. We selected “switch back…” with a 10 second deadband for both.
For your other questions… We use SuiteLink whenever possible with WW software.
The DAServer will not create any network traffic unless it’s being asked for data by InTouch. Our idle (backup) DAServer on HMI2 does nothing until failover. “Advise only active” does not affect that; it affects the traffic from the server in use. The server in use will create more traffic if you Advise All. The backup just sits there idling.
As for KT cards vs CLX hardware: we’re finding that the KT cards were more efficient. The 2 nodes were absolute twins in our previous setup. Each node ran its own IO server and retrieved its own data from the PLCs. Since each node was running the identical app, this effectively doubled the DH+ traffic, but we never had any trouble.
Jump to the new setup. Our old NT boxes were getting on 6 years old, so we bought new HP hot rods, but they didn’t have ISA slots. We decided to go the CLX bridge route. In our facility we have four such HMI pairs, each talking to two PLC5s on four separate DH+ networks. On the advice of Rockwell tech support we dumped the idea of buying 8 new KT cards in favor of installing a pair of CLX bridges, each consisting of an ENBT module and two DHRIO modules, giving us a total of 8 DH+ nodes. Each of the four existing DH+ networks was wired to one of the four DH+ nodes in each of the two bridges; the four DH+ networks remain separate. This gave us some redundancy: if one bridge fails we can switch to the other. IN THEORY!!!!
We found that we could not push all eight IT nodes through one bridge; we got timeout alarms. We then spread the apps out amongst the bridges, 2 pairs running through bridge 1 and the other two through bridge 2, making sure that each DHRIO module only had one active port at any one time. This helped but still caused some timeout alarms. That’s when we started to go the server/client route with the InTouch node pairs. This helped a lot, since it cut the traffic in half, but we still get the occasional timeout alarm. FYI, our IT apps are not very big; the average tag count per node is between 2k and 3k. Update times are split: 75% are around one second, with the rest set to around 5 seconds. We set up each app’s topic names in the DAServers with unique update times (prime numbers in milliseconds).
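To illustrate the prime-number idea (these particular values are invented for the example, not our actual configuration), the topics end up looking something like this, so no two topics poll on a common multiple and their requests never pile onto the DH+ at the same instant:

Topic         Update interval
PLC5_Line1    997 ms
PLC5_Line2    1009 ms
PLC5_Line3    4999 ms
PLC5_Line4    5003 ms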
I have since had the opportunity to talk to the Rockwell tech who recommended using the CLX bridges instead of KT cards. I had him stumped. After some consulting, he feels the KT card may be more efficient since it uses the PC’s processor, whereas the DHRIO module has its own processor, which may be slower. I’m starting to think that when talking to DH+, using the hardware originally designed for the purpose, the KT card, is the way to go. The CLX DHRIO modules seem to be an afterthought to interface new technology with legacy hardware.