wildswing
Member
Hi fellas,
I'm currently running Wonderware's InTouch v9.0 P2 and am having trouble deciding on how to do an IO failover. My apps run on twin redundant nodes, each running identically configured AB CIP DAServers, talking to 2 independent (read: different) AB PLC5s through one of two redundant Ethernet/DH+ bridges. This setup results in 4 possible paths for any one single access name (node1 das > bridge 1, node1 das > bridge 2, node2 das > bridge 1 & node2 das > bridge 2). I have 4 such HMI/PLC networks.
I realize that each node can go get its own data. After all, it's only 2 InTouch nodes and 2 PLCs, and we're only talking about maybe 2500 tags in each app. That's how I'm running them right now. However, when both nodes poll the 2 PLCs for the identical data I do see an increase in update times in View. Don't forget, I'm still on 57k "dial up" DH+ on the PLC side of the bridges. Setting it to 230k is not an option because the CLX DHRIO module will only support 230k on one channel and disables the other; I'd have to buy 4 more cards, and I don't have room for them. Anyhow, when I set up one node as the IO server and the other as the client I see an improvement in update time. That's how I'd like to run, but I won't go there until I have a good IO failover routine written.
I've been running a small test app with scripts and such as suggested by WW tech notes, i.e. an access name for each possible path with one tag each (S:23 heartbeats), as well as tags to watch IOStatus, $SYS$Status and such for each access name. But keeping track of what path I'm using, how to identify a failure, how to identify a healthy backup path, and then how to fail over and fail back is getting confusing. I think I'm over-engineering this. I need to stop aiming and pull the trigger!
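For what it's worth, the core of what I've been sketching boils down to a condition script around IOSetAccessName() (that function is in the WW script reference; the tag names, access name, and "DASABCIP" application name below are just my placeholders, not anything official):

```
{ Condition script, trigger "On True", condition: PathFailed == 1      }
{ PathFailed / BackupHealthy / PrimaryHealthy are memory discretes set }
{ by separate scripts watching the S:23 heartbeat and $SYS$Status on   }
{ each path's access name.                                             }

IF UsingPrimary == 1 AND BackupHealthy == 1 THEN
   { repoint the shared access name at the backup DAServer node }
   IOSetAccessName( "PLC5_A", "node2", "DASABCIP" );
   UsingPrimary = 0;
ELSE
   IF UsingPrimary == 0 AND PrimaryHealthy == 1 THEN
      { fail back to the primary once it's proven healthy again }
      IOSetAccessName( "PLC5_A", "node1", "DASABCIP" );
      UsingPrimary = 1;
   ENDIF;
ENDIF;
```

The part that's ballooning on me is everything outside this block: proving a backup path healthy before switching to it, and debouncing the heartbeat so I don't flap between paths.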
My vendor tech rep tells me v9.5 has built-in failover. I've subsequently read the v9.5 user manual, and it would simplify my choices into either a primary or secondary path, i.e. node1 das > bridge 1 OR node2 das > bridge 2.
I just upgraded from v7.11 and KT cards to v9.0 with DAServers and CLX bridges, so before I go through another upgrade, I have a few questions:
1 - How reliable is this new failover feature?
2 - A post in WW's tech forum states that either failover or failback changes the access name to "advise all", then asks if this is a bug or not. There's been no reply. Anyone have an answer?
Any suggestions, comments or insights would be much appreciated. Thanks in advance!