Anyone know what PLC this is?

The article said:
The PLC controlled Unit 3's condensate demineralizer - essentially a water softener for nuclear plants. The flood of data spewed out by the malfunctioning controller caused the variable frequency drive (VFD) controllers for the recirculation pumps to hang.

Such failures are common among PLC and supervisory control and data acquisition (SCADA) systems, because the manufacturers do not test the devices' handling of bad data, said Dale Peterson, CEO of industrial system security firm DigitalBond.

"What is happening in this marketplace is that vendors will build their own (network) stacks to make it cheaper," Peterson said. "And it works, but when (the device) gets anything that it didn't expect, it will gag."
I find it interesting that this is not an issue that I have noticed here. Is this type of failure truly "common among PLC and supervisory control and data acquisition (SCADA) systems", or is it just this guy talking out his ***?
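For what it's worth, the failure mode Peterson describes is easy to reproduce in miniature. Here's a toy sketch (hypothetical, not from any actual PLC firmware) of a length-prefixed frame parser that trusts its input versus one that validates it:

```python
def parse_frame_naive(buf: bytes) -> bytes:
    """Trusts the length byte blindly. In Python this just returns
    a truncated or wrong payload; in C on an embedded device, the
    same pattern is a buffer over-read or worse."""
    length = buf[0]
    return buf[1:1 + length]


def parse_frame_defensive(buf: bytes) -> bytes:
    """Validates the length field before using it, rejecting
    malformed frames instead of choking on them."""
    if len(buf) < 1:
        raise ValueError("empty frame")
    length = buf[0]
    if len(buf) - 1 < length:
        raise ValueError("declared length exceeds payload")
    return buf[1:1 + length]
```

A homegrown stack built around the naive pattern works fine on well-formed traffic and "gags" the first time it sees something unexpected, which is exactly the claim being made.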

Steve
 
Why they didn't have the device plugged into a managed switch that would kill the broadcast storm ought to garner an investigation.
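For reference, per-port storm control on a managed Cisco switch is only a couple of lines of config (the interface name and the 1% threshold below are just illustrative, and exact syntax varies by platform and IOS version):

```
interface GigabitEthernet0/1
 storm-control broadcast level 1.00
 storm-control action shutdown
```

With something like that in place, a port pushing excessive broadcast traffic gets shut down instead of flooding the rest of the network.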
 
At my last job we were dealing with a similar data storm problem that was periodically shutting down the network. The last PLC we'd put on the network was (supposedly) causing all the problems. I personally felt it was Windows trying to phone home (it was a big network, but it was not on the internet). It wasn't resolved when I left, so I can't tell you what was really going on; it could have been the PLCs. I wasn't technically working on that project either, so I'm not positive, but I think the PLCs were GE. If not GE, then they were AB.



-jeff
 
There are a lot of places to point fingers in this incident, it sounds like. The controller that flooded the network, the network that didn't have flood control, the VFDs on a critical reactor system that were tied to the noncritical network. Interesting also how all the manufacturer names were redacted from the NRC report.

Want to be really impressed ? Go read about the fire that shut down the Browns Ferry Unit 1 reactor in 1975.
 
I'm also surprised at how something like this would happen. You would think the levels of redundancy in a nuke plant would take into consideration communications as well. The other instances also surprised me with their vulnerability to 'worms' or 'viruses'.

On my own systems, which don't control anything near to a nuke, the internet cannot be accessed, and machines at the operator level have the drives disabled so there is very little chance of 'Bubba' plugging in his CD he burned at home with pictures, music, and viruses and taking the plant down.
 
What surprises me based on this incident's description is why any process critical to the system's operation is not redundantly controlled...

I also agree with CroCop about the managed switch issue.

Finally, I'm not so sure the PLC or PLC manufacturer is directly at fault. This seems to be an overall control system design flaw.

I won't even get into the poor or non-existent final system testing, either.
 
I can't help but wonder if this is little more than smoke. The guy who claims the problem is a PLC is 1) a CEO rather than (probably) someone in the trenches and 2) from an "industrial system security firm" rather than some kind of systems integrator. I imagine that his firm's interest is primarily to direct attention away from outside security threats rather than trying to identify the real source of the problem. Blaming any specific "black-box" device would do well in that respect.

Just my thoughts, though...

Steve
 
I agree, Steve. We don't have any information beyond wild guesses. It COULD have been anything on the network, and it could be a problem any of us might see in the near future. So whatever caused it, it'd be interesting to know, especially if it was the PLC. But if it was the PLC, I'm going to need more info: the actual PLC, what it was doing, what the manufacturer says should have been done to alleviate the problem, ya know, stuff like that.

Until we get that (day after never-ish?) we might as well keep throwing out ideas for how to prevent data storms and/or the shutdowns they cause. In the system my coworkers were dealing with, I can tell you they had managed Cisco switches and they weren't doing anything to prevent the storms. 'Course, we were relying on the expertise of the customer's IT department to have them all set up properly... (no, seriously, from what I could tell they were better at networking than me)
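What those switches could have been doing is roughly this: count broadcast frames per port over a sliding window and block the port when the rate exceeds a threshold. A minimal Python sketch of the idea (class name and thresholds are made up; real switches do this in hardware):

```python
from collections import deque


class StormDetector:
    """Flags a port when broadcast frames within a sliding time
    window exceed a threshold -- roughly what per-port storm
    control does in hardware on a managed switch."""

    def __init__(self, max_frames: int, window_s: float):
        self.max_frames = max_frames   # frames allowed per window
        self.window_s = window_s       # window length in seconds
        self.times = deque()           # timestamps of recent frames
        self.blocked = False

    def on_broadcast(self, t: float) -> bool:
        """Record a broadcast frame seen at time t (seconds);
        return True once the port should be blocked."""
        self.times.append(t)
        # Drop timestamps that have aged out of the window.
        while self.times and t - self.times[0] > self.window_s:
            self.times.popleft()
        if len(self.times) > self.max_frames:
            self.blocked = True
        return self.blocked
```

The point is that the policy is trivial; the hard part in the plant incident was apparently that nobody had enabled it.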


-jeff
 
Eddie Willers said:
There are a lot of places to point fingers in this incident, it sounds like. The controller that flooded the network, the network that didn't have flood control, the VFDs on a critical reactor system that were tied to the noncritical network. Interesting also how all the manufacturer names were redacted from the NRC report.

Want to be really impressed ? Go read about the fire that shut down the Browns Ferry Unit 1 reactor in 1975.

LOL, I was going through my I&C apprenticeship training at Browns Ferry when we had the fire and the fire recovery. You can't hold a candle to a TVA nuke plant.

Robert
 
I believe that for a nuclear plant, you need SIL4 because of the possibly severe consequences of a malfunction.

SIL4 cannot be met by any of the standard PLCs or industrial networking standards in existence today; for SIL4 I believe you need at least triple redundancy, for example.
SIL3 is the max at the moment with "normal" PLCs and networking standards, and I doubt that SIL4 will ever be reached.
Only dual redundancy was mentioned.
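For anyone unfamiliar with it, the triple redundancy referred to here is typically 2-out-of-3 (2oo3) voting: three independent channels evaluate the same condition and the output follows the majority, so no single failed channel can force a wrong output. A toy sketch (the function name is mine, and real SIL-rated voters are implemented in redundant hardware, not Python):

```python
def vote_2oo3(a: bool, b: bool, c: bool) -> bool:
    """2-out-of-3 majority vote: the output is True only if at
    least two of the three independent channels say True, so a
    single stuck or failed channel cannot change the result."""
    return (a and b) or (a and c) or (b and c)
```

A dual-redundant system can only detect a disagreement; with three channels you can also out-vote the bad one, which is why the higher SILs push you toward 2oo3.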
I also think that all of the most recent Ethernet-based systems are thoroughly tested against Ethernet broadcast storms. So I think the Ethernet mentioned was not one of the recent types, and/or not all components were of the proper types. In other words, the entire setup was not proper for such a critical part of a nuclear plant.

So, to me, some of the statements sound like damage control (damage control as in limiting the consequences for the responsible persons).
 
Hey, Bob,

I have worked some up at the Ferry. My memory is foggy, but I think the original units did not even have PLCs? Is that what you remember? I think the PLCs were added later in one of the infinite "modifications" to keep the NRC happy.

There are redundant systems for the critical core-cooling pumps at Browns Ferry. In the 1975 fire, they were down to the last redundancy before they got it cooled down, just short of a core meltdown. Candles are no longer used for duct leak testing.
 
