monkeyhead
Member
So, dealing with PLCs primarily as an end user where most of my work is customizing existing processes, I spend a lot of time analyzing others' work.
I was introduced to a new process that kinda shocked me in its lack of error control. The designer wrote some really slick logic, but didn't spend much time on error control at all. That's totally atypical of what I normally deal with, where the programmer builds in a ton of error control. The important point being that the base logic is so well written that things don't go wrong often, but on the rare occasion that they do, it's usually pretty ugly.
The thing is, without the excessive error control, the process just runs and runs... oh yeah, and it runs.
It raised the question for me: how much error control is really beneficial? Is it sometimes okay to expect the operator to deal with the side effects of problems as a trade-off for a low-downtime process?
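To make the trade-off concrete, here's a toy sketch (Python standing in for PLC logic; the function and signal names are hypothetical, not from the actual process I'm describing). One version trusts the base logic and keeps running through an anomaly; the other faults out the moment anything looks off:

```python
def run_step_minimal(sensor_ok: bool, in_position: bool) -> str:
    """Minimal error control: trust the base logic and keep the
    process moving. A transient sensor glitch is ignored; the
    operator cleans up any side effects afterward."""
    if in_position:
        return "ADVANCE"
    return "WAIT"


def run_step_defensive(sensor_ok: bool, in_position: bool) -> str:
    """Heavy error control: any anomaly faults the machine and
    stops production until an operator investigates and resets."""
    if not sensor_ok:
        return "FAULT"  # no ugly surprises, but guaranteed downtime
    if in_position:
        return "ADVANCE"
    return "WAIT"


# Same transient sensor glitch, two different outcomes:
print(run_step_minimal(sensor_ok=False, in_position=True))    # ADVANCE
print(run_step_defensive(sensor_ok=False, in_position=True))  # FAULT
```

The minimal version wins on uptime as long as the base logic is sound; the defensive version trades downtime for predictable failure modes. That's really the whole question in miniature.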
Sorry if this is another dull general discussion, but I'm interested in all of your takes on this.