Brute force vs. "programming"

At the companies I have worked for directly, all engineers follow projects from the initial meetings to final commissioning on-site; this has proved to be a winning combination.

It is also why many people in automation have poor lives outside of work, are burnt out, or have simply moved on so they don't have to spend their lives working away.

I worked for a company that had dedicated field engineers and dedicated programmers in the office. The difference from what other companies tried was that the field engineer wasn't a mechanic who knew computers, but rather a programmer who wanted the money, travel, or lifestyle the role brought, and both teams worked well together to tweak the program being deployed. Obviously, it helped that they had a massive code base from which most projects were derived, so there was only the occasional new product or new iteration, and usually even those had been a collaboration between the programmers and the field engineers, whether by creating feature requests or right down to the coding and testing.
 
In the case I just mentioned, the commissioning engineers were well trained. The problem is that once commissioning starts there is little chance for the site engineers to modify the program extensively: time is short, it goes against the grain of the office engineers (it's OK for them, they designed it so they know where to look), and in many cases it is difficult to shoehorn them out of their comfortable seats to visit site.
If you have good collaboration between site and office it's great, and if they both work on the same project from start to finish it will work. Not that I have experienced it myself, as I have always done both, but in many companies that have separate commissioning engineers, those engineers have no input into, or knowledge of, the system until they are thrown on-site.
I could mention a very good example of this, and it was a problem with standard blocks they had developed, however, it would take too much time to explain.
 
TL;DR


There is a recent thread with a topic something like "What process do you use to write programs?" This thread seems to be overlapping with that, which is fine with me. Out in the procedural programming world there are many approaches: waterfall; test-driven design; agile/scrum/many other names; etc. This will be a post about waterfall vs. agile.


As understood by most people, waterfall at its core is based on requirements; it is the traditional way software was done back in the day, e.g. similar to building a process from a design specification. The relevant parties look at the process and define what it must do, along with maybe some items it "should do." A lot of effort goes into the requirements document, then it is passed on to the programmers. The theory most people have about waterfall is that you go once through the requirements document process, then once through the programming process, and when the software meets the requirements (e.g. passes the written tests), the system will work.

I was hired late on a project as a remote worker. I would get on telecons and all they would talk about was this or that section of the requirements document, and I remember thinking, "is anyone writing code?" There was a critical design review (CDR) scheduled that would determine whether the code passed a set of tests, and the project was producing so little, after three years, that the pass/fail metrics for the CDR were dropped back to a PLC/process equivalent of "if a box of a particular size is placed on the end of the conveyor, the prox senses it and the conveyor turns on."

So the CDR starts, and they are failing 80% of the metrics. This one test engineer, she saved the project by standing tall, saying "this failed," and not allowing exceptions (e.g. coders will say "yes it failed, but we found it was a single typo in line 1356 and it will work now," and sometimes that is accepted in place of a successful test). Oh yeah, and the conveyor did not start when the box hit the prox. So maybe a third of the way through the two-day CDR, the head honcho shuts it down and starts interviewing people about the project. Remember, this has been going on for three years.



Some people get fired, new people are brought in, and the new head programmer switches to agile. He asks the young programmers "should we refactor?" and they give a frightened, confused, "yes?" Apparently his predecessor more or less wielded the requirements document like a metaphorical (not physical) club "make the code do what it says here and the system will work," and everyone was abused and traumatized.



So they switch to agile. Agile, to me, is defined by small bites and humility. Small teams work only on one small piece of the project for small periods of time - two weeks is long. Then everybody gets back together and reviews what was done and whether to accept it into the project. The coolest thing about this process is, in that environment, when everyone is in the room looking at proposed code and asking questions, it is most often the author of the code who ultimately decides to reject that code in its current form during its review, and often for a very small thing that they know can be better. That is one place where humility comes in.


The other place for humility is in the overall approach; that takes a bit to explain, so here is a common example of how an agile process works. An agile-based software company comes in to do a project, and the client says "here are our 100 requirements." The agile folk say, "What are the most important three?" The client says, "There are 100, and they all must be met." "Yes, we know, just identify the top three." So the client points out the top three, and Agile heads off to code them up. They come back in a week, review them with the client, find where any tweaks are needed, then ask "what are the next three most important?" "But there are 97 more" is the response, and you can guess how this is going. But the punch line is, after Agile completes about 20 items and asks for the next three, the client says, "Don't bother, the system is fine as it is now, you're done."


So the humility in that story is that the agile-minded assume they do not fully understand the process, and work on a small piece they think they might understand, and the process knowledge gained from coding each small piece builds up and informs the design and coding of the future pieces. Another aspect of the humility is the willingness to refactor, literally throwing away the code to that point and starting over, when they see it is not going to work.



On the project I was on, they refactored the overall design three times at the beginning, and after a few weeks they were halfway to solving the box/conveyor/sensor problem (that was a metaphor for the current audience; it was more complex than that) that had not been solved in the previous three years. About a month later they were well past it, and they have not looked back. Looking at the system they have now, I can see analogies to automated processes, SCADA, history, and the ability to automatically detect changes in a spec and regenerate the relevant products.


One interesting sidebar is that, while most of the world treats waterfall as a once-through process and expects a working system at the end of it, I heard that the person who came up with the waterfall process says it needs to be run two or three times or more to get the final system right. So in one sense, properly run waterfall has a lot in common with agile; it is only the sizes of the bites taken that differ.


P.S. I have greatly oversimplified "agile" here; there are scrums and scrum masters and product owners and a lot of other pieces. I expect some would disagree with my characterization, but that is what I saw and it was amazing.
 
I use brute force or finesse depending on so many things.
Which method is the easiest to code?
Which method runs the fastest? Will the time saved make up for the extra time to code, debug, or change it later?
I tend to use more brute force in automation, whereas when programming in C or Python I use more finesse.


You guys are talking tactics, not strategy. drbitboy finally mentioned knowing the process, the strategy part. A knowledge of algorithms is important, and of when to use them. A bubble sort is fine if the number of items to be sorted is small; other methods are much better when the number of items is larger. If I had to sort 3 items I could do that straight inline without loops.
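That inline three-item sort is essentially a tiny sorting network: a fixed sequence of three compare-and-swaps with no loop. Here is a minimal Python sketch of the idea (in ladder, each compare-swap would be its own compare rung driving a pair of moves):

```python
def compare_swap(a, i, j):
    # Put a[i] and a[j] in order; the ladder equivalent is a compare
    # instruction driving a pair of moves.
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def sort3_inline(a):
    # A fixed 3-element sorting network: three straight-line compare-swaps,
    # no loop, and it sorts every possible input order.
    compare_swap(a, 0, 1)
    compare_swap(a, 1, 2)
    compare_swap(a, 0, 1)
    return a
```

The fixed three-step sequence handles all six possible input orders, which is why straight-line code beats a loop at this size.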
 
I'm answering several posts.

I did have a boss looking over my shoulder, yelling at me, calling me an idiot, even threatening to fire me in front of our customers because of something I knew I had to fix after the customer left. I finally quit.

I have written PLC code for the PLC-2, PLC-5, SLC 100/150, SLC 500, Omron, and Mitsubishi.
The biggest program was 2400 rungs on a multi-station walking beam. I have programmed FOR-NEXT loops for a message display, and even indexed addressing.

When I stated that a machine would not run, what I was referring to was a machine programmed by my co-worker. He NEVER, EVER, EVER used the statement A>=B, A<=B, or A=B. Every time the machine faulted, there were 8 places the program could have stopped. An OEM programmed a machine with 14 latch locations and 24 unlatch locations of the same bit. How do you debug programs like this? I rewrote that program and downtime went from 24 hours per 80-hour week to less than 10.

As machine builders/programmers, we need to always be aware of maintenance and program-debugging capabilities. I have always tried to be mindful of them.

I am not slamming anyone and do not mean to insult anyone; I apologize if I have.
Just trying to explain my posts.
james

James, if someone is busting your chops for any of your posts then they are not familiar with your history of helping people on this site. Over the years your posts have helped me and many others. Don’t let the b@$tards get you down.
 
There is a GitHub link in this thread; at the bottom of that page is this quote, which I think sums up the original question for me.

Although design patterns can ease your life significantly, the best solution is always the simplest solution (and/ or procedural), no matter what fancy pattern you throw at it!

I started as a service tech, and I think this has contributed immensely to my success as a programmer. To my downfall, I'm mostly self-taught; I didn't have the computer-whiz mentor from whom to learn some of the complex-but-simple concepts that have been on display in recent discussions here. I think in the end, unless it is a proprietary piece of equipment and you receive training on a specific way it should be done, you owe it to the guy after you to make your work serviceable.

If the process is simple but the code is complicated and only the original author knows the ins and outs, then you have just dealt a **** sandwich to the next guy that has to work on it after you are gone and forgotten. This happens a lot, and for various reasons.
 
Although design patterns can ease your life significantly, the best solution is always the simplest solution (and/ or procedural), no matter what fancy pattern you throw at it!


Allow me to tease that out a bit.


This thread has gotten very broad as to what non-brute-force means, but the example in the OP was much simpler, i.e. when a loop is the better choice: specifically, whether to


  • write one rung using a loop, hopefully correctly the first time, but at least so it need be fixed or modified only once,
  • OR
  • write the same rung N* times with the only difference between rungs being one number, e.g. the /BIT suffix in A-B syntax, so any change will need to be repeated N times?
An argument can be made that there are cases where the loop is the simplest route. If the commissioning/startup team, or a maintenance tech or myself five years down the road, have to make it subtract 3 instead of add 1 to each integer, do they not bless me when they see one rung with a loop, instead of a few dozen rungs that they have to change manually?
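As a rough analogy in Python (hypothetical data, since ladder can't be shown here), the maintenance difference between the two choices looks like this:

```python
N = 24  # hypothetical number of near-identical rungs

# "Brute force": the same operation written out once per element; a change
# from "+ 1" to "- 3" must be repeated N times (only three shown).
data_unrolled = [0] * N
data_unrolled[0] += 1
data_unrolled[1] += 1
data_unrolled[2] += 1
# ... 21 more hand-written copies in the real thing ...

# "Loop": one statement, one place to edit when the spec changes.
data_looped = [0] * N
for i in range(N):
    data_looped[i] += 1
```

Both produce the same result today; the difference only shows up on the day the operation has to change.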


Obviously if the tech does not understand a loop, then that is a problem. And there is still the possible false dichotomy issue where loops may be better implemented in ST, which is debatable**.


OTOH, a manual edit to a few dozen rungs is also going to be a nightmare: difficult to do right; difficult to proof for typos. One time I came up against this with my brother: I asked him if there was an ASCII format, he gave me a .L5X file (XML), and I wrote a script to make the change in about the same time he could have made it by hand, and I would trust the scripted solution far more.
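As a sketch of that kind of scripted edit: the tag name `MyInts` and the rung text below are invented, and a real .L5X file would be edited through its XML structure rather than a flat list of strings, but the principle of one pattern replacing dozens of hand edits is the same.

```python
import re

# Hypothetical rung text as it might appear in an export: two dozen
# near-identical rungs, each adding 1 to one element of a made-up tag MyInts.
rungs = [f"ADD(MyInts[{i}],1,MyInts[{i}]);" for i in range(24)]

# One regex edit changes all 24 rungs from "add 1" to "subtract 3"; the same
# change by hand is 24 separate chances for a typo.
pattern = re.compile(r"ADD\((MyInts\[\d+\]),1,(MyInts\[\d+\])\);")
edited = [pattern.sub(r"SUB(\1,3,\2);", rung) for rung in rungs]
```

The scripted change is also self-proofing: if the pattern fails to match a rung, that rung comes through unmodified and is easy to spot.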

I guess my point is, even simple can be context-dependent.


Maybe a better question is: is there anyone here who would never write a loop, even at several hundred rungs? And on the other side, is there anyone here who goes to a loop early, e.g. as soon as it has fewer total instructions and/or uses the same screen real estate?



* N is large-ish, e.g. dozens


** a loop is a loop, and the prospect of having the program split across multiple programming environments would push me away from the loop if that was the choice, for the benefit of the people who have to look at it downstream.
 
I would never use a loop in a PLC. I might even take it a step further and put some work into "evening out" my code so I have a stable scan time (this might be OCD-related on my part). I try to avoid a single bit opening up 80 rungs and tilting the scan time. Is this a problem on today's PLCs? Probably not, but it is probably something I brought over from the SLCs.

At the same time, being lazy is a good quality to have as a programmer, and I always look for the simplest solution to solve my task, but without loops :)
 
I would never use a loop in a PLC,


The PLC program is a loop... you can have loop functionality without a loop instruction by simply incrementing the pointer in the main loop. I've used this approach several times for things that don't need to be checked that fast all the time (like which Profibus nodes are up on the bus).
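A minimal Python sketch of that pointer-increment pattern, with the PLC scan emulated by repeated calls to `scan()` (class and function names are mine, not from any PLC library):

```python
class NodePoller:
    """One-node-per-scan polling: loop functionality without a loop
    instruction, by incrementing a pointer each pass of the main loop."""

    def __init__(self, node_ids):
        self.node_ids = node_ids
        self.index = 0

    def scan(self, check_node):
        # Called once per PLC scan: check a single node, then advance the
        # pointer. Scan time stays flat; full coverage takes len(node_ids)
        # scans, which is fine for slow-changing status like bus health.
        node = self.node_ids[self.index]
        status = check_node(node)
        self.index = (self.index + 1) % len(self.node_ids)
        return node, status
```

Because exactly one node is checked per scan, the per-scan cost is constant, unlike an in-scan FOR loop whose cost depends on how many iterations run.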



This being said, wouldn't the scan time vary if the loop exited early rather than always running the same number of iterations?
 
The PLC program is a loop... you can have loop functionality without a loop instruction by simply incrementing the pointer in the main loop. I've used this approach several times for things that don't need to be checked that fast all the time (like which Profibus nodes are up on the bus).



This being said, wouldn't the scan time vary if the loop exited early rather than always running the same number of iterations?

I think you know what I mean when I say I never use a loop in a PLC, but just to clarify: I would never jump back in the code in a PLC.

As for using a pointer: that has nothing to do with looping in your code, and it is the only way to go (in my opinion).

I prefer a longer, more even scan time over fast-and-slow, but then again, do I ever have scan-time problems in the newer PLCs? Never. So this is probably just an old habit.
 
If speed of the code or avoiding over-long scan times is an issue, then of course that overrides.


It's interesting that Holmux said they "would never jump back in the code of a PLC program;" while I suspect that's a case of "Fool me once..." (Cmdr. Montgomery Scott, ST:TOS), it's not an unreasonable position.



And that is what I am looking for, i.e. when, or more importantly why, someone says (or never says), "I am not going to copy-paste-and-edit 200 rungs, and spend the next week finding typos, when I could have the PLC do the work for me." I'm comfortable with looping, or maybe I just haven't been burned badly enough, and I know the performance implications (backed up by RonB's experiment earlier in this thread), so I suspect I have a lower threshold/tolerance for silliness.



Holmux and cardoscea also bring up a good point about splitting or staggering work across multiple scans as an alternative; I know my brother does this with PIDs. And the other day there was someone who inherited a non-working program reading serial data from a weight scale: a bunch of us worked with him to get the baud rate and other parameters set, and noticed the code used an ARD (A-B read N characters) reading 11 characters at a time even though there was CRLF line termination and a constant 17 characters per "line." We left it as it was, because the 11-character reads were going to line up with the target data about twice a second, which was good enough, and leaving it was easier than walking someone through switching from an ARD to an ARL (A-B read with line termination). Not exactly the same thing, but along the same lines of time being fungible.
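For the curious, the reason the fixed 11-character reads keep coming back into alignment with the 17-character lines is that gcd(11, 17) = 1, so the read boundary cycles through every possible offset; a quick Python check of that claim:

```python
from math import gcd

LINE, READ = 17, 11  # characters per scale "line" vs. characters per ARD read

# Read k starts at character k * READ; it begins exactly on a line boundary
# when k * READ is a multiple of LINE.
aligned = [k for k in range(100) if (k * READ) % LINE == 0]

# Because gcd(11, 17) == 1, the read boundary walks through every offset
# and re-aligns with a line start once every 17 reads.
assert gcd(READ, LINE) == 1
```

How often "every 17 reads" happens in wall-clock time depends on the baud rate and how often the scale transmits, which is where the "about twice a second" in the story came from.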
 
My extra minutes (or even hours) duplicating rungs and making edits is less important than the potential for excessive downtime because I decided to get cute. I occasionally use indirection and looping for data handling stuff, but almost never for machinery control.

I like to be able to "Find All" or "Cross reference" an address and see distinct results that show all the stuff that affects those addresses.

I like to be able to customize just one or a few of the items that might otherwise be identical. Maybe all the pump stations are identical on paper, but this one sub-system out of a dozen got modified to accommodate a real life situation in which it makes more sense to adjust the associated code/addressing than to force standardized code to work.

Those are two reasons to avoid some of the advanced programming techniques that might save me a lot of time at development.

In my current job, my customers are never going to have to troubleshoot my logic, but someday I will, or one of my colleagues will, and having straightforward and understandable code is more helpful than saving a little development time.
 
I like to be able to customize ...



stations are identical on paper, but this one sub-system out of a dozen...


I have been thinking about this aspect as well. A related scenario, which I suspect is not too uncommon, is that one input channel in the middle of a hundred gets fried, but there is a spare/unused channel out on another card and the tech can re-land the wires in about two minutes during the next PM. However, that breaks the loop sequence, and now, instead of blessing my conciseness, the plant engineer is (or I am) cursing my lack of foresight.


Meh, I guess the boring coding is what interns were created for.
 
I have been thinking about this aspect as well. A related scenario, which I suspect is not too uncommon, is that one input channel in the middle of a hundred gets fried, but there is a spare/unused channel out on another card and the tech can re-land the wires in about two minutes during the next PM. However, that breaks the loop sequence, and now, instead of blessing my conciseness, the plant engineer is (or I am) cursing my lack of foresight.


Meh, I guess the boring coding is what interns were created for.

That is a scenario that is relatively common, and a method I use (one which some folks don't like) is input and output mapping. This adds extra code and adds a step for the person reading the logic, who must first find the mapped tag and then cross-reference it. But there are several advantages to mapping the I/O, and the ability to easily and quickly change a point in the map to invert a switch contact or choose a different analog source is one of them.

Mapping the I/O also lends itself well to code encapsulation and bench testing, as well as isolating HMI indicators from the real-world I/O addressing. It also lets you write code for a machine before you know how it is going to be wired.
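A hedged Python analogy of that mapping layer (the tag names and slot/channel numbers are invented), showing how the fried-channel scenario becomes a one-line map edit:

```python
# Hypothetical I/O map: logical tag -> physical (slot, channel).
io_map = {
    "PumpRun": (4, 0),
    "TankHigh": (4, 1),
    "DoorClosed": (4, 2),
}

def read_inputs(read_channel):
    # Resolve every logical tag through the map once per scan; the rest of
    # the program only ever sees tag names, never slot/channel numbers.
    return {tag: read_channel(slot, ch) for tag, (slot, ch) in io_map.items()}

# Channel (4, 1) fries; the tech re-lands the wire on spare channel (6, 7).
# One map entry changes; nothing that uses "TankHigh" is touched.
io_map["TankHigh"] = (6, 7)
```

The design choice is the usual indirection trade-off: one extra lookup hop for the reader, in exchange for a single place to absorb wiring changes.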

When I map discrete points, I always use XIC ---> OTE as opposed to COP or MOV instructions. Sure, it takes more rungs, but the ability to change one bit in the middle of a group, and, when monitoring online, to find all those addresses as individual bit references, is more important in my opinion. It also executes faster in most PLCs.

But if I have 7 identical sub-systems...I should reword that as "allegedly identical" each will get its own logic routine. In the majority of those situations, in reality, there is some oddity about one or more of those sub-systems that causes me to end up altering its programming for a better end result.

On the side of the argument against elegance:

I have an example in mind of a program my co-worker had to deal with on a UV disinfecting machine from Europe a couple of weeks ago. Two of the lamp wipers would not operate. The PLC was ControlLogix v15. The programmer used UDTs nested inside arrays of UDTs, looping and indirection, and apparently this was done before AOI capability, so they used subroutines with input and output parameters. We studied the code for several hours and discovered a fault bit was set, but we were not able to decipher what was causing the fault (I was looking at his laptop via remote access).

Finally, after making sure the machine was in a safe state, I toggled some bits inside the UDT labelled with the word "reset" and one of the wipers was functional again. The other one would not clear...It is frustrating not to be able to monitor the ladder and cross reference right to the source...

I finally had to move on to other work and told him, just go scour the HMI and maybe they have a page somewhere that will identify the problem...ask the plant operators if they have some books...it shouldn't have to be that hard. I told him we could spend a day or two and "unroll" that cute code into "normal logic" without affecting how it worked, but he was not being paid for that...he was called there just to fix a couple of faulted wipers...
 
I have been thinking about this aspect as well. A related scenario, which I suspect is not too uncommon, is that one input channel in the middle of a hundred gets fried, but there is a spare/unused channel out on another card and the tech can re-land the wires in about two minutes during the next PM. However, that breaks the loop sequence, and now, instead of blessing my conciseness, the plant engineer is (or I am) cursing my lack of foresight.


Happens a couple of times a year for me with Siemens PLCs: add a single rung to copy the value from the new input to the fried input, and mark up the schematics accordingly.
 
