Basic PLC question that challenges the best:

jhenson29 said: This is what I was talking about with runtime memory allocation though:
https://infosys.beckhoff.com/english.php?content=../content/1033/tc3_plc_intro/2529171083.html&id=




Thanks. I found more information, quite interesting.
 
My 2 cents on all this rambling.

I agree that the implementation is important. Years ago I was bad-mouthing Siemens S7 for its lack of indirect addressing in LD. I also MUCH preferred typing in rungs of ladder as text in RSLogix.


None of the PLCs have very good debugging tools.
Rockwell does have trends but they aren't used enough. I am not sure if the trends can catch data on every scan.



Most PLC IDEs are awful and require too much switching between the mouse and keyboard. This is a complaint I have with our RMCTools but these things get fixed if I ***** enough.



Recursion is cool, but I don't see where it is necessary in automation programming. The problem with recursion is that it is possible to have a stack crash (a stack overflow). That isn't good.
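A tiny illustration of the point, in plain Python rather than any PLC language: the recursive version fails at runtime once the input outgrows the call stack, while an iterative rewrite of the same computation cannot.

```python
# Recursive sum of 1..n: each call adds a stack frame, so a deep
# enough n raises RecursionError here. Python catches it safely;
# a PLC runtime might simply fault the controller instead.
def sum_recursive(n):
    return 0 if n == 0 else n + sum_recursive(n - 1)

# Iterative equivalent: constant stack depth, no surprise failures.
def sum_iterative(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

print(sum_iterative(100000))  # 5000050000
try:
    sum_recursive(100000)
except RecursionError:
    print("recursion depth exceeded")
```

A bounded loop like this also fits scan-based execution much better, since its worst-case time and memory are known in advance.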


Dynamic memory can be implemented in a limited way. The problem is that "garbage collection" may take too long during real-time execution. The solution is to force the program to clean up memory before the next change if there is no room for new functions or function blocks. We use dynamic allocation, but mostly during initial startup while configuring the different types of actuators. After that, things are pretty much fixed.
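A rough sketch of that allocate-up-front-then-freeze approach, in plain Python (`ActuatorBlock` and `FixedPool` are invented names for illustration, not any vendor API): all memory is allocated once at configuration time, and acquiring or releasing a block afterwards only flips flags, so there is nothing for a collector to do mid-cycle.

```python
class ActuatorBlock:
    """Stand-in for a function-block instance (hypothetical)."""
    def __init__(self):
        self.in_use = False
        self.config = None

class FixedPool:
    """All blocks allocated once at startup; acquire/release never
    allocates or frees, so runtime behavior is deterministic."""
    def __init__(self, size):
        self._blocks = [ActuatorBlock() for _ in range(size)]

    def acquire(self, config):
        for b in self._blocks:
            if not b.in_use:
                b.in_use = True
                b.config = config
                return b
        return None  # deterministic failure: pool exhausted

    def release(self, block):
        block.in_use = False
        block.config = None

pool = FixedPool(2)
a = pool.acquire("cylinder")
b = pool.acquire("valve")
print(pool.acquire("motor"))            # None - pool exhausted
pool.release(a)
print(pool.acquire("motor") is not None)  # True after release
```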


Good programmers should write programs that take all the error conditions into account. 40 years ago a senior programmer told me the difference between a good and a great machine control was how fast it could recover from error conditions. I was programming for sawmills at the time, and sometimes the wood would get jammed. This caused downtime. It was usually only a small percentage, but it was still a costly percentage. Being able to back a queue up, then restart it going forward again, was a must. I have never heard of anybody on this forum backing up a FIFO or queue to clear a jam.
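The back-up-a-queue idea can be sketched with a double-ended queue. This is a hypothetical illustration in Python, not the sawmill code: the most recently discharged item can be pushed back in so the line can reverse a step to clear a jam, then run forward again.

```python
from collections import deque

class ReversibleFIFO:
    """FIFO tracking items between infeed and outfeed.
    back_up() pushes the last discharged item back onto the
    outfeed end so the line can reverse to clear a jam."""
    def __init__(self):
        self._queue = deque()   # items currently in the machine
        self._discharged = []   # history of items that left

    def infeed(self, item):
        self._queue.append(item)

    def outfeed(self):
        item = self._queue.popleft()
        self._discharged.append(item)
        return item

    def back_up(self):
        # Reverse one step: the most recently discharged item
        # re-enters at the outfeed end of the queue.
        self._queue.appendleft(self._discharged.pop())

q = ReversibleFIFO()
for board in ["A", "B", "C"]:
    q.infeed(board)
q.outfeed()          # "A" leaves the machine
q.back_up()          # jam! back "A" up into the machine
print(q.outfeed())   # "A" again, going forward after the jam clears
```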
 
...
Don’t take this the wrong way. I’m not bashing electricians. I’m bashing the idea that programs have to be written in ways for non-programmers to read and use. Sorry. Not interested.

This depends on the target market for the machine. The degree to which the program should be readable differs; for most users it would be enough to see in LAD which sensor it is waiting for, but in that regard, an HMI is a lot cheaper and better now than it was in 2003.

But in general, that kind of readability has a lot less value nowadays than before. A lot less.

But if the company culture is to buy cheap equipment and troubleshoot it yourself with programming tools, you won't be changing that one so quickly.
 
FBD is graphical. You have to read LEQ and ADD in LAD just the same. You still have to read variable names in either. They’re both graphical. No sense in assigning values to it.
You are right. I have to adjust my statement: LAD is 80% graphical, as opposed to FBD, which is 40% graphical.
Btw, what is the logic behind a negation in FBD being shown with a little circle?
Ladder tends to get messy Quick, [..]
That is what I feel about FBD.
Will probably get punished for this, but if you have a hard time grasping AND, OR, NOT and the states TRUE and FALSE, and need virtual contacts & coils instead, you should probably stick to your screwdrivers & multimeters.
The problem is interpreting the logic result as the status view changes rapidly before your eyes, while the customer is breathing down your neck. Ladder is much faster for this than FBD, and much, much better than ST. ST has other advantages, though.
Years ago I was bad-mouthing Siemens S7 for its lack of indirect addressing in LD.
[..]
None of the PLCs have very good debugging tools.
Rockwell does have trends but they aren't used enough. I am not sure if the trends can catch data on every scan.
Just to update you on this: the current programming software, TIA, does indirect addressing in LD, and can also do traces that catch the changes on each scan. Very useful; I use it frequently.
Good programmers should write programs that take into account all the error conditions. 40 years ago a senior programmer told me the difference between a good and great machine control was how fast it could recover from error conditions.
Amen.
 
You are right. I have to adjust my statement: LAD is 80% graphical, as opposed to FBD, which is 40% graphical.
Btw, what is the logic behind a negation in FBD being shown with a little circle?
..

It (the invert dot/circle) comes from electronics; as an example, the symbols for the NAND and AND gates are the same except for the invert circle.

[Images: NAND gate symbol and AND gate symbol]
 
It (the invert dot/circle) comes from electronics; as an example, the symbols for the NAND and AND gates are the same except for the invert circle.
Ok, I should rephrase that. I knew the background; I was taught NAND and NOR logic way back when that was a thing.
I meant that (even knowing the historical background) there is nothing logical or intuitive about a little circle inverting the logical status of a signal.

Ladder is graphically logical when it comes to negation.
When not activated, an N.O. contact does not make a connection:
--| |--
Whereas an N.C. contact makes a connection via the "/" that bridges the gap:
--|/|--
 
I'm surprised that people using some of the later graphical representations instead of ladder seem unaware that FBD is based on the logic symbol libraries. I cut my teeth on them over 50 years ago, so it was easy for me. I must admit it took me a little time to understand that in the Mitsubishi IDE, double-clicking on the terminal of an AND function could change it to a NOT.
Once you know it, it is not too difficult to think that even if an input to a NOT is true it means false; however, ladder is still a little easier on the brain.
Also, I like the way Siemens automates the connecting lines rather than just the ladder symbols; some IDEs do not.
 
Ok, I should rephrase that. I knew the background; I was taught NAND and NOR logic way back when that was a thing.
I meant that (even knowing the historical background) there is nothing logical or intuitive about a little circle inverting the logical status of a signal.

Ladder is graphically logical when it comes to negation.
When not activated, an N.O. contact does not make a connection:
--| |--
Whereas an N.C. contact makes a connection via the "/" that bridges the gap:
--|/|--

But FBD is basically the PLC version of electronics design, while LAD is the PLC version of electrical design. In that sense it makes total sense. For a European, honestly, the ladder symbol does not make any more sense than the FBD symbol. For an American... it's 1:1 with the electrical symbol.

I do believe the basic NAND is still, in many cases, the fundamental building block for any logic in microelectronics.
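For the curious, NAND's role as a universal gate is easy to check mechanically. A small Python sketch, purely illustrative:

```python
def nand(a, b):
    return not (a and b)

# Every other basic gate can be composed from NAND alone:
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

# Verify against the built-in operators over the full truth table.
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
print("NAND reproduces NOT, AND, and OR")
```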
 
The problem with recursion is that it is possible to have a stack crash.


The *real* issue with recursion is, in my humble opinion, mostly situated in the recursion itself.

Sorry, I just couldn't resist. Enjoying the discussion here very much though.
 
I've been studying MBSE for a few months, and from my learning I have one question regarding your statement: what drives your system design, including programming? How do you validate the project?

This is actually a hard question to answer. In general, I’m driven by good software engineering practices. The key thing is to manage complexity. I tried writing more general information here, but none of it seemed really helpful or informative. “You need to keep high cohesion and low coupling...” blah blah blah

I’ll just give an example.

I work on processing lines made up of many individual machines. One thing I do here is create a framework to handle the machines interfacing with each other. Each machine plugs into the framework which acts as its interface to the processing line. It can write data into the framework and read data out of the framework. Any one machine does not deal directly with other machines.

So, as an example, we’ll look at a simplified version of the processing line speed reference. Each machine writes the maximum speed it can run into the framework (maybe it’s limited by motor size, or processing requirements, or the current state of the machine). The framework goes through all of the speeds, finds the lowest, and writes it back out to all of the machines. The machines receive this speed and limit themselves appropriately.

The code to loop through the line speeds is very simple. Find the lowest value.
The code to set a speed limit is very simple. Write the max speed.
The code to use the max line speed is also very simple.

Each individual piece is very simple and hard to mess up. It’s also easy to test each individual piece. Machines can be taken out, added, or replaced very easily without worrying about how it affects the system.
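A minimal sketch of that speed-reference idea in Python (class and method names are invented for illustration, not from any real framework): each machine writes its limit into the framework, the framework reduces the set to the lowest value, and every machine reads that back.

```python
class LineFramework:
    """Hypothetical interface described above: machines write their
    own max speed in; the framework publishes the lowest back out."""
    def __init__(self):
        self._max_speeds = {}
        self.line_speed = 0.0

    def write_max_speed(self, machine_id, speed):
        # A machine only ever touches its own entry.
        self._max_speeds[machine_id] = speed

    def update(self):
        # The whole "loop through the speeds" step: find the lowest.
        if self._max_speeds:
            self.line_speed = min(self._max_speeds.values())

fw = LineFramework()
fw.write_max_speed("unwinder", 120.0)
fw.write_max_speed("coater", 95.0)   # currently limited by its state
fw.write_max_speed("rewinder", 150.0)
fw.update()
print(fw.line_speed)  # 95.0 - every machine limits itself to this
```

Adding or removing a machine is just adding or removing a dictionary entry; no machine's code references any other machine, which is the decoupling the example is about.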

Usually, when I walk new programmers through creating programs this way, they’re amazed at the end because the end result is a complex system but it was made from all of these simple pieces.

The key idea here is that the complexity is not a function of the number of machines. And here I mean the complexity of writing and managing the code (space and time complexities here are linear). The code pieces are quite simple and you don’t have to hold very much in your head at any one time.

If you're working on a particular machine program, you just worry about that machine, not the line as a whole.

If you’re working on the framework, you don’t worry about specific machines. You’re dealing with abstract interfaces.

And now we have a system that’s better for a programmer to manage and change. But an electrician cannot look at it to see what’s going on. So, then you have to start dealing with how to get this information onto the HMI. Topic for another time.

Validation also becomes easier because the scope of your change is more limited. This is really, really important. The program has to be written in a way that limits the scope of your changes.

This example is somewhat contrived, but it gets the idea across. The next steps are to break the machine out and manage that complexity. For example, you could have functions and sub-functions within the machine, all writing to their own framework interfaces, and use reducers to collapse the data for top-level use (...kind of sounds like a recursive data structure...). You can write domain-specific code and swap out peripheral implementations (electric motor vs. hydraulic motor, linear transducer vs. encoder, etc.) and on and on.
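The peripheral-swapping idea can be sketched with an abstract interface. Again, plain Python with invented names; the hydraulic "slip" factor is a stand-in for whatever the real implementation would model.

```python
from abc import ABC, abstractmethod

class Motor(ABC):
    """Domain-level interface: machine code talks to this,
    never to a specific drive technology."""
    @abstractmethod
    def set_speed(self, rpm): ...
    @abstractmethod
    def actual_speed(self): ...

class ElectricMotor(Motor):
    def __init__(self):
        self._rpm = 0.0
    def set_speed(self, rpm):
        self._rpm = rpm          # e.g. a VFD speed reference
    def actual_speed(self):
        return self._rpm

class HydraulicMotor(Motor):
    def __init__(self):
        self._rpm = 0.0
    def set_speed(self, rpm):
        self._rpm = rpm * 0.98   # hypothetical modeled valve slip
    def actual_speed(self):
        return self._rpm

def run_section(motor, target):
    # Domain code is identical regardless of the implementation.
    motor.set_speed(target)
    return motor.actual_speed()

print(run_section(ElectricMotor(), 100.0))   # 100.0
print(run_section(HydraulicMotor(), 100.0))  # ~98.0
```

Swapping the drive technology changes one constructor call, not the domain code, which is the point of keeping the peripheral behind the interface.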

Safety related programming for safety functions is a little different. It’s more restrictive and has higher requirements of validation. But even here, I still use good practices and make abstractions to keep things as clean as possible.
 
This depends on the target market for the machine. The degree to which the program should be readable differs; for most users it would be enough to see in LAD which sensor it is waiting for, but in that regard, an HMI is a lot cheaper and better now than it was in 2003.

But in general, that kind of readability has a lot less value nowadays than before. A lot less.

But if the company culture is to buy cheap equipment and troubleshoot it yourself with programming tools, you won't be changing that one so quickly.

Programs should always be readable. The question is...by who?

Limiting programming to a level that can be understood by non-programmers is insane. It’s like asking someone to make a building, but only use legos because that’s all you know how to work on. Do you want a bad building that you’ll know how to work on or a building that will be built reasonably well, but will require additional expertise to modify and maintain?

In my experience, if a company wants to buy cheap equipment, they usually don’t pay for qualified people to work on it either.
 
Thank you, jhenson29, for elaborating. It sounds very much like how I approach complex programming jobs: break them down into smaller blocks until each is small enough to have a simple, easy-to-understand interface and a straightforward implementation. The skill (or art, if you will) of the designer/developer is in the ability to make the cuts into smaller bits at the right places. With some contemplation, the correct places for separation into smaller pieces present themselves quite naturally.



If I build many variations of some machine (happens to me all the time), a good measure of having made the right cuts is whether individual "blocks" remain relatively stable over a longer time. Parts that get changed over and over again are either not well thought out or too big. I find the latter quite common in some code that I get to read. Some coworkers, on the other hand, tend to think I go overboard with breaking things down into bits.
 
Programs should always be readable. The question is...by who?

Limiting programming to a level that can be understood by non-programmers is insane. It’s like asking someone to make a building, but only use legos because that’s all you know how to work on. Do you want a bad building that you’ll know how to work on or a building that will be built reasonably well, but will require additional expertise to modify and maintain?

In my experience, if a company wants to buy cheap equipment, they usually don’t pay for qualified people to work on it either.

I don't think you get it... It has nothing to do with how cheap it is, nothing to do with the quality of work, the workmanship, or the equipment and material used.

Most end users do not want to hire an engineer, nor can they afford one; they need to have Bubba fix the simple ****. They are not asking Bubba to make program changes and build machines, but to have the ability to look at the code and say, "OK, it's PExx5 that is not turning on." Most companies will pay more to have a program written in some form that Bubba can read... or your HMI needs to have a popup saying PExx5 is not working. In my opinion either is possible, but not always the case. If I have a customer that I am making a machine for, I have to look at their needs, NOT mine. They are paying me, and it's their code they are buying.
 
I don't think you get it... It has nothing to do with how cheap it is, nothing to do with the quality of work, the workmanship, or the equipment and material used.

Most end users do not want to hire an engineer, nor can they afford one; they need to have Bubba fix the simple ****. They are not asking Bubba to make program changes and build machines, but to have the ability to look at the code and say, "OK, it's PExx5 that is not turning on." Most companies will pay more to have a program written in some form that Bubba can read... or your HMI needs to have a popup saying PExx5 is not working. In my opinion either is possible, but not always the case. If I have a customer that I am making a machine for, I have to look at their needs, NOT mine. They are paying me, and it's their code they are buying.

As complexity increases, a failure to manage it properly will result in a system that's less likely to be functional and more fragile to changes. And while some systems go long periods of time without changes, I would never bank on requirements not changing. Code should always be written with the expectation that it will change.

If the machine is soooo simple that this can be done, and someone who doesn't know anything about programming can look at the code to figure it out, AND it's not fragile... then fine. But we're just back to talking about trivial examples.

My point is that managing complexity to provide as stable and robust a system as possible negates Bubba's ability to use the code to troubleshoot the problem.

So, you either give them a fragile system that probably has undiscovered bugs and will need to be worked on and troubleshot frequently, or you give them a stable and robust program that can't be used by a non-programmer for troubleshooting (but you push the troubleshooting to the HMI).

It’s not 100% one extreme or the other as I describe. But those are the opposing forces and I strongly come down on the latter.

[edit: I just wanted to add, you seem to imply you can have it both ways, and I'm saying... no, you can't.]
 
Just to update you on this: the current programming software, TIA, does indirect addressing in LD, and can also do traces that catch the changes on each scan. Very useful; I use it frequently.
Amen.
Yes, I am aware. We have S7-1500s we use for compatibility testing and supporting S7-1200 and S7-1500 customers.
Fixing the indirect addressing problem and creating ProfiNet were two HUGE advances for Siemens. Before these two improvements, the amount of support Siemens customers required was much higher than for any other PLC. We had to write Profibus and Ethernet blocks for the customers.
 
