I am a mechanical engineer with a mechatronics background, learning to program Siemens S7-300/400 series PLCs from a co-worker. He was trained here in Germany as an electrician/programmer and spent around five years in industry programming PLCs, including a Daimler Chrysler assembly line and, I believe, a Volkswagen line as well. I come from a different background, having programmed microcontrollers in C and some assembly. Because of his training he follows a very specific procedure when programming, and when I ask why things are done a certain way he sometimes has difficulty explaining. Sometimes he can explain the reasoning behind his methodology; other times it's "that's just the way I learned it". While my questions may be somewhat basic, I was hoping some of you with experience could fill in the "why" gaps and give me some insight.
1) Is it better to structure your program so that every rung of the entire program is evaluated every scan cycle, or to skip certain portions (with jumps) when they aren't needed? For example, say you have two machine modes, Manual and Automatic. Should the code always run through every network, or is it OK to include a condition like "if in Automatic Mode, jump over the Manual code"?
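Coming from C, here is roughly how I picture the two options (the names and bits are mine, just to illustrate the question). As I understand it, the crux is that logic inside a skipped section is simply not evaluated, so its bits keep whatever state they last had:

```c
#include <stdbool.h>

/* Hypothetical mode and output bits, stand-ins for PLC memory. */
bool auto_mode;
bool manual_output;
bool auto_output;

/* Variant A: every "network" is evaluated every cycle; the mode bit
   is just another condition inside each network's logic. */
void scan_all(void) {
    auto_output   = auto_mode;   /* auto networks include the mode bit */
    manual_output = !auto_mode;  /* manual networks likewise */
}

/* Variant B: whole sections are skipped, like a jump over the manual
   networks while the machine is in Automatic. */
void scan_with_jump(void) {
    if (auto_mode) {
        auto_output = true;      /* auto networks run */
    } else {
        manual_output = true;    /* manual networks run */
    }
    /* whichever branch was skipped leaves its bits frozen at their
       last written state */
}
```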
2) He says that OB1 (Organization Block 1, Siemens S7-300) should contain only function calls: no logic, and no conditions around those calls. Is this reasonable, and why?
3) All of his variables live in data blocks. All inputs from the machine are copied to data block bits in a separate FC called at the beginning of the program, and all outputs are likewise associated with data block bits, which are then assigned to the physical outputs in an FC called at the end of the program. In other words, all program logic operates on data block bits instead of on the actual inputs/outputs from the process image table. He says this is done in case a sensor's or actuator's physical location changes, so that you only need to change one bit in the assignment FC instead of everywhere you used, say, I1.0. This makes sense to me; any comments?
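To make sure I understand the pattern, here is a C-style sketch of what I think he is doing. The arrays, struct, and names are all hypothetical stand-ins for the process image and a data block:

```c
#include <stdbool.h>

/* Illustrative stand-ins for the process image and a data block. */
bool I[16];          /* physical inputs, e.g. I1.0 maps to I[8] here */
bool Q[16];          /* physical outputs */

struct MachineDB {   /* "DB" holding symbolic copies */
    bool part_present;
    bool clamp_closed;
    bool clamp_cmd;
};
struct MachineDB db;

/* FC at the start of the scan: copy inputs into the data block.
   This is the only place the physical address appears. */
void map_inputs(void) {
    db.part_present = I[8];
    db.clamp_closed = I[9];
}

/* All process logic works on db.* only, never on I/Q directly. */
void process(void) {
    db.clamp_cmd = db.part_present && !db.clamp_closed;
}

/* FC at the end of the scan: copy data block bits to the outputs. */
void map_outputs(void) {
    Q[0] = db.clamp_cmd;
}
```

If a sensor gets rewired, only `map_inputs` changes; `process` is untouched.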
4) For every machine mode, he has a separate bit for each actuator. For example, if there are 3 machine modes, there are 3 command bits for the same actuator. The idea is that a manual-mode network executing later in the scan cannot overwrite a bit that the auto-mode code already set, even though every network runs every cycle. Creating the data blocks required for this takes FOREVER in my experience when you have a lot of I/O. Is this common practice?
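Again in C terms, here is how I understand his per-mode-bit scheme, with a final network selecting which command actually drives the actuator (all names are mine, for illustration only):

```c
#include <stdbool.h>

bool auto_mode;

/* Hypothetical DB: one command bit per mode for the same clamp,
   plus the bit that actually drives the output. */
struct ClampDB {
    bool cmd_auto;
    bool cmd_manual;
    bool cmd_out;
};
struct ClampDB clamp;

/* Each mode's logic writes only its own bit, so all the networks can
   execute every scan without overwriting each other. */
void auto_logic(bool step_active)  { clamp.cmd_auto = step_active; }
void manual_logic(bool pushbutton) { clamp.cmd_manual = pushbutton; }

/* A final network selects which command reaches the actuator. */
void select_output(void) {
    clamp.cmd_out = auto_mode ? clamp.cmd_auto : clamp.cmd_manual;
}
```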
If it were me, as a simple example, I would program an FC that checks the machine mode, call it in OB1, and then use logic conditions further down in OB1 that call the mode-specific code only when the mode switch is in that mode. He frowns on this approach but can't really give me a good reason why, except "that's not how it's done in industry". In my opinion, my code organization is far easier to read and follow. Regardless, he feels his method is easier for maintenance personnel to troubleshoot or modify later on. Can anyone comment on these things?
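For clarity, this is a C-style sketch of my proposed structure (the mode switch input and FC names are hypothetical):

```c
#include <stdbool.h>

typedef enum { MODE_MANUAL, MODE_AUTO } Mode;

/* FC that reads the (hypothetical) mode switch. */
Mode check_mode(bool auto_switch) {
    return auto_switch ? MODE_AUTO : MODE_MANUAL;
}

/* Stand-ins for the mode-specific FCs. */
bool auto_ran, manual_ran;
void auto_code(void)   { auto_ran = true; }
void manual_code(void) { manual_ran = true; }

/* My OB1 idea: call the mode-check FC first, then conditionally
   call the mode-specific code based on the result. */
void ob1(bool auto_switch) {
    Mode mode = check_mode(auto_switch);
    if (mode == MODE_AUTO)   auto_code();
    if (mode == MODE_MANUAL) manual_code();
}
```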
Sorry for the long post, any comments would be appreciated.