calibrating the oxygen sensor using calculation

MaGoOoDy

This is one of the assignments I have been given in my training course. I really don't understand how to do the calculation it requires (the instructor provided the formula himself so that we could concentrate on the programming).
------------------------------------------------

This system is at the bottom of a coal mine, and it’s measuring the concentration of O2 in the air. The O2 sensor degrades over time and requires calibration by comparing its readings to
known values. Our sensor will read from 0-40%.
We have calibration gases which are exactly 0% and 30% O2.

Our machine will have two cycles: sampling and calibration. When it’s sampling, it just measures the O2 concentration of the air passing by the sensor.

When we go into a calibration cycle, it needs to open the 0% gas valve and sample it for 30 seconds.
Next it will close the 0% and open the 30% and sample that for 30 seconds. Finally, it will use the average readings it took over those two periods and use them to “tune” its own scaling parameters.

CALIBRATION CALCULATIONS:
Input Min = O2_Zero_Average
Input Max = ((O2_Maximum_Concentration / O2_Calibration_Gas_Concentration) * (O2_Test_Gas_Average - O2_Zero_Average)) + O2_Zero_Average

O2_Maximum_Concentration = 40 (%)
O2_Calibration_Gas_Concentration = 30 (%)
O2_Test_Gas_Average = average reading sampled during the 30% gas period
O2_Zero_Average = average reading sampled during the 0% gas period
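
To make the arithmetic concrete, here is a rough Python sketch of the formula above. The variable names mirror the assignment; the sample averages are made-up raw counts purely for illustration, not real data.

```python
# Sketch of the assignment's calibration math (illustrative values only).

O2_MAXIMUM_CONCENTRATION = 40.0          # sensor full scale, %
O2_CALIBRATION_GAS_CONCENTRATION = 30.0  # span calibration gas, %

def calibration_limits(o2_zero_average, o2_test_gas_average):
    """Return (input_min, input_max) for the scaling instruction.

    o2_zero_average     - average raw reading while sampling the 0% gas
    o2_test_gas_average - average raw reading while sampling the 30% gas
    """
    input_min = o2_zero_average
    input_max = (O2_MAXIMUM_CONCENTRATION / O2_CALIBRATION_GAS_CONCENTRATION) \
                * (o2_test_gas_average - o2_zero_average) + o2_zero_average
    return input_min, input_max

# Example with made-up raw averages (0-16383 analog range):
in_min, in_max = calibration_limits(120.0, 12400.0)
print(in_min, in_max)   # in_max is the raw count that would correspond to 40% O2
```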



my inputs as of now :
N7:0 - O2 sensor input signal
B3:0/0 - Calibrate button
O:0/0 - 0% gas valve (energize open)
O:0/1 - 30% gas valve (energize open)
N7:1 - Measured O2 concentration
N7:2 - O2 input min (for SCP instruction, default value = 0)
N7:3 - O2 input max (for SCP instruction, default value = 16383)


* I am using the sensor reading in N7:0 instead of an analog channel, purely for the purpose of simulation (I don't have the device at the moment).


My questions are:
- What I have done so far: I have figured out that the sampling could be done using a FIFO instruction and a 30-second timer to calculate the average, but how will I tune the SCP parameters?
- Is programming the only way to calibrate sensors in a PLC?
- As far as I know there are two types of sensors, 2-wire and 4-wire; is this applicable to both?
 
As a starting point I suggest you work on paper and ignore the specific details of the code.

Write out the sequence of operations i.e.
1) Self calibrate selected
2) Sample 0%
3) Sample 30%
4) etc.

Then write out more detailed sequences for each step. Eventually you will get to the stage where you can write code directly from the paper notes. Then it's just a matter of using Google and the help files to work out what the instruction is called. This way you separate thinking about solving the problem from thinking about what instruction you need.

A side bonus is that even if you fail to complete the assignment, you will still get pretty good credit for having a working outline of what you want to do.

If you have a detailed explanation of what you are trying to do, someone here can usually point out the instructions that you need to be able to implement that step.





For the second point:

Calibration of a sensor in a PLC normally follows one of the following methods:

Hard coded or HMI parameters for min/max values (maybe an offset, i.e. +x) - this is the simplest. You would usually use this for a very stable sensor, maybe a temperature or pressure sensor. Hard coding is lazy, as you can't swap to a different make of sensor if the sensor is damaged; using HMI parameters is best practice. This works well for linear sensors, and calibration is mostly manual, where a technician enters the required values.

Self calibration against references, which is what you have been given, i.e. take readings of references and calculate the min/max parameters. The math for this can be quite complex depending on the linearity of the sensor.

Semi-manual calibration, where you have a self-calibrate routine but someone manually applies the reference to the sensor. This is fairly common in things like portable pH probes, which cannot self-apply a reference.


For your third point:

The calibration does not change for 2 or 4 wire sensors, as the sensor is sending the same signal via a slightly different hardware method.
 
Here is an illustration of what I understand from the assignment (see attached).

Here are the test criteria required to pass the assignment successfully (see attached).
 

I am really feeling confused by the description of the assignment. If I have to calibrate the sensor, I think the calibration should be done regularly regardless of whether the button is pressed; otherwise at some point the sensor will give wrong/bad readings.

Another thing: how can I get readings from the sensor for 30 seconds?
My plan is to use a 30-second timer to hold in the 0% valve. While the 0% valve is energized I need to take 30 readings over that 30-second period, then close (de-energize) the valve, and do the same for the 30% valve. How do I collect the readings for each valve? Do I need a timer with a 1-second preset that I reset every second, so I can use its DN bit to count off the 30 readings? Or am I thinking about this wrong?

Also, after I get the average values for each gas and do the calculation mentioned in the first post, where do I apply the calibration (in which part of the program)? Or do I need another SCP instruction?





Please show us your logic.
Then when you have a specific question we can assist.

Regards,

I will come back later to share it with you.
 
I am really feeling confused by the description of the assignment. If I have to calibrate the sensor, I think the calibration should be done regularly regardless of whether the button is pressed; otherwise at some point the sensor will give wrong/bad readings.

In a plant, calibration would be a process that is scheduled and run by an operator, or triggered by an event such as startup. Calibration itself is a field of study worthy of many PhDs, but the basics are the following criteria:

1) The systematic sensor error is known - i.e. it's always +0.1 or -0.2 from the calibration standard.
2) The sensor drift over time is known - i.e. on July 1st it was +0.1, on August 1st it was +0.2.
3) The sensor resolution - i.e. whether it will reliably detect a 0.1 difference or a 0.01 difference.

From 1 and 2 you can know what your reading is likely to be when compared to the reference standard. 3 helps with knowing what the value probably is.

As for accuracy, that depends on the system. One of my systems only needs to detect a difference of 10 kN between three load pins. The actual reading in kN doesn't matter, as long as 1 kN on the sensor is 1 kN of applied force. 0 kN doesn't need to be 0 kN applied force, but all three load pins need to agree on what 0 kN is.

Your application suggests that someone sets it into calibration, so I would happily leave it at that and not worry. Operators in this instance can be trusted to follow calibration plans :)

Another thing: how can I get readings from the sensor for 30 seconds?
My plan is to use a 30-second timer to hold in the 0% valve. While the 0% valve is energized I need to take 30 readings over that 30-second period, then close (de-energize) the valve, and do the same for the 30% valve. How do I collect the readings for each valve? Do I need a timer with a 1-second preset that I reset every second, so I can use its DN bit to count off the 30 readings? Or am I thinking about this wrong?

So far so good. Your idea will work in principle; in reality it might be a bit difficult to debug.

Other options:
In a Siemens PLC there is a 1Hz clock bit. I don't know if that exists in your PLC, but you could implement one using timers.

On the other hand, an easy solution is to set up a cyclic task on a 1 second period. Every second you can take a reading because your code is called once a second.

Also check https://en.wikipedia.org/wiki/Moving_average if you are not familiar with the concept of moving averages. It's often helpful to calculate the average during data collection instead of afterwards.
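
If a FIFO feels like overkill, a running (cumulative) average avoids storing all 30 samples. Here is a minimal Python sketch of the idea; the once-per-second call would come from your 1-second timer DN bit or a cyclic task, and the names here are mine, not any PLC instruction.

```python
# Running average: accumulate sum and count, divide at the end.
class RunningAverage:
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def add(self, reading):
        # Call once per sample, e.g. on each 1-second timer DN pulse.
        self.total += reading
        self.count += 1

    def average(self):
        return self.total / self.count if self.count else 0.0

# Usage: feed one reading per second for the 30-second period, then read the result.
avg = RunningAverage()
for reading in [118, 122, 119, 121, 120]:   # stand-in samples
    avg.add(reading)
print(avg.average())   # 120.0
```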

Also, after I get the average values for each gas and do the calculation mentioned in the first post, where do I apply the calibration (in which part of the program)? Or do I need another SCP instruction?

Your calculation will give you a max and min number. These should then be moved into the Max/Min scaling of the sensor. The sensor will now read correctly in the PLC. Calibration is finished.

In a real world example you would allow an undo feature if you were feeling nice to the operator... or the PM was smart and made it a requirement.
 
Calibration:
Open the 0% valve.
Wait 10 seconds, take a reading, wait 1 second and take another reading; IF they are the same, take 10 seconds of readings.
Average these and use the result as your 0% reference.
Close the 0% valve and open the 30% valve.
Wait another 10 seconds and take a sample, wait 1 second, and if the sample is the same take 10 readings and average them.
This is your 30% reference.
Now you can open the normal valve again and take readings.
The calculation is simple: y = (reading - min) / (max - min) * 30%
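
A rough Python sketch of that sequence, just to show the structure; read_sensor, the delays and the tolerance are placeholders I invented, not real PLC calls.

```python
import time

def sample_reference(read_sensor, settle_s=10, tolerance=1, n_samples=10, period_s=1):
    """Wait for the reading to settle, then average n_samples taken period_s apart."""
    time.sleep(settle_s)                 # purge time after the valve opens
    first = read_sensor()
    time.sleep(period_s)
    second = read_sensor()
    if abs(second - first) > tolerance:  # not stable yet - caller can retry
        return None
    readings = []
    for _ in range(n_samples):
        readings.append(read_sensor())
        time.sleep(period_s)
    return sum(readings) / len(readings)

# Demo with a fake, already-stable sensor and shortened delays:
fake_sensor = lambda: 120
zero_avg = sample_reference(fake_sensor, settle_s=0.1, period_s=0.01)
print(zero_avg)   # 120.0; repeat with the 30% valve open to get the span average
# A live reading would then scale as: o2 = (reading - zero_avg) / (span_avg - zero_avg) * 30.0
```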
 
Sorry for the late response, I have been very busy... o_O

So far I have tried to complete the logic of the calibration part, but I think I misunderstood the assignment and have done it in an overly difficult way.

Things that I really don't know about:

- I am working on Allen-Bradley RSLogix Micro Starter Lite, and it seems that when I want to do any math I have to use separate math instructions, which just makes the program long. (I keep reading about running a trial version of RSLogix in VMware, but I really didn't understand what people here in the forum are describing; there are no clear steps for that.)

- After I do the math part of the calibration I will get two values. Should those values be assigned in the SCP as (input min & input max) or as (scaled min & scaled max)?

- I think I have made a mess of getting the readings from the sensor for 30 seconds; I am looking for advice to help minimize, or at least optimize, that code.



I have attached the printed version of the program, and I am really looking forward to your advice, guys...
 
It doesn't look too bad. There are a million different ways you can code a sequence problem and everyone has a way they think is best. Ultimately you should try for simple code. Clever is great but if you or anyone else can't understand it later it has failed.

One thing you need to think about is that using timers like that is not terribly accurate because the time the program takes to scan will introduce errors into the 1 second timer cycles. Accuracy is not terribly important in this case but you should probably make the long timers run for 31 seconds to make sure you get the last sample of each batch.

The SCP instruction just uses the parameters to calculate y = mx + c for the data provided.
The input min/max values are your measured mean values for 0% and 30%. The output values will be your engineering units for those same points, i.e. 0 and 30. Normally you would use reals or scaled integers (3000 = 30) for these values, or the result is going to be pretty rough.
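
In other words, the SCP is just fitting a straight line through two points. A quick Python sketch of the same arithmetic, using scaled integers so that 3000 represents 30.00% (the numbers are made up):

```python
def scp(value, in_min, in_max, out_min, out_max):
    # Same y = m*x + c line that the SCP instruction fits through two points.
    m = (out_max - out_min) / (in_max - in_min)
    c = out_min - m * in_min
    return m * value + c

# Input min/max are the measured calibration averages; output is engineering
# units scaled by 100, so 0..3000 represents 0.00..30.00 % O2.
zero_avg, span_avg = 120.0, 12400.0   # made-up raw averages
raw_reading = 6260.0                  # made-up live reading
print(round(scp(raw_reading, zero_avg, span_avg, 0, 3000)))   # ~1500, i.e. 15.00 % O2
```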
 
It doesn't look too bad. There are a million different ways you can code a sequence problem and everyone has a way they think is best. Ultimately you should try for simple code. Clever is great but if you or anyone else can't understand it later it has failed.
That's what I am trying to do: keep the code as minimal as I can so it can be understood by everyone who comes after me. I am also trying to look at other people's code and analyse it to learn more.

It would be appreciated if you could point me to any good source of code that I can learn from.


One thing you need to think about is that using timers like that is not terribly accurate because the time the program takes to scan will introduce errors into the 1 second timer cycles. Accuracy is not terribly important in this case but you should probably make the long timers run for 31 seconds to make sure you get the last sample of each batch.
I have read a lot about this accuracy issue here in the forum, but to be honest I couldn't understand most of what I read, because in my opinion no one has described the issue clearly.

So if there is any reference or paper that would help with this matter, I would be thankful.


The SCP instruction just uses the parameters to calculate y = mx + c for the data provided.
The input min/max values are your measured mean values for 0% and 30%. The output values will be your engineering units for those same points, i.e. 0 and 30. Normally you would use reals or scaled integers (3000 = 30) for these values, or the result is going to be pretty rough.


Can you explain this part again? Sorry, I didn't get what you are trying to say...
 
2.
Imagine you have two timers. One is set for 30 seconds. One is set for 1 second. Start both at the same time.
Every time the one second timer elapses you restart it. It takes time to see that the 1 second timer has elapsed and then to restart it. When the 30 second timer elapses the 1 second timer will not have completed 30 cycles.

Logic is evaluated from top to bottom. The order of the rungs is important.
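
If it helps, here is a small Python simulation of that effect; the 23 ms scan time is an arbitrary assumption, and the point is only that restarting the 1-second timer at scan boundaries throws away the overshoot on every cycle.

```python
scan_time = 0.023        # assumed 23 ms program scan
one_sec = 0.0
thirty_sec = 0.0
cycles = 0

while thirty_sec < 30.0:
    thirty_sec += scan_time
    one_sec += scan_time
    if one_sec >= 1.0:
        cycles += 1
        one_sec = 0.0    # restart discards the time accumulated past 1.0 s

print(cycles)   # 29, not 30 - the lost fractions add up over the 30 seconds
```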

3.
The instruction is trying to map the values you put into it (N7:0 at 0 and 30%) to engineering units. To do this you give it the equivalent values of input and output at two points and then it calculates values along the line between them.
Using an integer for this calculation will limit the resolution of the output to integer values (0-30 gives 1% resolution, 0-3000 gives 0.01% resolution).
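
To see the resolution point in numbers, compare an integer output range of 0-30 with 0-3000 (a quick Python illustration with made-up calibration values, not PLC code):

```python
def scale_to_int(raw, in_min, in_max, out_min, out_max):
    # Integer result truncates to whole counts, like an N7 integer register.
    return int((raw - in_min) * (out_max - out_min) / (in_max - in_min) + out_min)

in_min, in_max = 120, 12400          # made-up calibration averages
raw = 3200                           # some live reading

coarse = scale_to_int(raw, in_min, in_max, 0, 30)     # 1 count = 1 % O2
fine   = scale_to_int(raw, in_min, in_max, 0, 3000)   # 1 count = 0.01 % O2

print(coarse, fine)   # 7 vs 752, i.e. 7 % versus 7.52 %
```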
 
@MaGoOoDy
Did you ever get this Oxygen Probe calibration program finished?

He did have plenty of time to do it...

This brings up one question though: how many people here know of the state machine concept?

This was a classic example to be solved with one, but many people struggle with locking in the phases of a sequence in code (no matter which platform).
 
