The math to implement an Alpha-Beta-Gamma (ABG) filter isn't too hard.
The goal is to achieve 95% of what a Kalman filter can do with only 5% of the effort. That includes both coding effort and CPU usage.
Alpha-Beta-Gamma Filter Code
The first step initializes the filter.
x0, x1, and x2 are the estimated position, velocity, and acceleration.
T is the sample period. Normally this is a constant, but in the case of PLCs it may vary. My motion controller does not suffer from sample jitter the way PLCs do because the encoder positions are latched by an FPGA without the need for software or interrupts. It is important that the update period be known as accurately as possible.
Omega is the filter frequency in radians per second. You can see I compute that from a frequency in Hz. Omega determines how quickly the filter corrects the estimated error.
K0, K1, and K2 are the filter gains for correcting the estimated position, velocity, and acceleration. In a Kalman filter these would be called the Kalman gains. A Kalman filter does a lot of work to calculate these gains so that the error between the actual and estimated states is minimized, but this is often done with inaccurate values for the measurement and system covariances, so the gains get fudged anyway. Why bother? You can see in step 0 that the calculations for K0, K1, and K2 are relatively simple. You select omega, just one number, to achieve the desired results.
Notice that there are three different ways of computing K0, K1, and K2. One can experiment with the coefficients, but I recommend using one of these three sets. The critically damped K gains are the most conservative and the minimum-IAE K gains are the most aggressive. The graph in the first post was made using the Butterworth K gains.
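The step-0 gain formulas themselves aren't reproduced above. As an illustration only (my assumption, not necessarily the original code), one way to arrive at gains like these is to place the poles of a continuous-time observer at magnitude omega using a chosen characteristic polynomial and then Euler-discretize. The minimum-IAE set is omitted here because its polynomial coefficients are not given in the text.

```python
import math

def abg_gains(f_hz, T, tuning="critically_damped"):
    # Place the poles of the continuous-time observer error dynamics
    # at magnitude omega, then Euler-discretize by scaling with T.
    # (Illustrative sketch -- the post's actual step-0 formulas may differ.)
    w = 2.0 * math.pi * f_hz            # omega in rad/s, from frequency in Hz
    if tuning == "critically_damped":   # (s + w)^3
        l1, l2, l3 = 3.0 * w, 3.0 * w**2, w**3
    elif tuning == "butterworth":       # s^3 + 2*w*s^2 + 2*w^2*s + w^3
        l1, l2, l3 = 2.0 * w, 2.0 * w**2, w**3
    else:
        raise ValueError("unknown tuning: " + tuning)
    return l1 * T, l2 * T, l3 * T       # K0, K1, K2
```

With this construction only omega (via the frequency in Hz) and the sample period need to be chosen, which matches the one-knob tuning described above.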
After step 0 executes, the state machine repeats step 1 in a forever loop.
After caching the sample time, the predicted position is calculated. This is done in a straightforward way. Basically the line of code is
x0p = x0 + x1*T + ½*x2*T²
Remember x0=Position, x1=Velocity and x2=Acceleration
The code I wrote is optimized to minimize the number of multiplies. Writing the code in this way also reduces rounding errors.
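Grouping the prediction in Horner form is one way to do that; the function name below is mine, used only to illustrate the single line of code:

```python
def predict_position(x0, x1, x2, T):
    # x0p = x0 + x1*T + 0.5*x2*T^2, grouped (Horner form) so only
    # two multiplies by T are needed and rounding error is reduced
    return (0.5 * x2 * T + x1) * T + x0
```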
Next the error between the predicted position and the actual position from the encoder is calculated. The motion controller takes care of scaling counts to position units so the user, me in this case, doesn't need to worry about it.
The next three steps update the estimated position, velocity and acceleration.
The position update is simple: the predicted position is corrected by K0*err to get the new estimated position, x0.
The velocity, x1, is updated like this:
x1=x1+x2*T+(K1*err)
Again I grouped the calculation to minimize rounding error.
Finally the estimated acceleration is calculated. The new estimated acceleration is the old estimated acceleration plus the correction factor K2*err.
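Taken together, the steps above amount to one predict-correct pass per sample. A minimal sketch, using the state and gain names from the text (the function packaging is mine):

```python
def abg_update(x0, x1, x2, z, T, K0, K1, K2):
    # Step 1 of the ABG filter: predict, measure the error, correct.
    x0p = (0.5 * x2 * T + x1) * T + x0  # predicted position
    err = z - x0p                       # measured minus predicted position
    x0 = x0p + K0 * err                 # corrected position estimate
    x1 = x1 + x2 * T + K1 * err         # corrected velocity estimate
    x2 = x2 + K2 * err                  # corrected acceleration estimate
    return x0, x1, x2
```

On a noiseless constant-velocity input the estimates settle to the true values, since the quadratic prediction model covers that motion exactly.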
One can see that one doesn't really need the estimated position except to calculate the error.
So how does it work?
There are two main parts to this. One is the prediction part. The prediction part basically says that whatever the motion is doing this period, it will keep doing next period until corrected. So if the state is accelerating, it will still be accelerating next time, and the acceleration also affects the velocity and the position. It is this prediction part that reduces the phase delay a Butterworth filter suffers from. Phase delays can lead to instability in feedback systems, so one should choose a filter that does the job with little or no phase delay. The prediction part is the same as what is used in a Kalman filter.
However, the predictions aren't perfect, especially when the acceleration is changing all the time as in the graph in the first post. The error in acceleration must be corrected along with the errors in position and velocity, and that is what the K gains do. This is where the Kalman and ABG filters differ. The gain calculations in the ABG filter are simple but, in theory, not optimal compared to the Kalman gain calculations. In practice the engineer probably doesn't know the measurement noise or system covariance, so they fudge the numbers to get the desired results anyway.
In the graph in the first post I set the filter frequency to 50 Hz for the ABG filter (x0, x1, x2) and 100 Hz for the velocity and acceleration from the 4-pole Butterworth filter. Even though the ABG filter is operating at a lower frequency, one can see that its estimates lead the Butterworth filter's estimates. This is due to the prediction ability of the ABG filter.
The key is to notice how smoothly the velocity and acceleration change. If simple finite-difference velocity and acceleration estimates were used, the minimum change in velocity would be 0.5 revolutions per second and the minimum change in acceleration would be 2000 revolutions per second squared. These coarse calculations are unusable. A graph of unfiltered motion would make this clear.
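To see how coarse raw differences are, assume (my numbers, for illustration) a 2000-count-per-revolution encoder sampled every millisecond: a single-count step changes the naive velocity estimate by 0.5 rev/s, and differencing a second time amplifies that quantization further. A quick sketch:

```python
def naive_rates(counts, counts_per_rev, T):
    # Backward differences on quantized encoder counts, no filtering.
    # Each one-count step produces a large jump in the estimates.
    pos = [c / counts_per_rev for c in counts]
    vel = [(pos[i] - pos[i - 1]) / T for i in range(1, len(pos))]
    acc = [(vel[i] - vel[i - 1]) / T for i in range(1, len(vel))]
    return vel, acc
```

A single count arriving one sample late makes the velocity estimate jump from 0 to 0.5 rev/s and back, and the acceleration estimate swing by hundreds of rev/s², which is why the unfiltered estimates are unusable.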
So why bother?
The code is relatively simple compared to the benefit. In motion control one often needs to gear to a feed chain or conveyor. Crude motion controllers simply use a PI to follow the master position. Following errors will be high because the derivative gain and feed forwards are not usable. How many times have you seen comments like "I don't use the derivative gain because it is noisy"? With accurate estimates of the master's velocity and acceleration we can use the derivative gain and both the velocity and acceleration feed forwards. The slave actuators can then accurately follow the master encoder.
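As a hedged sketch of how the filter's estimates might feed such a gearing loop (the gain names Kp, Kd, Kvf, Kaf and the function itself are mine, not the author's controller):

```python
def geared_output(master_v, master_a, err, derr, gear, Kp, Kd, Kvf, Kaf):
    # PD on the following error plus velocity and acceleration
    # feed forwards driven by the filtered estimates of the master.
    # gear is the master-to-slave gearing ratio.
    return Kp * err + Kd * derr + Kvf * gear * master_v + Kaf * gear * master_a
```

The point is that Kd, Kvf, and Kaf only become usable once master_v and master_a are smooth estimates rather than raw differences.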
I can extend this to estimate jerk for a jerk feed forward too.
I would be interested in any questions or comments. I am writing a magazine article on filters and I don't want to make it too technical.
Again, the time I have spent on sci.engr.control and comp.dsp has led me to believe that few implement a Kalman filter as it should be implemented. They should have simply implemented an ABG filter instead. What I like is that the user only needs to select the filter frequency and adjust it until the desired results are achieved. Simple.