Hi all,
I currently have a rejection conveyor belt with 4 lanes and 1 paddle per lane, sorting the quality of a particular agricultural product. Quality is classified by machine learning on a stereo Basler camera setup. The vision system writes an array of values into Modbus registers, one pair per rejected sample on the belt (in camera frame). At most about 20 objects are detected per frame.
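For concreteness, here is a toy sketch (not my actual code, and the register layout is an assumption for illustration) of unpacking a flat block of Modbus holding registers into coordinate pairs, one pair per rejected object, capped at 20 objects per frame:

```python
# Hypothetical register layout: [x0, y0, x1, y1, ...], 16-bit values,
# with the vision system separately reporting how many objects are valid.

MAX_OBJECTS = 20  # vision system detects at most ~20 rejects per frame

def unpack_rejects(registers, count):
    """registers: flat list of 16-bit register values.
    count: number of valid objects reported by the vision system."""
    count = min(count, MAX_OBJECTS)
    return [(registers[2 * i], registers[2 * i + 1]) for i in range(count)]

# Example: three rejects reported in a 40-register block
regs = [120, 15, 340, 22, 510, 8] + [0] * 34
print(unpack_rejects(regs, 3))  # [(120, 15), (340, 22), (510, 8)]
```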
The vision system is fast: 15 fps segmentation. On top of that I have a bit of MATLAB that takes 300 ms by design, including the original machine learning/vision analysis (it takes a segmented image, evaluates it a bit, and writes the Modbus registers).
The belt moves at 140 mm/s, with an encoder wheel riding on top. The encoder is 100 PPR, and with the wheel it produces a 70 Hz pulse train (a period of about 14.3 ms) at a resolution of 2 mm/pulse, which is about what is required. It feeds a high-speed counter on a CLICK PLUS PLC.
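The encoder numbers above can be sanity-checked with a couple of lines of arithmetic:

```python
# Sanity check of the encoder numbers: a 140 mm/s belt at 2 mm/pulse
# gives a 70 Hz pulse train with a ~14.3 ms period.

belt_speed_mm_s = 140.0
mm_per_pulse = 2.0

pulse_freq_hz = belt_speed_mm_s / mm_per_pulse   # 70.0 Hz
pulse_period_ms = 1000.0 / pulse_freq_hz         # ~14.29 ms

print(pulse_freq_hz, round(pulse_period_ms, 2))  # 70.0 14.29
```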
My scan times average 2 ms, rising to a maximum of 8-9 ms when an image is captured, particularly if 20 rejects are in frame and the for loop must iterate through all 20 register pairs. Performance is now excellent, versus moderately successful at a peak scan of 22 ms before optimization.
My question is: am I aliasing the shift register output, causing missed or late rejections at the paddles? My thinking is that the scan period needs to stay under half the encoder pulse period (Nyquist), so under about 7.1 ms. Am I aliasing data by having cycle times close to 9 ms? Or am I missing something else, perhaps? Statistically, we are doing better since reducing scan times.
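To make the aliasing concern concrete, here is a toy simulation (pure Python, nothing CLICK-specific; all names are my own) comparing two ways of advancing a product-tracking shift register. Polling the pulse input for edges once per scan starts missing pulses once the scan period exceeds half the ~14.3 ms pulse period, while reading the free-running high-speed counter and shifting by the count delta does not lose position, since the hardware counter keeps counting between scans:

```python
PULSE_PERIOD_MS = 1000.0 / 70.0  # ~14.29 ms encoder period

def shifts_by_edge_polling(scan_ms, duration_ms):
    """Advances counted when each scan looks for a 0->1 edge on the
    pulse input (at most one advance per scan) -- subject to aliasing."""
    shifts, t, last_level = 0, 0.0, 0
    while t < duration_ms:
        # idealized 50% duty-cycle pulse train
        level = int((t % PULSE_PERIOD_MS) < PULSE_PERIOD_MS / 2)
        if level and not last_level:
            shifts += 1
        last_level = level
        t += scan_ms
    return shifts

def shifts_by_counter_delta(scan_ms, duration_ms):
    """Advances counted when each scan reads the free-running counter
    value and shifts by however many counts elapsed since last scan."""
    shifts, t, last_count = 0, 0.0, 0
    while t < duration_ms:
        count = int(t / PULSE_PERIOD_MS)  # hardware counter, scan-independent
        shifts += count - last_count
        last_count = count
        t += scan_ms
    return shifts

# ~70 true pulses per second; compare both methods at a 9 ms scan
print(shifts_by_edge_polling(9.0, 1000.0),
      shifts_by_counter_delta(9.0, 1000.0))
```

At a 2 ms scan both methods track the full pulse count, but at a 9 ms scan the edge-polling version drops pulses while the counter-delta version does not, which is why shifting by the HSC count delta rather than a per-scan edge bit is generally immune to scan-time jitter.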