Non-synchronized vision-based conveyor rejection

@Starbot1 I think @PeterN meant a line camera that has one row of pixels aimed at a fixed line across the conveyor's direction of motion and takes a line image at regular intervals; the conveyor's movement generates the second dimension, and multiple one-row images are concatenated into a 2D image.


It's a pretty standard technique; e.g., the MRO and LRO missions to Mars and the Moon use it. Obviously there is a smear issue, but if the line is narrow enough that may not matter. Another similar technique is Time-Delay Integration (TDI), which is essentially several line cameras in parallel on a single detector, geometrically and operationally; the charge transfer rate on the detector chip is matched, geometrically, to the scene's movement in the field of view. It is used when a line camera cannot integrate enough light per row exposure to make a useful image. The Ralph instruments on the New Horizons mission and on the just-launched Lucy mission use TDI.
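For intuition, here is a minimal plain-Python sketch of both ideas under idealized assumptions (perfect synchronization, no noise; all names are mine, purely illustrative): a line camera concatenates one-row exposures into a 2D image, and TDI integrates N exposures of the same scene line to boost signal.

```python
# Line camera: each exposure is one pixel row; belt/scene motion supplies
# the second dimension, so concatenating rows yields a 2D image.
def line_scan_frame(rows):
    return [list(r) for r in rows]

# TDI (idealized): N stages each expose the same scene line in succession
# while the charge is shifted in step with the motion, so each output row
# integrates N exposures of that line. With a noiseless sensor this is
# simply N times the single-exposure signal per pixel.
def tdi_frame(rows, n_stages):
    return [[p * n_stages for p in r] for r in rows]

dim_rows = [[1, 2, 1], [0, 3, 0]]    # toy 3-pixel line exposures
print(line_scan_frame(dim_rows))     # [[1, 2, 1], [0, 3, 0]]
print(tdi_frame(dim_rows, 4))        # [[4, 8, 4], [0, 12, 0]]
```

The point of the TDI line is the signal multiplication: a single exposure too dim to use becomes usable after N stages of integration.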


Anyway, whether it's a line or TDI system or a 2-D CCD, I doubt lighting or smear is a significant problem in an industrial environment. There might be an advantage to a line camera for your application, but you already have a system in place detecting product to reject, so at best a line camera is something to look into for the next iteration.
 
Again, I think you're trivializing the vision side.
I trivialize nothing. I know what it takes.



A line scan camera does not tell you much without an algorithm.
It can.


And ML algorithms like context, i.e., one line will not tell us anything.
It can. Think about what drbitboy said. An image can be made one encoder count/line scan at a time and the data is synchronous with the belt. Also, processing can be started as the product comes into view.
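To make that concrete, here is a hedged sketch (class name, counts-per-inch, and row width are my illustrative assumptions, not from any real system) of building the image one line per encoder count, so rows are indexed by belt position rather than time:

```python
from collections import deque

COUNTS_PER_INCH = 40  # e.g., one scan per 1/40 inch of belt travel

class LineScanQueue:
    """Accumulate line scans keyed by encoder count, so the image's row
    axis is belt position; processing can begin before the frame is full."""

    def __init__(self):
        self.rows = deque()          # (encoder_count, pixel_row) pairs

    def on_encoder_count(self, count, pixel_row):
        """Called once per encoder count with the camera's latest row."""
        self.rows.append((count, pixel_row))

    def travel_inches(self):
        """Belt travel spanned by the buffered image so far."""
        if len(self.rows) < 2:
            return 0.0
        return (self.rows[-1][0] - self.rows[0][0]) / COUNTS_PER_INCH

q = LineScanQueue()
for c in range(81):                  # 81 counts = 2 inches of travel
    q.on_encoder_count(c, [0] * 8)   # dummy 8-pixel rows
print(q.travel_inches())             # 2.0
```

Because the rows are tied to encoder counts rather than wall-clock time, belt speed changes do not distort the image, which is the synchronization point being made above.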



You need a full 2MP frame so that the product can be categorized by an ML algorithm (which cannot work on a line sample alone; it must have context and see the full product at once).
I thought you said there are four lanes. The data you get between the lanes can't be very useful.



Again, you are acting as if this is a plug-and-play solution I can grab from Cognex. I assure you, it cannot be done that way, even with their D900 deep learning camera. They are light-years behind ML when it comes to categorization or any organic analysis.
I didn't say the solution would be off the shelf or use a Cognex. I know.


How are you going to do all that machine learning in a PLC?
The data transfer times between the PLC and whatever you are using for the machine learning or defect detection will be a performance inhibitor.



Just so you know that I know what I am talking about: this video shows how we can scan potato strips and cut out the defects. We use what amounts to a line scan camera that takes a scan every encoder count, which is about 1/40th of an inch. The image is built in memory in a queue. It is very important that everything is synchronized so that the knives cut at the right spot. Notice the viewers are low; this keeps out ambient light that can screw up the scanning. You haven't addressed this problem yet. Each lane has its own viewer and pair of knives. All the nitty-gritty processing and motion control is handled in the 64-bit CPU for each lane.

There is AI of sorts; it is called classification. We classify each pixel. We had to teach the "AI" how to classify by running many strips through the machine and having a human grade the output. We used a programming language called R to do this on an AMD Threadripper. The Threadripper is only used for the learning; it is not part of the operational machine. The information that is learned is programmed into the lane CPUs. These machines can have 32 to 48 lanes. All lanes are coordinated using a real-time Linux.

Turn down the volume; unfortunately this is more of a marketing video, but you will get the point.

https://deltamotion.com/peter/Videos/Delta Fry Cutting Machine Demo.mp4

We have been doing this for 38 years now.
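The per-pixel classification described above could be sketched roughly like this. This toy version is my assumption, not Delta's actual method: it compresses offline training (graded samples, majority vote per quantized pixel value) into a 256-entry lookup table, so the runtime cost per pixel on a lane CPU is a single table lookup.

```python
DEFECT, GOOD = 1, 0

def train_lut(samples):
    """samples: (pixel_value, human_grade) pairs from graded strips.
    Majority vote per 8-bit pixel value yields a 256-entry lookup table.
    (The real training used R on a workstation; this is a toy stand-in.)"""
    votes = [[0, 0] for _ in range(256)]
    for value, grade in samples:
        votes[value][grade] += 1
    return [DEFECT if v[DEFECT] > v[GOOD] else GOOD for v in votes]

def classify_row(lut, row):
    """Runtime step: classify every pixel of one line scan by lookup."""
    return [lut[p] for p in row]

# Toy training data: dark pixels were graded as defects by a human
samples = [(20, DEFECT), (25, DEFECT), (200, GOOD), (210, GOOD)]
lut = train_lut(samples)
print(classify_row(lut, [20, 200, 25]))  # [1, 0, 1]
```

The design point is the split the post describes: the expensive learning happens offline, and only the learned table is deployed to the per-lane CPUs.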
 
How are you going to do all that machine learning in a PLC?
The Machine-Learned algorithm is performed in a "vision system," i.e. I assume that is a CPU separate from the PLC.

All the PLC knows is the location of reject product at time [NOW-deltaT], and it knows deltaT (when the image is taken).

This is more or less a straightforward bit-shift program driving a reject diverter paddle; the only slight twist is that the status (accept or reject) of a bit is not known when that bit is pushed onto the shift register; rejects become known later. So the PLC code will assume all bits are accept (0) and only change the rejected bits to 1 after they have already moved partway down the pipeline.
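A hedged Python sketch of that scheme (the pipeline length, delay, and single-lane simplification are my illustrative assumptions; a real implementation would be PLC shift-register code):

```python
from collections import deque

ACCEPT, REJECT = 0, 1
PIPELINE_LEN = 16        # belt positions between camera line and diverter
VISION_DELAY = 3         # positions travelled before the verdict arrives

# One bit per belt position; index 0 is at the camera, index -1 at the paddle.
register = deque([ACCEPT] * PIPELINE_LEN, maxlen=PIPELINE_LEN)

def on_encoder_count(new_reject_positions):
    """One belt increment: push an assumed-accept bit, then flip any bits
    the vision system has since flagged partway down the pipeline."""
    register.appendleft(ACCEPT)           # newest product assumed good
    for pos in new_reject_positions:      # late verdicts from vision
        if 0 <= pos < PIPELINE_LEN:
            register[pos] = REJECT
    return register[-1]                   # bit arriving at the paddle

# A product enters at count 0; vision flags it VISION_DELAY counts later.
fired = []
for count in range(PIPELINE_LEN):
    rejects = [VISION_DELAY] if count == VISION_DELAY else []
    fired.append(on_encoder_count(rejects))
print(fired[-1])  # 1: the reject bit reaches the paddle on the last shift
```

The late write into the middle of the register is exactly the "only slight twist" described above: the bit is already in flight when its verdict arrives.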
 
The Machine-Learned algorithm is performed in a "vision system," i.e. I assume that is a CPU separate from the PLC.
The machine learning takes a lot of teaching. Machines just don't learn unless they are chess AIs where they can play each other to 'grade' or teach each other.







All the PLC knows is the location of reject product at time [NOW-deltaT], and it knows deltaT (when the image is taken).
This is more or less a straightforward bit-shift program driving a reject diverter paddle; the only slight twist is that the status (accept or reject) of a bit is not known when that bit is pushed onto the shift register; rejects become known later.

So the PLC code will assume all bits are accept (0), and only change the rejected bits to 1 when they have already moved partway down the pipeline.
Yes, there can be many strips in the queue between the viewers and the knives. The strip closest to the knives gets all the attention.


What will determine if the product needs to be rejected or not?
Certainly not the PLC.



We use a lot of processing power to do this. Each lane CPU is a 64-bit CPU that does all the processing for that lane, so no communications are necessary. The lane CPUs also do all the motion control.
 
Sorry it’s been so difficult to communicate with you. We know how hard it is. We’ve made a robotic trimmer with the algorithm and are porting it to a new application. The robot utilized an i5 CPU and a GTX 1060 GPU. TensorFlow CNNs… you know the stuff… YOLO, etc.

Thanks for your input but I don’t think it’s with the right intentions for some odd reason.🙃🙃

 
And yes, sorry for not divulging more; I’m not at liberty to. I’m sure as machine builders, you know what I mean!!
 
Thanks for your input but I don’t think it’s with the right intentions for some odd reason.🙃🙃
The intention was to keep the forum from flailing around like it often does. I really wish people would simply say what they are trying to do instead of us having to pull teeth. I wasn't trying to sell you anything. I was just trying to save you and the forum some time. We have many years' experience at this and have a methodology that we use to approach these kinds of problems. I know it isn't easy, and off-the-shelf solutions don't really exist.
Hopefully TensorFlow works for you. Obviously, TensorFlow did not exist 30+ years ago.
 
You’ve done nothing but flail around in a productive thread. As drbit summarized, what I am doing with the PLC is not all that unusual, and vision is often mixed with motion in this way.

Our cameras capture at 100 fps. The algorithm processes at 15 fps. I understand TensorFlow wasn’t there 30 years ago, and neither was the ability to classify and identify organic objects in a frame. We are, indeed, in 2021, and there is no reason to use 30-year-old vision tech. The bit registers and conveyors haven’t changed all that much, and hence, the PLC.
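One common way to reconcile a 100 fps capture rate with a 15 fps algorithm is a keep-latest queue, so the classifier always works on fresh data instead of a growing backlog. This sketch is my assumption about the queue discipline, not a description of the actual system:

```python
from collections import deque

# A deque with maxlen=1 silently discards the older frame on every
# append, so the capture side never blocks and never backs up.
latest = deque(maxlen=1)

def on_capture(frame):
    """Capture thread, ~100 fps: overwrite whatever is waiting."""
    latest.append(frame)

def next_frame_to_process():
    """Inference thread, ~15 fps: take the newest frame, if any."""
    return latest.pop() if latest else None

for i in range(100):                 # ~1 s of capture at 100 fps
    on_capture(i)
print(next_frame_to_process())       # 99: only the newest frame survives
```

The trade-off is that dropped frames are never analyzed; that is acceptable here only because the belt position of each processed frame is still known, so rejects can be tracked regardless.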
 
