Machine Vision Application Assistance

rguimond

Please allow me to briefly explain a problem I need a solution to:

We have an in-floor conveyor system that carries product from up to four different fillers to a cooler where it’s identified manually and stored accordingly. I want to eliminate the labour associated with identifying and storing the product. I have the storage problem solved, but I need a way to identify the product.

The product (fluid milk) is put into various containers (jugs, ½ gallon cartons, quart cartons or bulk bags), which eventually make their way into standard milk cases. Each filler has its own “stacker” that creates stacks of six cases. My plan is to mount a camera over the top of every stacker and to take a picture of the top case before the stack is released. A camera just upstream of the storage equipment will compare what it sees to pictures in the system memory to determine when it is released and where it stops. I do not need a camera model that’s capable of differentiating between products – I just need one that can compare what it sees to what another camera has seen.

Ideally, the cameras will be capable of communicating directly with an Allen-Bradley PLC over DeviceNet. I was hoping that one or more analog values could be generated by the cameras over the stacker so I can store them in a FIFO buffer for the final camera to compare against. Perhaps this concept is flawed; I would appreciate your recommendations. Various machine vision providers seem to be stuck on providing cameras and systems that identify the product in a case, but that's overkill here.
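The FIFO idea above can be sketched in a few lines. This is a minimal illustration only, assuming each camera can output a single scalar "signature" value per stack (an assumption about the cameras, not a confirmed feature), with the stacker cameras pushing values in release order and the sorter camera popping and comparing within a tolerance:

```python
from collections import deque

TOLERANCE = 5.0  # placeholder match window; real tolerance depends on the camera

fifo = deque()   # signature values captured by the stacker cameras, in release order

def stacker_capture(value):
    """A stacker camera pushes its measured value when a stack is released."""
    fifo.append(value)

def sorter_compare(value):
    """The sorter camera pops the oldest stored value and checks for a match."""
    expected = fifo.popleft()
    return abs(value - expected) <= TOLERANCE

stacker_capture(42.0)        # stack released from filler 1
stacker_capture(87.5)        # stack released from filler 3
print(sorter_compare(43.1))  # first stack arrives, within tolerance -> True
print(sorter_compare(60.0))  # second stack, far from 87.5 -> False
```

This only works if stacks reach the sorter in the same order they were released, which the in-floor conveyor should guarantee.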

Can anyone point me in the right direction?
 
rguimond,

I'm not sure exactly what you want, but here's my take on your app.

1. You will have a camera at the stacker.
The camera will take a picture of the container, compare it to programmed samples, and send a signal to the PLC.

2. The PLC logic will load the FIFO.

3. The final camera will take a picture of the container and send a signal to the PLC.

4. The PLC will compare the results:
if they're equal - keep going
if they're not - ???????

The problem with vision cameras is that they must be programmed using master samples, and you must keep these samples.

While programming these samples, allowances must be made:
product color changes, height changes, size changes, lighting changes, and dirt and smudges all cause changes.

All of this must be taken into consideration.

My preference is Cognex, but there are many other brands as well.


I re-read your post.
Vision systems can tell you position, but the number is constantly changing.
You may need to use encoders, photocells, and air stops to make everything work correctly.
Hope this helps,
James
 
I don't believe you will be able to accomplish the proposed solution (as the OP described it) over DeviceNet; DeviceNet is a low-level data exchange protocol, not capable of the bandwidth required by "pixel compare" vision applications.
You could use DeviceNet if both vision applications run "locally" within the cameras themselves. Once all the possible scenario data (pics) is stored within both vision devices' memories, then, when the user application "decides", trigger a "Scan" command and generate a "bit state change" within the upstream device (the bit corresponding to the current "match"), read the change via DeviceNet within the Scanner/CPU, and then DNet-write it to the downstream device's memory, triggering selection of the "current" pic to be compared against.
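The bit-level handshake described above can be sketched in plain Python. This is an illustration only: the class and attribute names are invented, and real DeviceNet I/O is mapped through the scanner table rather than direct calls:

```python
# Hedged sketch of the DeviceNet handshake: cameras compare locally,
# the PLC only moves match bits between devices.

class VisionDevice:
    """Stands in for a smart camera holding several master pics locally."""
    def __init__(self, n_masters):
        self.n_masters = n_masters
        self.match_bits = [False] * n_masters  # one bit per stored master pic
        self.active_master = None              # template selected for compare

    def scan(self, scene_id):
        # The camera compares the scan to each stored master locally and
        # raises the bit for the one that matched (faked here by scene_id).
        self.match_bits = [i == scene_id for i in range(self.n_masters)]

def plc_cycle(upstream, downstream, scene_id):
    """One cycle: trigger the scan, read the bit change, write downstream."""
    upstream.scan(scene_id)                  # DeviceNet "Scan" trigger
    match = upstream.match_bits.index(True)  # Scanner/CPU reads the bit change
    downstream.active_master = match         # DNet write: pic to compare against
    return match

cam_a, cam_b = VisionDevice(12), VisionDevice(12)
print(plc_cycle(cam_a, cam_b, 7))  # downstream now compares against master 7
```

Only a few bits cross the network per cycle, which is the point: the pixel-heavy work never leaves the cameras.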
 
After speaking with several machine vision camera suppliers, I now see what you mean. I expected each camera to produce one or more analog values that I could store temporarily in a data file. The "sorter" camera would compare its values against these values and determine where the product should go. I've attached a rudimentary version of what I want to do:

One thing that may not be clear from the sketch is that there is a stepper motor close-coupled to a shaft on each pusher bar. Three "flags" are attached to each shaft. The stepper motor advances 90, 180 or 270 degrees, as necessary, to capture product in the correct position before pushing.

I can discard the data once the stack is pushed off the conveyor.
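The quarter-turn flag positioning above reduces to simple step math. A small sketch, assuming a common 200 step/rev stepper at full step (the actual motor resolution and any gearing may differ):

```python
# Convert a flag-shaft rotation (90, 180 or 270 degrees, per the post)
# into a stepper move count. STEPS_PER_REV is an assumption.
STEPS_PER_REV = 200

def steps_for_rotation(degrees):
    """Steps to advance the pusher-bar flag shaft by a quarter-turn multiple."""
    if degrees not in (90, 180, 270):
        raise ValueError("flag shaft only advances 90, 180 or 270 degrees")
    return STEPS_PER_REV * degrees // 360

print(steps_for_rotation(90))   # -> 50
print(steps_for_rotation(270))  # -> 150
```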
 
In this case I would use four cameras, one at every "push" station.
Capture images of all product "colors" and enter the corresponding "color" pic as the "Master" pic within the designated "color pusher" vision device.
Once the stopper is released and the first product reaches Pusher #4 (proximity switch detected), trigger the "Teal" camera via the DeviceNet interface and compare the current scan to the "Teal" Master pic. If the result is "True", trigger the Pusher #4 actuator over the same DeviceNet interface; if the result is "False", keep the conveyor running and, when the product reaches the Pusher #3 station proximity switch, trigger the "Red" camera, compare its scan to the "Red" Master pic, and follow the same decision-making logic.
Since the upstream conveyor could contain varying numbers of different products, I don't think upstream detection is necessary.
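The station-by-station logic above can be sketched as a simple scan loop. Only "Teal" (Pusher #4) and "Red" (Pusher #3) are named in the post; the colors for Pushers #2 and #1 below are placeholders, and the equality check stands in for each camera's local compare against its single Master pic:

```python
# Pusher stations in the order the product reaches them (4 first).
STATIONS = [(4, "teal"), (3, "red"), (2, "white"), (1, "blue")]

def route(product_color):
    """Return the pusher that diverts this product, or None if no station matches."""
    for pusher, master_color in STATIONS:   # proximity switch fires at each station
        if product_color == master_color:   # trigger camera, compare scan to Master
            return pusher                   # "True": fire this pusher's actuator
    return None  # "False" at every station: product runs past the last pusher

print(route("teal"))  # -> 4
print(route("red"))   # -> 3
```

Note the scheme assumes exactly one master per station; the follow-up posts point out why that assumption breaks down with 12 products per filler.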
 

There are two problems with this proposal:

1. Even though there are only four products being produced at any one time, there may be as many as 12 different products produced on an individual filler in one day. This means each camera would have to be capable of identifying many products.

2. Some products don't sit uniformly from case-to-case. Small single-serve jugs, for example, tend to scatter when they're cased, making accurate identification very difficult.
 
You should be able to accomplish your task with only the one camera at the end of the conveyor, looking down into the open top of a case, before the sort at your storage location.

The vendors are somewhat correct, but they're making this overly complex. You should be able to use the camera to identify the TYPE of container by the distinct features of each, not necessarily the particular product. For example: gallon jugs are aligned, fit four to a case and have caps showing; paper cartons are a different size, have no caps, but show the ridges of the cartons; etc. The single-serve jugs may be what's identified by default if no other type is, or there may be some distinct signature of size or of light and dark that can be picked out that differs from the other containers.

Using this scheme you can minimize the number of variables and certain combinations of these variables that are "found" can be used to identify the type of product and trigger the appropriate output to your sorting device.
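The feature-combination idea above can be sketched as a small decision table. This is illustrative only: the feature names and thresholds are invented placeholders, not outputs of any particular vendor's tools:

```python
# Classify the container TYPE from a few coarse vision features instead
# of matching the exact product. All thresholds are hypothetical.

def classify(caps_found, ridge_edges, bright_blobs):
    if caps_found == 4:
        return "gallon jugs"        # four aligned caps showing per case
    if caps_found == 0 and ridge_edges > 10:
        return "paper cartons"      # carton ridges visible, no caps
    if bright_blobs > 6:
        return "single-serve jugs"  # many small scattered highlights
    return "unknown"                # default: send for manual check

print(classify(4, 0, 0))   # -> gallon jugs
print(classify(0, 14, 0))  # -> paper cartons
print(classify(0, 2, 9))   # -> single-serve jugs
```

The "unknown" default is deliberate: an unclassifiable case should stop for an operator rather than be mis-sorted.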

Cognex is my preferred choice as well and I'm thinking a more powerful camera (not a checker) might be required. The spreadsheet is a very powerful tool in applications like this.

Cheers

Ken
 

Any middle-tier vision device could store 12 or more Master pics, depending on the required resolution; again, it is a matter of establishing the "template" the scan result will be compared against.
Issue #2 is quite acute, if I may say so. You have to remember that vision devices compare "pixel distribution" against a given "model" and nothing else; you will have to find at least one "common denominator" (specific only to the product to be sorted at the designated station) which will represent or decide the "match".
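A toy illustration of "pixel distribution" matching: compare a scan's gray-level histogram against a stored model and score the overlap. Real vision tools are far more sophisticated; this only shows why scattered single-serve jugs are hard, since their distribution shifts case to case:

```python
# Minimal histogram-overlap matcher. Bin count and pixel values are
# invented for the example.

def histogram(pixels, bins=4, max_val=256):
    """Bucket 8-bit gray levels into a coarse histogram."""
    hist = [0] * bins
    for p in pixels:
        hist[p * bins // max_val] += 1
    return hist

def overlap(model, scan):
    """Fraction of pixels whose bin counts coincide (1.0 = perfect match)."""
    return sum(min(a, b) for a, b in zip(model, scan)) / sum(model)

model = histogram([10, 10, 200, 200, 120, 130])
scan  = histogram([12, 11, 198, 205, 125, 128])
print(overlap(model, scan))  # same coarse distribution -> 1.0
```

A product whose containers shift position can still keep a stable histogram, which is one possible "common denominator".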
 
I am currently setting up a vision system:
- You can use EtherNet/IP or digital wiring.
- Each camera can be taught to accept multiple trained images.
- You can send the measurement value of an image, though it might be the same value as an incorrect crate.

Not certain if this helps.
 
From much experience with applications that have a great deal of variation, I would strongly suggest that you stay away from image matching and trained tools for this kind of application. In the right application these tools can be very powerful, but they tend to work best within a narrow range of variability. Leaning toward general area tools that look for the distribution of light and dark will make for a much more robust application. Blob tools are great for this type of gross product detection and can sort based on size, shape and count. These differences determine your product type, and this is where I would start. I think you will find that in the end you will have far fewer headaches with simple tools than with those that use image matching or require training. That said, Cognex's PatMax tool is pretty robust and can be tweaked to allow for a large amount of variability without failure of the tool. On the flip side, PatMax requires a big (expensive) processor and is an extra cost.
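The blob counting above is essentially connected-component labeling on a thresholded image. A minimal sketch (flood fill on a binary grid; the example "case top" data is invented, four bright cap-like regions standing in for a four-jug case):

```python
# Count 4-connected regions of 1s in a binary grid, the core of a blob
# tool's "count" output. Thresholding into 0/1 is assumed done upstream.

def count_blobs(grid):
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                blobs += 1                       # new region found
                stack = [(r, c)]
                while stack:                     # flood-fill the region
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and grid[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return blobs

case_top = [
    [1, 1, 0, 1, 1],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 0, 0],
    [1, 1, 0, 1, 1],
    [1, 1, 0, 1, 1],
]
print(count_blobs(case_top))  # four cap-like blobs -> 4
```

A blob count of 4 could flag gallon jugs, while many small blobs could flag scattered single-serve jugs, which is the kind of gross sorting the post recommends.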

Cheers

Ken
 
I would warn you to take the speed of the process into consideration before selecting an IFM Dualis.

We have two that we've tried using in various processes throughout our facility, but they end up being too slow for us. They're easy to program, and generally pretty nice, but at around $1,000 they are definitely on the lower end of the spectrum in my experience.

I ended up going with two Cognex 7000-series cameras, and they did what I needed at a much faster rate. Probably triple the price of an IFM, but you get what you pay for.

Good luck.
 
