Artificial Intelligence/Fuzzy

baden

I know this may be off topic but I am curious.

I have been itching to come up with a program that can teach itself. As in, it would recognize that the process variable is starting to become uncontrollable and would recalculate where it needs to be. This would be most recognizable in PID functions. Instead of having a programmer use a tool to figure out what constants to use in the PID function, the computer would recognize what to look for. Now, I do not know when changing the proportional constant is more advantageous than changing the integral, but trying to find an artificial intelligence to decipher this would give me the push to learn PID.
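To put names to those constants, here is a minimal sketch of a discrete PID update in Python. The gains and the toy plant model are invented illustration values, not from any real controller; the sketch only shows what the proportional, integral, and derivative terms each act on.

```python
# Minimal discrete PID sketch (illustrative constants, not from any real controller).
# Kp acts on the current error, Ki on the accumulated error, Kd on its rate of change.

def make_pid(kp, ki, kd, dt):
    state = {"integral": 0.0, "prev_error": 0.0}

    def update(setpoint, pv):
        error = setpoint - pv
        state["integral"] += error * dt
        derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    return update

if __name__ == "__main__":
    # Toy first-order process: PV drifts toward the controller output each step.
    pid = make_pid(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
    pv = 0.0
    for step in range(50):
        out = pid(setpoint=10.0, pv=pv)
        pv += 0.1 * (out - pv)          # crude plant model, purely for demonstration
    print(f"PV after 50 steps: {pv:.2f}")
```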

Now what I see a need for in artificial intelligence is being able to rewrite the program while the program is running. This would be difficult to impossible. So, being an avid OMRON user, I thought about using the DM area. This would be the "brains," and the ladder logic would contain functions that act as the code to interpret the DM area values. Very in-depth programming would have to be done. But I feel this would be a very good challenge for the expert programmer.

If you guys could give me some insight into what I need to look for, and maybe point me to someone who already has some information (I know, I know, I should have looked in a search engine FIRST), I would appreciate it.

Thanks for the time, sorry for the crazy question.

Will Baden
 
I seem to remember that either Fuji or Omron came out with some photo eyes back in the early '90s that had some fuzzy logic built in for tuning and maintaining constant sensitivity. Or was it Fuji temp controls... I'll check back in my files... David
 
I too have thought about this question a fair bit.
One approach I've seen that has had some success has been 'genetic algorithms'.
The basic approach (for tuning a PID loop) is to set up a method of testing the performance against some quantity, let's call it a PV, to minimise or maximise.
You begin with one set of parameters and measure the result with your test criteria. These are copied and saved. You then use a random function to generate a small error in the parameters. These new parameters are then tested. The set with the best performance is kept, the others discarded. The process is then repeated.
Over a period of time, the parameters will tend towards their optimal settings.
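A rough sketch of that copy / mutate / test loop might look like the Python below. The cost function (integrated absolute error against a toy first-order plant) and every number in it are assumptions for illustration only, not a recommended tuning method.

```python
import random

# Sketch of the copy / mutate / test loop described above.
# The "plant" and cost function are toy stand-ins, not a real process model.

def cost(kp, ki):
    """Integrated absolute error of a toy first-order plant under P+I control."""
    pv, integral, total_err = 0.0, 0.0, 0.0
    dt, setpoint = 0.1, 1.0
    for _ in range(200):
        error = setpoint - pv
        integral += error * dt
        out = kp * error + ki * integral
        pv += dt * (out - pv)        # crude plant: PV chases the output
        total_err += abs(error) * dt
    return total_err

best = (1.0, 0.1)                    # starting parameter set
best_cost = cost(*best)

for generation in range(200):
    # Copy the best parameters and add a small random "error" (mutation).
    candidate = tuple(p + random.gauss(0.0, 0.05) for p in best)
    c = cost(*candidate)
    if c < best_cost:                # keep the better set, discard the other
        best, best_cost = candidate, c

print("best Kp, Ki:", best, "cost:", round(best_cost, 3))
```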

Doug
 
The ONLY way to have a "computer", of any kind, teach itself, is to have a computer that has a sense of "History".

Think about it... How is it that you "KNOW" anything? EXPERIENCE, ie, HISTORY!

Fuzzy Logic has NOTHING to do with that. It's a matter of "CAUSE & EFFECT" and the REMEMBRANCE of it!

I'm a great proponent of Fuzzy Logic... but this ain't the place for it... at least, not until you have a computer that has a lot of experience and can then weigh the historical "Cause & Effect" relationships. At that point, you have a computer that is making "Historically, Reasonable Guesses"!
 
As David mentioned, Omron was quite heavily pushing the "fuzzy logic" concept for a while there. I don't know if they are still pushing it though...

Try doing a search for "FZ001" at the Omron web site.


-Eric
 
Thanks Doug that is what I was talking about. But can we go further?

As Terry said, we have to have a history of events. Well, that would be way too big of a history. Why can't we do selective history? As in, not remember the whole process, but instead the most important parts of the process. But it would not stop there. We would then make it learn the process, then switch a whole bunch of variables and let it learn that, and keep on doing this until it is basically building upon itself with a whole bunch of little tidbits of data. This would allow us to reason out the small stuff and let it build upon itself.

Do you think this would work? It would almost have to involve a larger program to sift through what I would want the PLC to keep and what needed to be discarded. But for a very large system, this kind of thinking would be beneficial, because the program would be smaller than the history.
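One way to picture "selective history" is a bounded buffer that only records samples that differ meaningfully from the last one kept. A minimal sketch, with an arbitrary threshold and buffer size chosen purely for illustration:

```python
from collections import deque

# Sketch of "selective history": keep only samples that differ meaningfully
# from the last one we stored, inside a bounded buffer. The threshold and
# buffer size are arbitrary choices for illustration.

class SelectiveHistory:
    def __init__(self, threshold=0.5, maxlen=100):
        self.threshold = threshold
        self.buffer = deque(maxlen=maxlen)   # oldest entries fall off automatically

    def record(self, timestamp, value):
        if not self.buffer or abs(value - self.buffer[-1][1]) >= self.threshold:
            self.buffer.append((timestamp, value))

history = SelectiveHistory()
for t, pv in enumerate([10.0, 10.1, 10.2, 12.5, 12.6, 9.0, 9.1]):
    history.record(t, pv)
print(list(history.buffer))   # only the "important" changes survive
```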

Now Terry, you say I am going down the wrong path? Then may I ask where the path is and has it been traveled upon numerous times with success? Or is it still sketchy?

Thanks for the replies you guys.

Will Baden
 
This ain't new

Fuzzy Logic, Adaptive Control, and Self-Tuning PID loops is purty interesting technologies. Each o' them has, I'm sure, specific successful applicashuns and they has surely got a lot of merit where they is needed. Howsomever, I has seen each o' these technologies in turn touted as "the next big thing" in process control o'er the years, and a lot of trees has given their lives to inform us control guys about the wonderful new way o' solving most ever' problem we has!

And after a few months or years and after a few thousand trees, things go pretty much back to usual fer all 'cept a few specialized applicashuns.

Now, I ain't against new technology - I is in the business o' inventing new technology! I jist want t' make sure thet ya' understand there ain't no magic bullet! There ain't no easy solution t' tough problems! Don't expect more out o' them there fancy algorithms than they can deliver! If yer applicashun needs fuzzy logic, go fer it. But if you is jist tryin' to get out of tuning yer loops, don't bet the ranch on makin' it happen any easy way.
 
See: Synthesis of Fuzzy-Logic Circuits in Evolvable Hardware, NASA Tech Briefs, November 2002. Their website is www.nasatech.com; the pub number is NPO-21095. It also has references to a number of other papers.

Lenny
 
One technology that can fall under the heading of adaptive control is neural networks. Unlike a PID controller, a neural network can frequently control non-linear systems.

In the traditional von Neumann computing scheme, a single powerful processor is given instructions written by a human and converted into electrical signals to execute a program. The concept of "memory" is implemented as a single discrete entity that interfaces with the processor to provide data and program storage.

In the neural network (NN) computing scheme, multiple small processing elements (neurons) are interconnected in a way that allows each of them to have the potential to store data (remember the concept of "memory" above?) and act on both it and inputs to execute a task. A key feature of a NN is that the connections between two neurons are adjustable in gain, and the network can therefore be "trained" using an appropriate training algorithm and an external supply of data. In other words, once you get the hardware set up, the network learns from the data you provide how to perform its task by adjusting its connections in response to the training algorithm. No software necessary. (Keywords for net searches: neural networks, Hebbian learning)
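As a toy illustration of "the connections are the memory," here is a minimal Hebbian weight update in Python. The learning rate, patterns, and network size are arbitrary, and this is nowhere near a practical network; it only shows weights being adjusted by data rather than by hand-written program logic.

```python
# Toy Hebbian update: "neurons that fire together wire together."
# The connection weights ARE the memory; no separate program store.
# Learning rate and patterns are arbitrary illustration values.

def hebbian_train(patterns, n, rate=0.1, epochs=10):
    weights = [[0.0] * n for _ in range(n)]
    for _ in range(epochs):
        for x in patterns:
            for i in range(n):
                for j in range(n):
                    if i != j:
                        weights[i][j] += rate * x[i] * x[j]   # co-activity strengthens the link
    return weights

patterns = [[1, -1, 1], [1, -1, 1], [-1, 1, -1]]   # bipolar toy patterns
w = hebbian_train(patterns, n=3)
for row in w:
    print([round(v, 2) for v in row])
```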


The three basic types of learning that can take place are:

Supervised learning, where the desired output is known for a given input set and the network has to implement the transfer function, perform the proper pattern classification task, or filter noise from a signal or pattern (Keywords for net searches: MLP, backprop, RBF, PNN, GRNN, BAM, Hopfield)

Unsupervised learning, where you don't know a lot about the input data set and the network can adjust itself during training to reveal important features or patterns to you after it finishes learning on its own. (Keywords for net searches: LVQ, SOM, Kohonen)

Reinforcement learning, where a meta-level observer (a human, another NN, another controller) can help the NN learn its task without full knowledge of the relationship between input and output. (Keywords for net searches: DHP, HDP, adaptive critic)


I've only just finished a grad school class on NNs, so I know a little of the background and theoretical considerations, but next to nothing about the practical implementation. I do know they can be used as universal function approximators (got a highly non-linear transfer function you need to implement?), as pattern classifiers for signal and image processing, as virtual instruments that can predict measurements at places you can't put a sensor, as adaptive controllers, among other things.

I think they're particularly interesting from both the theoretical and practical standpoints, and I'm looking forward to my next quarter's class where we focus more on design and implementation issues.
 
At the risk of making this a temporary AI site...

Rytko, what do your instructors say about linearizing the output of a PID? For instance, a hydraulic valve often responds non-linearly to its input, yet it is relatively easy to have a lookup table or formula that takes the output of the PID and compensates for the non-linear valve.
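A minimal sketch of that lookup-table trick, assuming invented breakpoints (a real table would come from the valve's flow curve): take the PID output and interpolate between the nearest breakpoints to get the valve command.

```python
# Sketch of linearizing a non-linear valve with a lookup table and
# straight-line interpolation between breakpoints. The breakpoint values
# here are invented; a real table would come from the valve's flow curve.

# (PID output %, valve command %) pairs, sorted by PID output.
TABLE = [(0, 0), (25, 10), (50, 22), (75, 45), (100, 100)]

def linearize(pid_out):
    pid_out = max(TABLE[0][0], min(TABLE[-1][0], pid_out))   # clamp to table range
    for (x0, y0), (x1, y1) in zip(TABLE, TABLE[1:]):
        if x0 <= pid_out <= x1:
            return y0 + (y1 - y0) * (pid_out - x0) / (x1 - x0)

print(linearize(60))   # interpolates between the 50% and 75% breakpoints
```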

I know this is not as high tech, but this trick is something a PLC may be able to use. I doubt anyone here is going to be designing a NN or fuzzy logic, at least on a PLC.

Good post though, I'll save it.

I am working on a way of autotuning hydraulic systems, and I agree that the non-linearities are a killer. I don't have much faith in the old step-input technique that looks for overshoot, rise times, and settling times.

BTW, there is an 'engineering' meeting at McMenamins at NW 23rd and Vaughn about once a month if you are interested. Most there are integrators or OEMs that program using C or ladder.
 
There is a difference between fuzzy logic and adaptive control.

Fuzzy logic, as I understand it, constantly trims control parameters to best control any given process variable, as explained above. Many single-loop PID controllers (e.g., Honeywell DC3000s) have this "built in" and it is transparent to you and me. I have newer (black) and much older (blue) DC3000s controlling temperature in mechanically identical ink dryers on a printing machine. The blues were OEM in 1987 and the blacks are replacements for blue failures over the years. Both controllers are loaded with the same P, I, and D values and control very well near setpoint, but the newer 3000s reach setpoint from cold much, much quicker. This is the fuzzy logic working, I think (although I don't know how).

Adaptive control, however, adjusts control parameters to suit SPECIFIC changing conditions in a process. Example: an unwinding roll of paper. Web tension, the process variable, is controlled by air-loaded brakes, the control variable. This is done with a PID loop in a PLC. The properties of the full roll are much different from those of the same roll as it approaches the small-diameter core. If you tune the loop for good response at full roll, the gain is too much and tension oscillates as the roll approaches core. So we dynamically reduce the gain of the PID loop as the roll unwinds to get good response at both full roll and core. Also, at roll change, which is on the fly, we have to give the new, full-roll PID output a place to start. This starting brake pressure is determined by remembering the last time this spindle, with its set of brakes, had a full roll on and controlled within a small window around setpoint. This greatly reduces recovery time after the big tension upset of a flying paper splice. The system adapts to speed changes, tension setpoint changes, etc., on the fly, by remembering what worked the last time. You could take it a step further and remember correct brake pressures within specific speed ranges, if it would help.
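A bare-bones sketch of that gain-scheduling idea, with invented diameters and gains (not from any real unwind stand): interpolate the loop gain between a full-roll value and a core value as the roll diameter comes down.

```python
# Sketch of the gain-scheduling idea described above: scale the PID gain
# with roll diameter so the loop stays stable from full roll down to core.
# The diameters and gains are invented numbers, not from any real machine.

FULL_DIAMETER = 1.2     # metres, full roll (assumed)
CORE_DIAMETER = 0.1     # metres, empty core (assumed)
GAIN_AT_FULL = 4.0
GAIN_AT_CORE = 0.5

def scheduled_gain(diameter):
    """Linearly interpolate the proportional gain between core and full roll."""
    diameter = max(CORE_DIAMETER, min(FULL_DIAMETER, diameter))
    fraction = (diameter - CORE_DIAMETER) / (FULL_DIAMETER - CORE_DIAMETER)
    return GAIN_AT_CORE + fraction * (GAIN_AT_FULL - GAIN_AT_CORE)

for d in (1.2, 0.8, 0.4, 0.1):
    print(f"diameter {d:.1f} m -> Kp {scheduled_gain(d):.2f}")
```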

I wonder how the black DC3000 would do in that brake application....
 
baden...

I am not trying to discourage your learning of Fuzzy Logic. As I said before, I am a great proponent for Fuzzy Logic. Learn it! It's great stuff!

For many, your idea is an exercise in futility. Many wonder what it has to do with PLC's. Others think there are more practical ways to attain PID tuning.

I see this as a GREAT exercise in Engineering! Engineering is NOT ONLY about finding the way to employ some idea cheaply and effectively. It is ALSO about DEVELOPMENT!

In the world of Magic and Wonder...
"What if..." is not as powerful as "Open, Sesame!"

However, I feel...
"What if..." is definitely more magical than "Open, Sesame!"


A very interesting aspect of this exercise is that while you are trying to figure out how to get your code to "learn", YOU are trying to figure out how to "teach" it to learn!

That, of course, is the problem that all the guys at MIT are experiencing! Some of the fundamental questions are...
  • What is "Learning"?
  • What does it mean "to Learn"?
  • What makes anything "Capable of Learning"?
  • How does anything "Learn"?
  • What is "Effective Learning"?
  • What is "Teaching"?
  • What does it mean "to Teach"?
  • What makes anything "Capable of Teaching"?
  • How does anything "Teach"?
  • What is "Effective Teaching"?

Sure, we can come up with some "answers" to these questions. But, do those answers contribute to the solution of the problem? I think, not.

The answers to those questions are intuitive to us. Yet, we CAN NOT VERBALIZE those answers in a manner that contributes to the solution! We can only demonstrate! Q.E.D. - Quod Erat Demonstrandum! (Roughly, "that which was to be demonstrated.")

So... Fuzzy Logic...

An interesting thing about Fuzzy is that, of all of the logic-tools out there (Flowcharts, K-Maps, State-Diagrams, etc, etc), Fuzzy is the most Human-like.

While Flowcharts, K-Maps, State-Diagrams, etc, examine conditions in a Black-White manner, Fuzzy can see EVERY** shade of gray in between (**subject to resolution limitations).

Fuzzy can deal with problems in a manner very similar to how Humans deal with problems.

The basis of Fuzzy Logic is the Rule Structure. As long as there is an adequate set of rules for a particular situation, then the subject can make a "Fuzzy-evaluation" followed by a "Fuzzy-decision".

Here's another interesting point...
The "Fuzzy-decision" can result in what we recognize as a Black/White Response (as in "$hit, or get off the pot!") or it can result in a response that is as gray as any of the examined conditions (I'm only 20-feet away from that brick wall, and my speed is 50-Mph... so I should probably step on the brakes quite a bit harder!).

Notice that a Black/White Response ("YES") is implied in the second example.

Fuzzy Logic produces a "Vectored Response". That is, Direction and Magnitude.


|<--------------|--------------|--------------|-------------->|
100%-NO        50%-NO          0%          50%-YES       100%-YES

We would recognize the 100%-NO and the 100%-YES as being Black and White. This also allows a response of... "Yeah, sorta." This might be 50%-YES. It's a "YES", but not a very strong YES. Much like the responses that we come up with, everyday, in our dealings with day-to-day activities.

This scale is used to represent the "weight" of the Input conditions as well as the "weight" of the resulting Output condition. The easiest way to visualize this is with an Analog Input and Analog Output.


OK, so that provides a very general description of the Input & Output conditions. Notice, I did not say RELATIONSHIP. The Relationships are determined by the RULES.

In a typical Fuzzy-System, the Rules are hard-coded. The programmer needs to know all of the situations being controlled and the operational extents of those situations. Knowing that information, the programmer can then design the Rules to produce reasoned results.

Since the Rules are hard-coded, the response will always be the same for a given set of Input conditions (this might, or might not, include feed-back... that depends on how the code was designed.) In this scheme, the Rules don't "Learn". They simply respond as they were designed.
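To make the rule structure concrete, here is a minimal fuzzy sketch in Python built around the brick-wall example above: two fuzzified inputs, three hard-coded rules, and a weighted-average defuzzification. The membership shapes, rule set, and numbers are all invented for illustration.

```python
# Minimal fuzzy-logic sketch built around the brick-wall example above.
# Two inputs (distance, speed), one output (brake effort 0..100 %).
# Membership shapes, rules, and numbers are all invented for illustration.

def ramp_down(x, lo, hi):
    """1 at or below lo, 0 at or above hi, straight line in between."""
    if x <= lo: return 1.0
    if x >= hi: return 0.0
    return (hi - x) / (hi - lo)

def ramp_up(x, lo, hi):
    return 1.0 - ramp_down(x, lo, hi)

def brake_effort(distance_ft, speed_mph):
    # Fuzzify the inputs into "degrees of membership" (shades of gray).
    close = ramp_down(distance_ft, 10, 100)
    fast = ramp_up(speed_mph, 10, 60)

    # Rules: each fires to some degree and recommends an output level.
    rules = [
        (min(close, fast), 100.0),          # close AND fast -> brake hard
        (min(close, 1.0 - fast), 50.0),     # close AND slow -> brake moderately
        (1.0 - close, 0.0),                 # far -> coast
    ]

    # Defuzzify with a weighted average of the recommendations.
    total_weight = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total_weight if total_weight else 0.0

print(round(brake_effort(distance_ft=20, speed_mph=50), 1))   # strong braking
print(round(brake_effort(distance_ft=90, speed_mph=20), 1))   # mostly coasting
```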

In terms of solving the problem, employing Fuzzy Logic (or any logic), at this point, is like taking the test before studying for it.

There are some questions that need to be answered...

Q: Do you want the subject to "learn" how to handle a specific task, in all its KNOWN variations?

A: That can be accomplished using plain Fuzzy.


Q: Do you want the subject to be able to "learn" how to handle ABNORMAL variations that come along? That is, Do you want the subject to be able to find ways around peculiar, unexpected problems as they occur?

A: This is the Holy Grail!


I've seen some attempts at making a C3PO-type droid. These are more impressive for their visible (hardware) attributes than their "intellectual" prowess. In this case, they have a hunk of hardware with many capabilities, but the hardware has no (or very little) impetus to use those capabilities... at least, not yet.

As you mentioned, there is this problem of VOLUME.
How much Historical data is necessary?
How do you "filter" that data?
How do you develop "common sense" out of that data?

I've also seen where engineers are trying to "teach" a mechanical mouse to negotiate a maze. Once the mouse figures out how one maze is laid out, they put the mouse in another maze. Of course the mouse immediately tries to use the path it learned in the previous maze. And, of course, that doesn't work.

The maze patterns are NOT random, at least, not yet. The first several mazes all have the same pattern at the beginning. The mouse repeatedly starts with what it learned. This continues until the maze is changed. The maze changes only as the mouse gets closer to the target. The continued success at the beginning of the maze instills a "sense of confidence" in the mouse.

Meanwhile, the mouse is "building a map". At some point, the Left/Right decision is determined by the mouse's interpretation of the map. That is, in some situations, when the mouse gets to an unexplored area of the maze, the mouse "knows" that turning this-way or that-way will lead to a dead-end... simply by referring to the recorded "geography" - the map.

This brings up the following point...

The mouse "learns" the extents, that is, the limits, of the maze through exploration. It can only "learn" the FULL extent of the maze if it explores, damned near, the entire maze. That does not necessarily require that the mouse explore all of the inner-recesses.

Likewise...

Your Fuzzy-System has to be able to "explore". BUT, and this is a Big But, you sure don't want the Fuzzy-System exceeding the Physical Limitations of the controlled system. So... how do you accomplish that?

I gotta stop before I exceed the Character-Count Limit.
 
Thanks for the great info rytko and Terry. That information will get me on the right track of making small learning programs. I will not try to go for it all, since I have a lot of catch'n up to do.

Again, thanks for the information.

Will Baden
 
