Self-optimizing method and machine

Data processing: generic control systems or specific application – Generic control system – apparatus or process – Optimization or adaptive control

Reexamination Certificate


Details

C706S025000, C706S045000


active

06625500

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field
The invention relates to a self-optimizing method and machine; and more particularly to a multivariable, real-time self-optimizing method and machine for determining, with highly fractionally replicated experiments, substantially true main effects of control variables little contaminated by interactions.
2. Prior Art
Computer-controlled automation systems are widely used. All these systems are extremely powerful. With their use, consistency is generally achieved, together with usually, but not always, associated improved profits, productivity, and product or service qualities (P³Q).
Further, by transferring to machines human intelligence, rather than skill, these systems have ushered us into this Second Industrial Revolution.
But human intelligence or knowledge bases merely document averaged previous test results on old samples with equipment, materials, parts, procedures, or environment different from the present or future. Inherent are errors due to the various samplings, assumptions, or extrapolations, and repeated human interactions. These knowledge bases are often incomplete, inaccurate, biased, erroneous, out-of-date, and too generalized for use in a particular automation task with a specific combination of equipment, procedures, materials, parts, and environment.
Detailed knowledge bases on modern technologies are particularly lacking. Even many old yet important technologies are not fully understood. Each of these technologies involves many processing steps often with unknown chemical, mechanical, electromagnetic, aerodynamic, fluidic, or other phenomena on the subatomic, atomic, microscopic, macroscopic, or other levels. Even the controlling variables in each phenomenon are often not known, and certainly have not been thoroughly studied. In fact, much research and development (R&D) remains to be done in every field. Yet time is running out in this highly competitive world. A new R&D method must, therefore, be developed.
For example, system dynamics of modern processes and equipment are generally ill-defined. Chemical and metallurgical reactions are often uncertain. The mechanism of catalysis is not completely known. Crystal growth is still an art. After millions of controlled welding experiments, the critical welding variables cannot yet be even identified among the very many possible. In the case of the new high-temperature ceramic superconductors, the samples often are still hard to make, shape, purify, reproduce, isolate, stabilize, confirm, or even determine compositions.
Without reliable knowledge bases, the usual automation specialists would be at a loss in selecting the few among many manufacturing or servicing phenomena or variables to be controlled, in formulating the system dynamics models, in setting up the control equations, in determining the control constants, and in specifying setpoints for the control variables.
The fragile and unreliable knowledge bases often give only partial or invalid system dynamics models, oversimplified control equations, and inexact or misleading control constants. In addition, all the setpoints are too arbitrary and round-numbered (e.g., 800° C. and not 796.768° C., 3 feet per minute, 20 gallons) to be possibly optimal statistically. The chance of these variables or setpoints being optimal at any time, not to say instantaneously or continuously, is nearly zero. Further, the optimal setpoints cannot, by definition, be constant, as is assumed in present automation systems, but must change with variations in time, equipment, procedures, materials, parts, and environment.
The conventional automation systems are also not smart and must be spoon-fed at every step via computer programs or master-slave instructions. They are not totally integrated or automated, and require constant human guidance, interaction, supervision, review, and analysis.
The repeated human involvement greatly slows down the conventional automation systems, and passes to machines many defects of the inconsistent and imperfect human test planners, samplers, testers, data collectors and analyzers, communicators, and technicians. Humans are millions or billions of times slower, and far less reliable, than microprocessors in, e.g., memory recall and storage, information input and output, data analysis, communication, and commanding or actuating.
In addition, these present systems usually merely passively adapt, adjust, correct, control, or regulate in response to variations in the environment or a few control variables. Dealing with more than several interacting variables results in an extremely large number of tests to be made, and in a massive amount of data to be collected, conditioned, stored, and quickly or instantly analyzed. This is often impossible because of the well-known problems of “combinatorial explosion” and “computer intractability,” as shown below.
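To make the “combinatorial explosion” concrete, a short back-of-the-envelope calculation (an illustrative sketch, not taken from the patent; the function names are assumptions) compares the run count of a full two-level factorial experiment with that of a saturated, highly fractionated design for several variable counts:

```python
# Illustrative sketch (not from the patent): run counts for a full
# two-level factorial design versus a saturated fractional design.

def full_factorial_runs(k: int) -> int:
    """A full two-level factorial in k variables needs 2**k runs."""
    return 2 ** k

def fractional_runs(k: int) -> int:
    """A saturated two-level fractional design can estimate k main
    effects in the smallest power of two strictly greater than k runs."""
    n = 1
    while n <= k:
        n *= 2
    return n

for k in (7, 60, 127):
    print(k, full_factorial_runs(k), fractional_runs(k))
```

For 7 variables the full factorial needs 128 runs but a fractionated design only 8; for 127 variables the full factorial needs 2¹²⁷ runs, an impossible number, while a saturated design needs only 128.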
Yet, modern technologies invariably involve many unpredictable, interacting, and rapidly changing control variables in such categories as: raw materials, vendors, batches, lots, and conditions; compositioning; processing equipment; procedures in each of the many steps in every process; and environment. Many phenomena are transient but highly nonreproducible yet unknown and critical.
Artificial intelligence (AI) technologies, particularly the expert systems, have been developed and increasingly used in various fields. But again the knowledge bases are often inadequate or deficient, particularly on developing technologies. The present expert systems are also costly, inflexible, qualitative, and often inaccurate and out-of-date particularly for complicated yet rapidly improving modern technologies. In addition, they too cannot handle the inherently large number of interacting variables.
A reliable and relevant knowledge base (KB) is scarce and very costly. Up to now, the main bottleneck in the development of expert systems has been acquiring the knowledge base in computer-usable form. Human knowledge is often not only fragile, costly, and unreliable, but also difficult to translate for use by machines. Even outside real time, codifying an expert's limited knowledge has always been a very long and labor-intensive process, if possible at all.
Hence, experts conclude that machine learning is the key to the future of automation in general and expert systems in particular. The machine must first learn through computerized experimentation involving at least 7 (as in a steel-making example shown later), 60 (as in a software development example shown later), 127 (as in a later case used to estimate the computing time), or several hundred to several thousand (as in screening for drugs or chemicals) variables.
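As an illustration of such highly fractionally replicated experimentation (a hypothetical sketch, not code from the patent; the function names and the simple linear test model are assumptions), the 7-variable case can be handled with a saturated 2^(7-4) two-level design: 7 main effects are estimated from only 8 runs instead of the 2^7 = 128 a full factorial would need:

```python
from itertools import product

# Hypothetical sketch: a saturated 2^(7-4) two-level fractional
# factorial assigns 7 variables (A..G) to 8 runs. Columns D..G are
# generated as sign products of the base columns A, B, C, so all
# seven columns are mutually orthogonal.

def design_2_7_4():
    rows = []
    for a, b, c in product((-1, 1), repeat=3):
        rows.append([a, b, c, a * b, a * c, b * c, a * b * c])
    return rows

def main_effects(design, responses):
    """Estimate each variable's main effect as the difference between
    the mean response at its high (+1) and low (-1) levels."""
    k = len(design[0])
    n = len(design)
    return [
        sum(row[j] * y for row, y in zip(design, responses)) / (n / 2)
        for j in range(k)
    ]
```

With a purely additive response, the orthogonal columns let each main effect be recovered independently from the 8 runs; interactions, when present, alias onto the main-effect columns, which is why the patent stresses designs whose main effects are “little contaminated by interactions.”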
The comprehensive but instantly produced knowledge base, not just a few isolated numbers, must be generated in bulk and at low cost. For best results, the new knowledge base must also be computer-coded in real time into, e.g., expert rules for instant use by another machine. The other machine must also select the most relevant knowledge from the comprehensive results, digest and learn from, e.g., the expert rules, and make logical decisions as to which rules or conditions to apply, for the proper and most efficient use of the knowledge base. This is totally different from merely “telecommunicating” a few isolated optimum values, as disclosed in Li's U.S. Pat. No. 4,472,770. Li's self-optimizing method is much more than the simple telegraphing of words or numbers invented many years ago. So far, however, no such machines exist.
Conventional AI development environments have difficulty producing efficient real-time systems. This is partly because the same code needed to enhance the development environment slows down the system at run time. To overcome this limitation, AI system designers must embed the knowledge base (KB) into their own custom run-time AI shells to achieve real-time performance. Unfortunately, the deeper the KB is embedded in the actual code, the harder the KB is to change when maintenance is necessary. The AI system designer must constantly balance run-time performance against maintainability.

Profile ID: LFUS-PAI-O-3104795
