Artificial neural network voice coil motor controller

Data processing: artificial intelligence – Neural network – Learning task


Details

Type: Reexamination Certificate
Status: active
Patent number: 06629089
US classifications: C706S015000, C706S022000, C706S903000, C700S048000, C700S045000, C700S037000, C360S075000, C369S044250

FIELD OF THE INVENTION
The present invention is generally related to the field of Artificial Neural Network-based control systems. The invention provides a method of Artificial Neural Network training without the need for an offline learning phase and training vectors.
BACKGROUND OF THE INVENTION
Moore's law has long been cited as the prime mover in the information technology revolution. Now this revered law may have to share the spotlight with another, equally important phenomenon: the amazing growth of areal densities in the storage industry.
Driven by the increasingly rich content of files, easy access to Internet downloads and data collected by corporate web sites (not to mention the pervasive reluctance among users to delete old e-mail files), the need for storage capacity is growing at rates estimated as high as 100% per year.
Luckily for corporate space planners, areal density is keeping pace with the need.
Data Storage Magazine reports annual increases of 60%, with 100% increases projected in the near future. Maintaining such increases in a highly price-competitive environment puts tremendous pressure on every aspect of disk drive technology. Given the relentless market pressure to increase areal density by at least 60% annually and the problems associated with increasing linear bit densities (bpi), considerable interest has been focused on radial track density (tpi).
Magnetic disk drive motion control design has been dominated by classical control techniques. While this tradition has produced many effective control systems, recent developments in radial track density and micro-actuators have driven a growing interest in robust, nonlinear, and artificial intelligence control.
Disk drives store data on constantly spinning disks made of aluminum or glass and coated on both sides with a thin film of magnetic material. Magnetic read/write heads store data on both sides of the disk by magnetizing very small areas on the disk surface in closely spaced, concentric tracks. A positioning device called an actuator, or voice coil motor, moves the heads rapidly from one track to another under the direction of the servo control system. An example of such Prior Art disk drive architecture is disclosed, for example, in C. Denis Mee, Magnetic Recording, McGraw-Hill Inc., 1988, incorporated herein by reference.
The expected increase in track density places an extra burden on the servo control system, which must hold the off-track motion of the heads within increasingly tight limits for errorless reading and writing. This limit, known as track mis-registration (TMR), amounts to about 10% of the total track width. At today's average track density of 25,000 tracks per inch, the TMR budget is approximately four micro-inches. This implies eliminating, as much as possible, any disturbance that might cause the head to move off track.
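As a check on the figures quoted above, and treating the track width as equal to the track pitch (an assumption made only for illustration), a short Python sketch reproduces the arithmetic:

    # TMR budget from the numbers cited in the passage above.
    tracks_per_inch = 25000                      # today's average track density
    track_pitch_uin = 1e6 / tracks_per_inch      # track pitch in micro-inches (40)
    tmr_budget_uin = 0.10 * track_pitch_uin      # about 10% of the track width (4)
    print(track_pitch_uin, tmr_budget_uin)       # 40.0 4.0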
The major causes of off-track disturbances include non-repeatable runout from the spindle bearing; residual vibrations due to actuator modes; servo writer errors; nonlinear friction effects from the VCM bearings; slipped disks, leading to repeatable runout; and casting warpage.
The servo bandwidth frequency provides a measure of how well the control system will dampen the effects of off-track disturbances.
Voice coil motor control can be classified into two basic problems: tracking a reference trajectory (seeking) and track following. Several linear controllers as well as nonlinear controllers have been proposed in the Prior Art for solving these problems. The main idea behind these control systems is to achieve suitable bandwidth to obtain the track density required by the marketplace.
Artificial Neural Networks are known in the art. Although Artificial Neural Networks have been around since the late 1950's, it wasn't until the mid-1980's that algorithms became sophisticated enough for general applications. Today, Artificial Neural Networks are being applied to an increasing number of real-world problems of considerable complexity. They are good pattern-recognition engines and robust classifiers, with the ability to generalize in making decisions about imprecise input data. They offer ideal solutions to a variety of classification problems such as speech, character and signal recognition, as well as functional prediction and system modeling where the physical processes are not understood or are highly complex.
Artificial Neural Networks may also be applied to control problems, where the input variables are measurements used to drive an output actuator, and the network learns the control function. The advantage of Artificial Neural Networks lies in their resilience against distortions in the input data and their capability of learning. They are often good at solving problems that are too complex for conventional methods and are often well suited to problems that people are good at solving, but for which traditional methods are not.
In its most general form, an Artificial Neural Network is a machine that is designed to model the way in which the brain performs a particular task. Artificial Neural Networks are collections of mathematical models that emulate some of the observed properties of biological nervous systems and draw on the analogies of adaptive biological learning. The key element of the Artificial Neural Network paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements that are analogous to neurons and are tied together with weighted connections that are analogous to synapses.
Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of Artificial Neural Networks as well. Learning typically occurs by example through training, or exposure to a training set of input/output data where the learning algorithm iteratively adjusts the connection weights. These connection weights store the knowledge necessary to solve specific problems.
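As a hedged illustration of such iterative weight adjustment, the sketch below applies the classic delta rule to a single linear neuron; this is generic supervised learning, not the online training method claimed in this patent, and all names, data, and parameters are hypothetical:

    # Minimal sketch: supervised adjustment of connection weights (delta rule)
    # for a single linear neuron. Data and learning rate are placeholders.
    import random

    def train(samples, learning_rate=0.05, epochs=200):
        n_inputs = len(samples[0][0])
        weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
        bias = 0.0
        for _ in range(epochs):
            for inputs, target in samples:
                output = sum(w * x for w, x in zip(weights, inputs)) + bias
                error = target - output
                # Each connection weight moves in proportion to its input and the error.
                weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
                bias += learning_rate * error
        return weights, bias

    # Hypothetical training set: examples of y = 2*x1 - x2 + 1.
    data = [((x1, x2), 2 * x1 - x2 + 1) for x1 in (0.0, 0.5, 1.0) for x2 in (0.0, 0.5, 1.0)]
    print(train(data))   # weights approach [2, -1], bias approaches 1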
There are many different types of Artificial Neural Networks. Some of the more common include the multilayer perceptron which is generally trained with the back-propagation of error algorithm, learning vector quantization, radial basis function, Hopfield, and Kohonen, to name a few. Some Artificial Neural Networks are classified as feedforward while others are recurrent depending on how data is processed through the network.
Another way of classifying Artificial Neural Network types is by their method of learning, as some Artificial Neural Networks employ supervised training while others are referred to as unsupervised or self-organizing. Supervised training is analogous to a student being guided by an instructor. Unsupervised algorithms essentially perform clustering of the data into similar groups based on the measured attributes or features serving as inputs to the algorithms. This is analogous to a student who derives the lesson totally on his or her own. Artificial Neural Networks can be implemented in software or in specialized hardware.
FIG. 1 illustrates the general architecture of a two-layer artificial neural network. The left layer represents the input layer, in this case with three input nodes 110, 120, and 130 receiving inputs X1 through X3. The middle layer is called the hidden layer, with five nodes 140, 150, 160, 170, and 180. It is this hidden layer which performs much of the work of the network. The output layer in this case has one node 190 outputting signal Y1, representing output values determined from the inputs.
Each node 140, 150, 160, 170, and 180 in the hidden layer may be fully connected to the input nodes 110, 120, and 130. That means what is learned in a hidden node is based on all the inputs taken together. This hidden layer is where the network “learns” interdependencies in the model.
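For concreteness, a minimal sketch of the fully connected two-layer topology just described, with three inputs, five hidden nodes, and one output; the random placeholder weights and the sigmoid activation are assumptions for illustration only, not taken from the patent:

    # Sketch of the FIG. 1 topology: 3 input nodes, 5 hidden nodes, 1 output node.
    import math, random

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def forward(x, hidden_w, output_w):
        # Every hidden node sees all three inputs (fully connected).
        hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in hidden_w]
        return sigmoid(sum(w * h for w, h in zip(output_w, hidden)))

    hidden_w = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(5)]   # 5 x 3 weights
    output_w = [random.uniform(-1, 1) for _ in range(5)]                       # 1 x 5 weights
    print(forward([0.2, 0.5, 0.8], hidden_w, output_w))    # Y1 computed from inputs X1..X3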
FIG. 2 provides some detail into what goes on inside a hidden node. As illustrated in FIG. 2, a weighted sum 210 may be performed as follows: X1 times W1, plus X2 times W2, and so on through X3 times W3.
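A single hidden node's computation, as just described, thus reduces to a weighted sum of its inputs followed by an activation; the sigmoid used in the sketch below is a common choice but is not specified in the excerpt above, and the numeric values are placeholders:

    # Inside one hidden node: weighted sum X1*W1 + X2*W2 + X3*W3, then an activation.
    import math

    def node_output(inputs, weights):
        weighted_sum = sum(x * w for x, w in zip(inputs, weights))
        return 1.0 / (1.0 + math.exp(-weighted_sum))   # sigmoid squashing (assumed)

    print(node_output([0.2, 0.5, 0.8], [0.4, -0.6, 0.9]))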
