Method for selecting medical and biochemical diagnostic...

Data processing: artificial intelligence – Neural network



Details

C702S082000, C702S121000, C702S123000

Reexamination Certificate

active

06678669

ABSTRACT:

The subject matter of each of the above-noted applications and provisional application is herein incorporated in its entirety by reference thereto.
COMPUTER PROGRAM LISTING APPENDIX ON COMPACT DISK
Three Computer Appendices containing computer program source code for programs described herein have been submitted concurrently with the filing of this application. The Computer Appendices were converted to Computer Program Listing Compact Disk Appendices pursuant to 37 C.F.R. 1.96(c). Appendices I, II, and III are on compact disks, copy 1 and copy 2, stored under the file name AppendixI-III.txt, 392 KB, created on Apr. 6, 2001. The compact disks, copy 1 and copy 2, are identical. The information submitted on the compact disks is in compliance with the American Standard Code for Information Interchange (ASCII) in the IBM-PC machine format compatible with the MS-Windows operating system. The Computer Appendices, which are referred to hereafter as the "Compact Disk Appendices", are each incorporated herein by reference in its entirety.
Thus, a portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
FIELD OF THE INVENTION
This subject matter of the invention relates to the use of prediction technology, particularly nonlinear prediction technology, for the development of medical diagnostic aids. In particular, training techniques operative on neural networks and other expert systems with inputs from patient historical information for the development of medical diagnostic tools and methods of diagnosis are provided.
BACKGROUND OF THE INVENTION
Data Mining, Decision-Support Systems and Neural Networks
A number of computer decision-support systems have the ability to classify information and identify patterns in input data, and they are particularly useful in evaluating data sets having large quantities of variables and complex interactions between variables. These computer decision systems, collectively identified as "data mining" or "knowledge discovery in databases" (and herein as decision-support systems), rely on similar basic hardware components, e.g., personal computers (PCs) with a processor, internal and peripheral devices, memory devices and input/output interfaces. The distinctions between the systems arise within the software and, more fundamentally, in the paradigms upon which the software is based. Paradigms that provide decision-support functions include regression methods, decision trees, discriminant analysis, pattern recognition, Bayesian decision theory, and fuzzy logic. One of the more widely used decision-support computer systems is the artificial neural network.
Artificial neural networks or "neural nets" are parallel information processing tools in which individual processing elements called neurons are arrayed in layers and furnished with a large number of interconnections between elements in successive layers. The functioning of the processing elements is modeled to approximate biological neurons, where the output of a processing element is determined by a typically non-linear transfer function. In a typical model for neural networks, the processing elements are arranged into an input layer of elements which receive inputs, an output layer containing one or more elements which generate an output, and one or more hidden layers of elements therebetween. The hidden layers provide the means by which non-linear problems may be solved. Within a processing element, the input signals to the element are weighted arithmetically according to a weight coefficient associated with each input. The resulting weighted sum is transformed by a selected non-linear transfer function, such as a sigmoid function, to produce an output, whose values range from 0 to 1, for each processing element. The learning process, called "training", is a trial-and-error process involving a series of iterative adjustments to the processing element weights. The weights are adjusted so that a particular processing element provides an output which, when combined with the outputs of other processing elements, minimizes the error between the outputs of the neural network and the desired outputs as represented in the training data. Adjustment of the element weights is triggered by error signals. Training data are described as a number of training examples in which each example contains a set of input values to be presented to the neural network and an associated set of desired output values.
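The weighted-sum-and-transfer behavior of a single processing element described above can be sketched in a few lines of Python. This is an illustrative sketch only; the function names, the bias term, and the example weights are hypothetical and are not taken from the patent or its appendices.

    import math

    def sigmoid(x):
        # Non-linear transfer function; output values range from 0 to 1.
        return 1.0 / (1.0 + math.exp(-x))

    def processing_element(inputs, weights, bias=0.0):
        # Weight each input arithmetically by its coefficient, sum,
        # then apply the non-linear transfer function.
        weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
        return sigmoid(weighted_sum)

    # A two-input element with illustrative weight coefficients.
    output = processing_element([0.5, 0.9], [0.4, -0.6])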
A common training method is backpropagation or "backprop", in which error signals are propagated backwards through the network. The error signal and the error gradient are used to determine how much any given element's weight is to be changed, with the goal of converging to a global minimum of the mean squared error. The path toward convergence, i.e., the gradient descent, is taken in steps, each step being an adjustment of the input weights of the processing element. The size of each step is determined by the learning rate. The slope of the gradient descent includes flat and steep regions, with valleys that act as local minima, giving the false impression that convergence has been achieved and leading to an inaccurate result.
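A minimal sketch of one such step, assuming a single weight and a known error derivative (the function name and the learning-rate value are illustrative placeholders):

    def gradient_descent_step(weight, error_gradient, learning_rate=0.1):
        # Move the weight against the error gradient; the learning rate
        # sets the step size. Too large a rate can overshoot a minimum,
        # too small a rate makes convergence slow.
        return weight - learning_rate * error_gradient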
Some variants of backprop incorporate a momentum term in which a proportion of the previous weight-change value is added to the current value. This adds momentum to the algorithm's trajectory in its gradient descent, which may prevent it from becoming "trapped" in local minima. One backpropagation method which includes a momentum term is "Quickprop", in which the momentum rates are adaptive. The Quickprop variation is described by Fahlman (see "Fast Learning Variations on Back-Propagation: An Empirical Study", Proceedings of the 1988 Connectionist Models Summer School, Pittsburgh, 1988, D. Touretzky et al., eds., pp. 38-51, Morgan Kaufmann, San Mateo, Calif.; and, with Lebiere, "The Cascade-Correlation Learning Architecture", Advances in Neural Information Processing Systems 2 (Denver, 1989), D. Touretzky, ed., pp. 524-32, Morgan Kaufmann, San Mateo, Calif.). The Quickprop algorithm is publicly accessible and may be downloaded via the Internet from the Artificial Intelligence Repository maintained by the School of Computer Science at Carnegie Mellon University. In Quickprop, a dynamic momentum rate is calculated based upon the slope of the error gradient. If the current slope is smaller than, but has the same sign as, the slope following the immediately preceding weight adjustment, the weight change accelerates; the acceleration rate is determined by the magnitude of successive differences between slope values. If the current slope is in the opposite direction from the previous slope, the weight change decelerates. The Quickprop method improves convergence speed, giving the steepest possible gradient descent while helping to prevent convergence to a local minimum.
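The per-weight update rule described above can be sketched as follows. This is an assumption-laden illustration, not the repository code: the function name is hypothetical, and the epsilon and mu defaults are values commonly cited for the algorithm, used here as placeholders.

    def quickprop_step(weight, slope, prev_slope, prev_step,
                       epsilon=0.5, mu=1.75):
        # slope and prev_slope are the current and previous error
        # derivatives dE/dw; prev_step is the previous weight change.
        denom = prev_slope - slope
        if prev_step != 0.0 and denom != 0.0:
            # Treat the two successive slopes as points on a parabola
            # and jump toward its minimum. Same-sign slopes that shrink
            # slowly accelerate the step; an opposite-sign slope
            # reverses and damps it.
            step = prev_step * slope / denom
            # Cap growth at mu times the previous step so the
            # acceleration cannot diverge.
            limit = mu * abs(prev_step)
            if abs(step) > limit:
                step = limit if step > 0 else -limit
        else:
            # No usable history: fall back to a plain gradient step.
            step = -epsilon * slope
        return weight + step, step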
When neural networks are trained on sufficient training data, the neural network acts as an associative memory that is able to generalize to a correct solution for sets of new input data that were not part of the training data. Neural networks have been shown to operate even in the absence of complete data or in the presence of noise. It has also been observed that the performance of the network on new or test data tends to be lower than the performance on training data; the difference indicates the extent to which the network was able to generalize from the training data. A neural network, however, can be retrained and thus learn from the new data, improving the overall performance of the network.
Neural nets thus have characteristics that make them well suited to a large number of different problems, including areas involving prediction, such as medical diagnosis.
Neural Nets and Diagnosis
