Title: Method and circuit for analysis of the operation of a...
Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Specific memory composition
Other classes: C714S030000, C714S732000, C714S733000
Type: Reexamination Certificate
Filed: 1999-05-10
Issued: 2003-09-23
Patent number: 06625688
Status: active
Examiner: Portka, Gary (Department: 2188)
TECHNICAL FIELD
The present invention relates generally to microcontrollers. More specifically, the present invention relates to a method for verifying the proper operation of a microprocessor-based control module.
BACKGROUND OF THE INVENTION
The use of electronics in automobiles is continually increasing. Many electronic applications include a microcontroller unit (MCU) comprising a central processing unit (CPU) and associated peripheral devices. The peripheral devices may be specific or customized to the controller application. These can include communication devices such as serial peripheral interfaces; memory devices such as RAM, ROM/FLASH, and EEPROM; timers; power supplies; A/D converters; and other devices, either built on the same integrated circuit or provided as separate devices. The CPU and its peripheral devices are linked together by a communications bus.
An MCU dedicated to the control of one subsystem (such as anti-lock brakes, or ABS) is said to be embedded in that subsystem. When the MCU is part of an application Electronic Control Unit (such as an ABS ECU) that contains interface circuits, for example to aid in the collection of data or to support high-current drive requirements, the combination may be referred to as an embedded controller. The method as described is not limited in use to embedded controllers.
MCUs typically include self-tests to verify the proper operation of the CPU and the associated peripheral devices. Such a self-test may detect illegal memory-access decoding or illegal opcode execution, or may consist of a simple watchdog/computer-operating-properly (COP) check. A mission-critical system requires more fault coverage than this. In a mission-critical system, the correct operation of the CPU and of the MCU's peripherals (such as the timer module, A/D converters, and communication modules) is important for the satisfactory operation of the vehicle. Correct operation of the MCU must be established during the initialization phase following power-on and during repetitive execution of the control program.
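For illustration, the following is a minimal C sketch of the kind of watchdog/COP service loop such a self-test relies on; the register addresses, key values, and names (COP_CTRL, COP_SERVICE) are hypothetical placeholders rather than those of any particular MCU.

```c
#include <stdint.h>

/* Hypothetical memory-mapped COP (computer-operating-properly) registers.
 * Real MCUs differ; these addresses and key values are placeholders. */
#define COP_CTRL    (*(volatile uint8_t *)0x003C)  /* watchdog enable/rate  */
#define COP_SERVICE (*(volatile uint8_t *)0x003E)  /* service (refresh) reg */

static void cop_enable(void)
{
    COP_CTRL = 0x01;            /* enable watchdog with default timeout */
}

/* The watchdog must be refreshed within its timeout window, typically by
 * writing a fixed key sequence.  If the CPU hangs or runs away, the
 * refresh is missed and the COP forces a reset.  Note this only proves
 * the CPU can still reach this code, not that it executes its full
 * instruction set correctly. */
static void cop_service(void)
{
    COP_SERVICE = 0x55;
    COP_SERVICE = 0xAA;
}

void control_loop(void)
{
    cop_enable();
    for (;;) {
        /* ... application algorithm ... */
        cop_service();          /* refresh once per loop iteration */
    }
}
```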
Allowing the device under test (such as the CPU) to test itself is a questionable practice. Test methods that execute while the application algorithm is running will be referred to as “on-line” or “concurrent” testing. “Off-line” testing will refer to the condition in which the device is placed in a special mode that inhibits execution of the application algorithm. Off-line testing is used for manufacturing test or for special-purpose test tools, such as those a technician might use in the field to run unique diagnostic tests. On-line, concurrent testing using redundant software techniques consumes throughput. The ability of the CPU to test its own instruction set with a practical number of test vectors is limited at best.
Tens of thousands of test vectors, generated for manufacturing tests, are required to establish a 99% fault-detection level for complex microcontrollers. Writing routines to test the ability of a CPU to execute its various instructions, using sample data and the instructions under test that will be used in the application, is practically futile.
Even if a separate “Test ROM” were included in the system to either:
1. Generate a special set of inputs and monitor the capability of the CPU and the application algorithm, or a test algorithm, to respond properly; or
2. Generate and inject test vectors derived from manufacturing fault-detection testing, then evaluate the capability of the CPU to process them properly and produce the correct resultant data at circuit-specific observation points (sketched below).
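A minimal sketch of this second approach, assuming a table of stimulus/response vectors derived from manufacturing test; the vector format and the apply_and_observe() hook are hypothetical.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical test-vector record: a stimulus applied to the device and
 * the response expected at a circuit-specific observation point. */
struct test_vector {
    uint32_t stimulus;
    uint32_t expected;
};

/* A (necessarily large) table of vectors derived from manufacturing
 * fault-detection testing would live in the test ROM. */
static const struct test_vector test_rom[] = {
    { 0x00000001u, 0x00000002u },   /* placeholder entries */
    { 0x0000FFFFu, 0x0001FFFEu },
};

/* Hypothetical hook: drive the stimulus through the CPU/application
 * path under test and read back the observation point. */
extern uint32_t apply_and_observe(uint32_t stimulus);

/* Returns the index of the first failing vector, or -1 if all pass. */
int run_test_rom(void)
{
    for (size_t i = 0; i < sizeof test_rom / sizeof test_rom[0]; i++) {
        if (apply_and_observe(test_rom[i].stimulus) != test_rom[i].expected)
            return (int)i;
    }
    return -1;
}
```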
In a complex system, a test ROM would become inordinately large in order to guide the CPU through even a limited number of paths, or “threads,” of the application algorithm. The data used must be carefully selected, which necessitates detailed knowledge of the MCU by the test designer, and MCU suppliers rarely supply sufficient information to allow an effective design. Thus the first test-ROM method would be contrived and limited in its ability to simulate an actual operating environment. If the second technique were employed, then unless all of the manufacturing test vectors were used, the resulting tests would be partial yet lengthy. If an attempt were made to isolate the portion of the system actually used and target it with the proper vectors (to reduce the overall vector quantity), every change to the algorithm would subject the vector subset to further scrutiny and possible modification. The technique would also face implementation difficulties in providing continuous validation of the system in a dynamic run mode of operation, and it does not consider the concept of monitoring a system based on execution “Dwell Time” in any particular software module or on the application's “Run Time Mode” condition.
Modifying the CPU to have built-in self-test (such as parity covering the instruction-set lookup table, or duplication or Total Self Check (TSC) circuit designs for sub-components of the CPU) may result in a significant design modification to a basic cell design. CPU designers are reluctant to modify proven designs for limited applications.
Software techniques that involve time redundancy, such as calculating the same parameter twice via different algorithms, also require that multiple variables be used and assigned to different RAM locations and internal CPU special-function registers. Thus time redundancy also requires hardware-resource redundancy to be effective. Because of the substantial amount of CPU execution time needed for redundancy, the CPU requires excess capacity to accomplish the redundant calculations in a real-time control application. Because of the added complexity of this implementation of redundancy, the verification process is commonly lengthy.
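As a minimal illustration of such time redundancy (the parameter, the two algebraic paths, and the tolerance here are illustrative assumptions, not taken from the method described in this patent), the same average is computed twice by different routes into distinct variables and the results compared:

```c
#include <stdint.h>

/* Time redundancy sketch: the same parameter is computed twice, by
 * algebraically different paths, into two distinct variables, and the
 * results are compared.  A stuck RAM cell, a register fault, or a bad
 * ALU operation along one path shows up as a miscompare.  Note the
 * doubled execution time and the extra variable storage required. */
int redundant_average(int32_t a, int32_t b, int32_t *out)
{
    int32_t avg1 = (a + b) / 2;                 /* path 1: sum, then halve */
    int32_t avg2 = a / 2 + b / 2
                 + ((a % 2) + (b % 2)) / 2;     /* path 2: halve, then sum */

    if (avg1 - avg2 > 1 || avg2 - avg1 > 1)     /* tolerance for rounding  */
        return -1;                              /* miscompare: flag a fault */

    *out = avg1;
    return 0;
}
```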
Requiring the CPU to perform its own self-test is time consuming and inadequate, especially in applications having a relatively large memory and many peripheral devices. To date, the most direct way to solve this problem has been to simply place two microcontrollers into the system. In such systems, each microcontroller is the complement of the other, and each memory and peripheral module is duplicated. Both devices then execute the same code in near lock step. The technique is effective because it checks the operation of one microcontroller against the other. Although the system tests are performed with varied threads through the algorithm, with variable dwell in any portion of the application, and with the random-like data that occurs in the actual application environment, the following must be considered:
1. Data faults or hardware faults that occur propagate into calculated system parameters. In a dual-microcontroller system these parameters may be filtered before they are compared by the second microcontroller; thus parametric faults are “second order” to the data or hardware faults that initially created them.
2. Parameters have to be carefully checked against tolerance ranges.
3. The number of times a miscompare between the two devices must occur before a fault is logged and responded to must be established.
4. The fail-safe software is not independent of the application algorithm. Because adding parameters modifies the application algorithm, the corresponding fail-safe software alterations must also be evaluated.
This technique is not an efficient form of resource allocation: two identical, fully equipped microcontrollers doing the same task are expensive. Also, extensive communication software is needed to synchronize data between the two microcontrollers.
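A minimal sketch of the cross-check each device might run against its partner follows, illustrating the tolerance band of item 2 and the miscompare count of item 3 above; the exchange function, tolerance, and threshold values are all hypothetical.

```c
#include <stdint.h>

#define PARAM_TOLERANCE      4   /* allowed filtered-parameter skew       */
#define MISCOMPARE_THRESHOLD 3   /* consecutive miscompares before fault  */

/* Hypothetical link: send our filtered parameter to the partner MCU and
 * receive its value for the same control cycle. */
extern int32_t exchange_with_partner(int32_t local_value);
extern void    enter_failsafe(void);

static unsigned miscompare_count;

void cross_check(int32_t local_param)
{
    int32_t remote_param = exchange_with_partner(local_param);
    int32_t diff = local_param - remote_param;

    /* Compare against a tolerance band rather than exact equality,
     * since each device filters the parameter independently. */
    if (diff > PARAM_TOLERANCE || diff < -PARAM_TOLERANCE) {
        /* Require several consecutive miscompares before responding,
         * to avoid reacting to transient skew between the devices. */
        if (++miscompare_count >= MISCOMPARE_THRESHOLD)
            enter_failsafe();
    } else {
        miscompare_count = 0;
    }
}
```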
Other dual-microprocessor systems may use a smaller secondary processor to check a few portions of the algorithm, or to perform a control-flow analysis of the main controller to validate its execution from one module to the next and its ability to transfer to and return from subroutines. These schemes are inherently limited and can detect only a small subset of all possible system faults.
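As an illustration of such a control-flow analysis (the checkpoint tokens and the reporting hook are hypothetical assumptions), the secondary processor can verify that the main controller enters its software modules in the expected order:

```c
#include <stdint.h>

/* Hypothetical checkpoint tokens reported by the main controller at the
 * entry of each software module, listed in expected execution order. */
enum checkpoint { CP_INPUT = 1, CP_COMPUTE, CP_OUTPUT };

extern void report_flow_fault(void);

/* Runs on the secondary processor: verify the main controller moves
 * from one module to the next in the expected order.  This catches
 * gross sequencing faults but, as noted above, only a small subset
 * of all possible system faults. */
void check_flow(uint8_t token)
{
    static uint8_t expected = CP_INPUT;

    if (token != expected) {
        report_flow_fault();
        expected = CP_INPUT;       /* resynchronize after a fault */
        return;
    }
    expected = (token == CP_OUTPUT) ? CP_INPUT : (uint8_t)(token + 1);
}
```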
A common technique for verifying
Inventors: Fruehling, Terry L.; Helm, Troy L.; Waidner, John
Attorneys: Chmielewski, Stefan V.; Funke, Jimmy L.
Examiner: Portka, Gary