Automated language and interface independent software...

Data processing: database and file management or data structures – Database design – Data structure types


Details

US Classification: C707S793000

Type: Reexamination Certificate

Status: active

Patent Number: 06763360

ABSTRACT:

BACKGROUND OF THE INVENTION
1. The Field of the Invention
The present invention relates to measuring the reliability of software modules. More specifically, the present invention relates to systems, methods, and computer program products for generating language independent state tables usable when measuring the reliability of different versions of a software module, automatically verifying that a version of a software module performed an action as intended, and performing customized testing functions associated with a version of a software module.
2. Background and Relevant Art
Software vendors often release software products, such as user interfaces and application programs, in a variety of written languages and for use on a variety of computer platforms. This is in large part due to the number of differently configured computer systems operating worldwide in different countries. If a software product is released in a variety of languages and/or for use with a variety of computer platforms, finding and predicting the severity of defects (or “bugs”) in the software product becomes very important.
Many consumers install a wide variety of software products on any given computer system. These software products must interact in a manner that does not corrupt data or cause the computer system to crash. To decrease the chances of such problems occurring, it is beneficial to measure the reliability of software products under a wide array of different circumstances. Thus, versions of software products in different written languages may often be tested for use on a variety of computer platforms.
One conventional approach to measuring software reliability is to hire and train skilled software testers to systematically exercise a software product to find bugs. However, this approach to testing is highly labor intensive, and it is often very expensive to hire and train testers. There is also some likelihood that a tester may miss a critical function of a software product or an interaction between several software products, which may allow a defect to go undiscovered and thus uncorrected.
Another conventional approach to measuring software reliability is to send a partially complete software product out to a large population of users to test the product. However, it is often expensive to create a software product, even one with partial functionality, and distribute it to a large number of users. Additionally, competitors of a software vendor may obtain a pre-release version of the product, thus reducing the product's competitive advantage. There are also logistical problems associated with obtaining any useful information about a software product when test reports are received from a large number of users. Furthermore, similar to skilled software testers, users of a pre-release version of a software product may miss critical functions.
To lessen the chance for human error when testing the reliability of a software product, many vendors began creating batteries of automated software tests. These tests simulate actions that a user may typically cause a software product to perform. However, conventional automated test batteries have some inherent problems: batteries of tests are expensive to create and maintain; they are often self-contained and do not interact with other testing programs; and they often test only a small portion of the software modules included in a software product. Since different batteries of tests are often self-contained and incompatible, batteries for individual software products must be individually coded. This is time consuming, may require substantial technical expertise on the part of the tester, and increases the chance of introducing human error into the testing process.
More recent approaches to testing software reliability use a modular approach where different testing modules perform different functions of the testing process. A state table editor may be used to create databases called state tables, which include object values and actions associated with particular states of a target software product. State tables may be created by entering some information manually and by detecting and retaining other information from an operating system. Detected and retained information, which is stored in the database, may include operating system values, such as the values associated with buttons, scrollbars, and dialog boxes, and user actions, such as mouse clicks and keyboard typing.
A state table contains various entries, each entry representing a state that the software module may potentially enter. For each potential state, there is a list of actions that are available once that state is entered. For each possible action, the state table may include a pointer to the state that the software module should enter if that action is performed, as well as a likelihood that a user would perform that action given that state.
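To make that structure concrete, the following is a minimal sketch in Python, not taken from the patent; the names StateEntry, Action, and their fields are illustrative assumptions about how such an entry might be modeled.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional


@dataclass
class Action:
    """One action available from a state, as described above."""
    name: str                        # e.g. "click_save_button" (hypothetical)
    next_state: Optional[str]        # state the module should enter after the action
    likelihood: float                # probability (0.0-1.0) that a user performs it
    perform: Callable[[], None]      # callback that drives the software module


@dataclass
class StateEntry:
    """One state table entry: a state plus the actions available from it."""
    state_name: str
    actions: List[Action] = field(default_factory=list)


# A state table maps each state name to its entry.
StateTable = Dict[str, StateEntry]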
To test a software module included in a software product, a state table runner “reads” a state table entry corresponding to a particular state and causes the software module to perform an action associated with that state, based on a likelihood percentage assigned to the action. For example, if the state table likelihood for a particular action equals 10%, the state table runner will perform this action one out of every ten times that it enters the particular state. Thus, compared to testing methods employing human testers, functions are less likely to be executed with a higher or lower frequency than they would actually occur. The state table runner is also capable of verifying that a correct result occurred when an action is performed and of detecting the level of failure of an action. As the state table runner performs the actions included in the state table, it may record each action it performs and the result of performing the action.
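The following is a minimal, illustrative sketch of such a runner loop, reusing the hypothetical StateEntry and Action types above: each step selects an action weighted by its recorded likelihood, performs it, and logs the outcome.

import random


def run_state_table(table, start_state, steps=100):
    """Walk the state table, choosing actions by their assigned likelihoods."""
    log = []
    state = start_state
    for _ in range(steps):
        entry = table[state]
        if not entry.actions:
            break
        # An action with likelihood 0.10 is selected roughly one time in ten.
        action = random.choices(
            entry.actions,
            weights=[a.likelihood for a in entry.actions],
        )[0]
        try:
            action.perform()
            result = "ok"
        except Exception as exc:   # record that the action failed and why
            result = f"failed: {exc}"
        log.append((state, action.name, result))
        state = action.next_state or state
    return log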
While conventional modular approaches provide benefits when compared to human testing or hard-coded automated testing methods, they also have certain problems. For a reliability test to provide meaningful results, a state table associated with a software module must be properly configured before a state table runner reads the state table. State table runners rely heavily on text string values included in tested software modules, such as window titles and dialog box contents, when determining if an action performed as intended. In a state table, these text strings are often hard-coded, which may require an individual to manually enter the text strings.
The problem is significantly compounded when versions of a software application are created in different languages. If English, German, and French versions of a software product are to be released, text strings associated with each language may need to be manually entered into separate state tables, with a separate state table dedicated to each language version. Additionally, the need to manually edit an existing state table may arise when developers of a software product change text strings included in the product. Manually entering or editing text string values in a state table increases the operational complexity associated with testing, consumes time, and increases the chance that human error will be introduced into the testing process.
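A small, hypothetical illustration (not from the patent) of why hard-coded text strings force per-language maintenance: the verification string for a single dialog must be entered separately for every language version, so each release language effectively requires its own manually maintained state table.

# Hard-coded, language-specific verification strings (illustrative values).
SAVE_DIALOG_TITLES = {
    "en": "Save As",
    "de": "Speichern unter",
    "fr": "Enregistrer sous",
}


def verify_save_dialog(observed_title, language):
    # The runner compares the observed window title against the hard-coded
    # string; adding a language or changing a title means manually editing
    # every affected state table.
    return observed_title == SAVE_DIALOG_TITLES[language]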
Another problem is that different versions of a software product may exist for different computer platforms. That is, a version of a software product designed for use on mobile devices may operate differently on a cell phone than on a personal digital assistant. Thus, when using conventional modular approaches to test a software product, a separate state table may need to be created for each computer platform compatible with a version of the product. Furthermore, if customized testing functions are to be performed, such as actions not included in the functionality of a generic state table runner, these functions may need to be coded into different versions of a state table runner specifically designed for each computer platform. Coding a state table runner is time consuming and increases the chance of introducing human error into the testing process.
