Apparatus and method for semi-automated generation and...

Data processing: software development, installation, and management – Software program development tool – Translation of code

Reexamination Certificate



C717S152000


active

06321376

ABSTRACT:

BACKGROUND OF THE INVENTION
The ability to efficiently migrate programs, programmers, hardware designs and hardware designers among various tools motivates the need to measure the conformity between a specific tool and a specific language standard for which the tool claims conformity. Such conformity assessment is often done by applying many test cases to a tool under test and evaluating the tool's response (black-box testing). Use of tools which conform well and in known ways to a language standard reduces training costs, development costs, maintenance costs, time to market and risk. However, in the current art, apparatus and methods for developing and maintaining the required language conformity tests result in sub-optimal measurement fidelity at undesirably high cost.
A commercially interesting language generally has several hundred lexical and syntactic productions paired with several hundred semantic constraints. For example, a lexical production may characterize an identifier lexical token as having a leading alphabetic character followed by zero or more alphanumeric characters or underscores, such that at most one underscore appears sequentially. Perhaps the identifier then appears in a syntactic production for a variable declaration. The example variable declaration syntax includes one or more identifier lexical tokens, a colon lexical token, an identifier denoting a previously defined type, an equals lexical token and an initial value. The initial value is a non-terminal production defined in terms of other non-terminal productions or lexical tokens (representing syntactic terminals). Examples of semantic constraints include nested block structure, controlling the set of type definitions which are visible, and the compatibility between the type of the declaration and the initial value, perhaps requiring that both are integer type.
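The identifier production described above can be rendered as a small recognizer. The following is an illustrative sketch only; the regular expression and function name are assumptions, not the patent's notation:

```python
import re

# Hypothetical rendering of the example identifier production: a leading
# alphabetic character, then zero or more alphanumerics or underscores,
# with at most one underscore appearing sequentially (no "__").
_IDENT = re.compile(r"[A-Za-z][A-Za-z0-9_]*")

def is_identifier(text: str) -> bool:
    return bool(_IDENT.fullmatch(text)) and "__" not in text
```

Under this reading, `max_value` conforms, while `max__value` (consecutive underscores) and `1max` (leading digit) do not.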
In order to achieve high-fidelity, language conformance testing must consider the cross product of the lexical productions, syntactic productions and semantic constraints. For example, the sequence of test cases applied to a tool under test must not only consider the variable declaration in isolation, but also in a myriad of contexts including nested blocks and other declarations. With tens of lexical productions, hundreds of syntactic productions (some of which are optional or repeated arbitrarily) and hundreds of semantic constraints, one can readily see that generating millions of carefully chosen test cases would be desirable to achieve high-fidelity language conformance testing.
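As a rough illustration of the scale involved (the counts below are assumed for illustration, not taken from the patent), the naive cross product of the production and constraint sets already reaches into the millions before repetition and nesting are considered:

```python
# Assumed, illustrative counts: tens of lexical productions, hundreds of
# syntactic productions, hundreds of semantic constraints.
lexical, syntactic, semantic = 50, 300, 300

# Naive cross product of the three sets, before accounting for optional
# or arbitrarily repeated productions, which only enlarge the space.
combinations = lexical * syntactic * semantic
print(combinations)  # 4500000
```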
Language conformance tests must include both language productions which are specifically allowed and those that are disallowed, so that the conformance testing process can detect both correct language tests which are (improperly) disallowed by a tool under test and incorrect language tests which are (improperly) allowed by a tool under test. In order to evaluate conformance, both sets of tests must be generated such that the conformity of each test to the language standard is known independently of applying the test to any tool under test (classification).
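One way to obtain tests whose classification is known by construction, independent of any tool under test, is to derive conforming cases from a grammar and then systematically damage them. The sketch below is hypothetical; the tiny grammar, token sets and declaration syntax are illustrative only:

```python
import itertools

# Illustrative token sets for the variable-declaration example.
IDENTS = ["x", "y"]
TYPES = ["Integer", "Boolean"]
VALUES = {"Integer": "0", "Boolean": "true"}

def generate_classified_tests():
    """Yield (test_text, is_conforming) pairs whose classification is
    known by construction, without consulting any tool under test."""
    for name, typ in itertools.product(IDENTS, TYPES):
        tokens = [name, ":", typ, "=", VALUES[typ]]
        yield " ".join(tokens), True           # conforming by derivation
        for i in range(1, len(tokens)):
            bad = tokens[:i] + tokens[i + 1:]  # omit one required token
            yield " ".join(bad), False         # nonconforming by construction
```

Each valid declaration yields one positive test and several negative tests, and the True/False label never depends on running a compiler or other tool.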
Current art in the generation of language conformance tests relies largely on manual test case generation and manual classification. Since manual editing and classification of a test case typically requires fifteen minutes to an hour of an expert's time per test, it is seldom economically feasible to generate test suites with more than ten thousand test cases. Despite their cost, such manually generated suites fall substantially short of the millions of test cases required for high-fidelity validation. Such current art is an economic compromise between the desire for millions of carefully chosen test cases and the test case development effort which is economically affordable.
When a test suite is manually generated, encompassing a small fraction of the desired language validation space, a human is unlikely to touch on a significant number of the nonintuitive test cases which may arise during practical use of a tool under test. Humans are not well suited to impose generation rigor spanning thousands of test cases. As a result, test suite fidelity is compromised during manual test case generation.
A useful language standard undergoes periodic revision. Since such revisions alter the language definition, the revisions must be reflected in the associated language conformance test suites in order to maintain a high-fidelity validation suite. The manual effort required to identify and modify test cases impacted by a language revision is significant. Numerous lexical productions, syntactic productions and semantic constraints go into the definition of a single test case. A complete, manually generated cross index of language definition points and test cases is generally not feasible. Maintenance of manually generated suites is thus an expensive process with sub-optimal fidelity.
Manually generated test cases are initially classified by the test case author (good/bad). Such manual classification is refined by iterative application of the manually generated test cases to a sequence of tools under test. Any discrepancies between the manual classification and the tool response must be manually interpreted and considered for test re-classification. Such a process is expensive, error-prone and relies on the availability of many tools under test for a given language standard in order to approach high fidelity. Such a process never directly identifies test cases needed to discriminate between correct and incorrect language which are missing from the manually generated test suite.
In the current art, a single test case may be automatically permuted by character replacement in order to yield a sequence of closely related test cases. For example, a test case may be permuted to write various types and/or values into a file. Such automatic permutation spans only a small space within the set of desired tests; generally a single set of syntactic productions and semantic constraints (common to all permuted tests).
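The character-replacement permutation described above can be sketched as follows. The seed test and the slot being substituted are hypothetical; the point is that every variant shares the same syntactic productions and semantic constraints as the seed:

```python
# Hypothetical seed test case; only the initial value is varied.
SEED = "x : Integer = 0"

def permute(seed, slot, alternatives):
    """Yield variants of `seed` with `slot` replaced by each alternative.
    All variants exercise the same productions as the seed."""
    for alt in alternatives:
        yield seed.replace(slot, alt)

variants = list(permute(SEED, "0", ["1", "-1", "32767"]))
```

The permutation reaches many values, but never a different declaration form, a different nesting context, or a different semantic constraint, which is why its coverage of the desired test space remains small.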
Current state of the art in manual test suite development or automatically permuted test cases results in sub-optimal conformance testing fidelity of a tool under test, high development cost and high maintenance cost. An apparatus and method achieving higher fidelity conformance testing with lower development and maintenance effort, as disclosed in the present invention, are novel and useful.
SUMMARY OF THE INVENTION
The present invention discloses an automated apparatus and method for generating and classifying a set of good and bad language conformity tests from a manually generated, formal specification of the language under test. The resulting tests and classifications are correct by construction, are free from manual development or maintenance effort beyond the formal language specification, and can achieve arbitrarily high test fidelity based solely on the available computational resources.


REFERENCES:
patent: 4729096 (1988-03-01), Larson
patent: 4736320 (1988-04-01), Bristol
patent: 5159687 (1992-10-01), Richburg
patent: 5359546 (1994-10-01), Hayes et al.
patent: 5394544 (1995-02-01), Motoyama et al.
patent: 5426651 (1995-06-01), Van De Burgt
patent: 5504881 (1996-04-01), Sirurget
patent: 5535329 (1996-07-01), Hastings
patent: 5604895 (1997-02-01), Raimi
patent: 5754860 (1998-05-01), Mckeeman et al.
patent: 5768593 (1998-06-01), Walters et al.
patent: 5983012 (1999-11-01), Bianchi et al.
patent: 6117185 (2000-09-01), Schmidt
patent: 6230182 (2001-05-01), Le et al.
patent: 0 204 942 A2 (1986-12-01), None
patent: 0 772 140 A1 (1997-05-01), None
Watanabe & Sakamura, “A specification-based adaptive test case generation strategy for open operating system standards”, Proc. of ICSE-18, pp. 81-89, 1996.*
DeMillo et al., “Experimental results from an automatic test case generator”, ACM Trans. Soft. Engi. and Methodology, vol. 2, No. 2, pp. 109-127, Apr. 1993.*
Memon et al., “Using a goal-driven approach to generate test cases for GUIs”, ACM ICSE, pp. 257-266, 1999.*
Tonndorf, “Ada conformity assessment a model for other programming
