Reexamination Certificate
1997-11-12
2001-05-22
Powell, Mark R. (Department: 2122)
Data processing: software development, installation, and management
Software program development tool
Translation of code
C702S182000
active
06237138
ABSTRACT:
FIELD OF THE INVENTION
This invention broadly relates to computer methods, systems and computer program products for usability testing of computer software applications. The invention more particularly relates to methods, systems and software for the retrospective capturing of screen displays for evaluation of applications, including graphical user interfaces, during the use of the applications. The invention extends to the remote capturing of such screen displays, together with user comments, so as to test, amongst other aspects, the usability of applications remotely.
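By way of illustration only, the retrospective capture described above can be pictured as a fixed-size buffer of recent screen displays that is flushed out, together with a user comment, only when the user flags a critical incident. The following Python sketch is a hypothetical reading of that idea, not the patented implementation; the names RetrospectiveCapture, capture_frame and flag_incident are invented for illustration, and plain strings stand in for captured screen images.

    from collections import deque
    from datetime import datetime, timezone

    class RetrospectiveCapture:
        def __init__(self, max_frames=30):
            # A bounded deque discards the oldest frame once full, so the
            # buffer always holds only the most recent screen displays.
            self._frames = deque(maxlen=max_frames)

        def capture_frame(self, screen_image):
            # In a real tool this would be an actual screenshot.
            self._frames.append((datetime.now(timezone.utc), screen_image))

        def flag_incident(self, user_comment=""):
            # Invoked when the user indicates a problem: package the
            # buffered history and the comment for remote analysis.
            return {
                "flagged_at": datetime.now(timezone.utc).isoformat(),
                "comment": user_comment,
                "frames": list(self._frames),
            }

    if __name__ == "__main__":
        capture = RetrospectiveCapture(max_frames=5)
        for step in range(12):  # simulate an ongoing session
            capture.capture_frame(f"screen state {step}")
        report = capture.flag_incident("Save dialog froze after clicking OK")
        print(report["comment"], "-", len(report["frames"]), "frames retained")

Because the buffer is bounded, memory use stays constant however long the session runs, which is what makes retrospective (rather than continuous) capture practical on an end user's machine.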
BACKGROUND OF THE INVENTION
It is common and indisputable that most users of computing systems encounter various sorts of problems while using software products. Designing computer software that will minimize user problems requires knowledge of the user and of the task the user is trying to accomplish. The design process typically includes iterative design and user testing of those designs in order to identify where users have problems so that design flaws can be accurately addressed and fixed. A thorough knowledge of the problems that users encounter in existing products is also critical if design mistakes are to be avoided in future versions of those products. It may also be advantageous to a particular software design and manufacturing company to capitalize on the design flaws of its competitors' products, thereby providing the opportunity for a competitive advantage in product usability.
Currently, user testing methods for gathering information regarding usability problems with software applications, for analysis by user-centred design practitioners or experts, typically fall into three categories.
The first pertains to field testing, where a trained usability tester is deployed at a customer site to observe users doing real work with computer applications. Typically the usability tester keeps notes of critical incidents or potential usability problems observed while the users perform their work. Obvious problems with this method of gathering product data are that it is labour and resource intensive, limited to a small sample of user tasks, and intrusive in the sense that an outside party is observing work being done within a customer location. This method is seldom used in practice.
A second common method of gathering information is the laboratory evaluation. Typically, users are recruited and brought into a usability lab, usually located at the site of the software designer. Users are then requested to perform a series of prescribed tasks, and the trained usability testers note the users' problems while using the software (i.e. critical incidents) and manually record a description of each problem. With mutual consent, the sessions can be videotaped so that the specific interaction with the application product can be replayed for detailed analysis by the usability tester at a later time. The problems with this method of gathering product information are that it is labour intensive, expensive and difficult to set up, the subjects must be recruited, collecting the data is time consuming, and the data obtained are often limited in that the tasks, situation and environment are somewhat artificial and specifically prescribed.
Another commonly used method for obtaining usability information is beta testing, where an early version of a software product is made available to a number of beta evaluation participants. This method is currently used by most software design and manufacturing companies. Feedback on the products is gathered from the users in conference calls or surveys. Users can also report specific problems to the manufacturer in electronic forums. The shortcomings of this method are that the responsibility for problem identification and reporting is left primarily to the users, and that the manufacturer must rely on users to document all problems encountered, even though users may be unable to articulate some specific problems.
Thus, in general, the labour costs of traditional usability evaluation methods (including recruiting, setup, running evaluation sessions, and recording and analyzing data) to record critical incidents and determine usability problems mean that testing is expensive and only minimal testing can be carried out within practical time and cost constraints. The scope of real-life user work actually covered for a particular application is quite limited with methods that require a usability expert or professional to be present. It has been found that when the professional is not present during the actual evaluation session, the critical incidents reported by users in the field typically represent only a very small proportion of all the major and minor problems actually encountered.
It is also recognized that the users employed in the above-described evaluation sessions are not trained as usability experts or professionals, nor would the software design and manufacturing company want this to be the case. As a result, users often blame themselves when the application does not work, have trouble articulating and documenting the problems they encounter, do not remember every problem they encounter, or simply do not want to identify problems. Thus any process for the collection of evaluation data pertaining to a particular application must minimize the impact of the above shortcomings and provide users with simple and convenient means of indicating critical events and of returning product-related data for analysis. It is of course desirable that the information returned be useful: it should identify all major user problems, identify a high percentage of existing problems quickly, contain a low percentage of false problems, and be readily usable for program design enhancement.
A number of prior art references and documents that generally relate to known evaluation tools and product data gathering have been identified.
U.S. Pat. No. 5,457,694, which issued Oct. 10, 1995 to Dale J. Smith and is entitled “Method and Apparatus for Analyzing the ATA (IDE) Interface”, relates to a bus analyzer for analyzing and troubleshooting the ATA bus commonly used in personal computers to interface to hard disk drives. The analyzer records the events occurring on a computer system and provides easily understandable yet detailed descriptions of those events to a technician for the purpose of diagnosing a computer system related problem or for measuring system performance. The analyzer described is portable, simple to operate and capable of transferring recorded data to almost any computer through a particular serial port. The analyzer has a memory system which stores recorded signals or events and a trigger circuit to select the starting and stopping points of the recording. The data recorded are digital signals from the system; the events captured relate to digital system performance and are presented to a specialist via a display.
IBM Technical Disclosure Bulletin, Vol. 38, No. 2, Feb. 1995, pp. 377-375, entitled “User Documentation Tool Update Facility”, describes a tool that can be used by a software developer to create on-line help user guides. The user documentation is generated by recording how users perform application tasks; after the application is changed, the recorded tasks are replayed to verify or check for application changes. The tool records snapshots of application screens as a user familiar with the application demonstrates how to perform typical tasks. The user can directly control which snapshots are saved and which are not. The images captured are discrete static screen captures, and the tool provides the ability to record and replay a user's interaction with an application.
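The record-and-replay idea attributed to that tool can be sketched as follows, under the assumption that saved snapshots can simply be compared for equality against freshly taken ones; the names TaskRecorder, record and replay_and_verify are hypothetical, and strings again stand in for screen images.

    class TaskRecorder:
        def __init__(self):
            self.saved = []  # (step name, snapshot) pairs the user chose to keep

        def record(self, step_name, snapshot, keep=True):
            # The user directly controls which snapshots are saved.
            if keep:
                self.saved.append((step_name, snapshot))

        def replay_and_verify(self, take_snapshot):
            # After the application changes, re-take each saved snapshot and
            # report the steps whose screens no longer match the recording.
            return [step for step, old in self.saved if take_snapshot(step) != old]

    if __name__ == "__main__":
        recorder = TaskRecorder()
        recorder.record("open file dialog", "dialog v1")
        recorder.record("splash screen", "splash v1", keep=False)  # discarded
        # Simulate the application after a change to the file dialog.
        changed = recorder.replay_and_verify(lambda step: "dialog v2")
        print("steps needing documentation updates:", changed)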
The publication by H. R. Hartson et al. entitled “Remote Evaluation: The Network as an Extension of the Usability Laboratory”, Proceedings of CHI '96, the Computer Human Interaction Conference, Apr. 13-18, 1996, describes a co
Hameluck Don E.
Velocci Vince V.
Doubet Marcia L.
Herndon Jerry W.
International Business Machines Corp.
Nguyen-Ba Hoang-Vu Antony
Powell Mark R.