Computer graphics processing and selective visual display system – Computer graphics processing – Animation
Reexamination Certificate
1998-02-05
2002-03-19
Nguyen, Phu K. (Department: 2671)
active
06359621
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a data converting device that encodes time series data, such as movement (motion) data of an object, efficiently stores the encoded data, edits the encoded data, and reuses the edited data, and to a method thereof.
2. Description of the Related Art
In recent years, virtual characters (simply referred to as characters hereinafter), such as human or animal characters, have been frequently used in computer graphics (CG), video games, animations, and movies. The demand for expressing various movements, such as running and standing up, by freely moving a character displayed on a screen is on the rise.
To meet such a demand, techniques for generating and editing movement data of a character's body by using a computer have been developed. Here, the movement data corresponds to time series data such as joint angles of the body and positions of the hands and head, measured, for example, with a movement measurement technique such as a motion capture system.
Methods for generating movements of a character appearing on a screen by using a computer fall into the following three major types.
(1) the method of generating movements of a character model by developing an algorithm that produces natural, lifelike movements
(2) the method of generating key still pictures at suitable time intervals with CG, and automatically generating the intermediate pictures by computer, as in the conventional animation production process
(3) the method of capturing actual movements of a living thing, such as a human being, as movement data with a measurement device, and making a three-dimensional (3D) character model play back the measured movement data
The first method allows various movements to be generated arbitrarily if an algorithm that produces natural, lifelike movements can be developed. This method also has the great advantage of being able to cope with a change of a physical characteristic such as the body size of a character. However, developing an effective algorithm is itself extremely difficult, and the method has the disadvantage of requiring a precise dynamics model of the character. At present, therefore, algorithms and dynamics models have been developed only for limited movements, and further study is needed before various movements can be generated.
The second method is the most frequently used at present. It requires an enormous number of still pictures for each type of movement, which demands much time and labor, and whether natural, lifelike movements can be expressed depends on the skill of the image generator.
With the third method, actual movements of a living thing are measured and the measured result is played back by using a computer, so that a character makes natural movements like the living thing. With this method, however, each actual movement, such as walking or running, must be measured and recorded, because an unrecorded movement cannot be reproduced.
Accordingly, to make a character perform various movements, a large number of pieces of movement data must be measured and recorded beforehand, and a huge storage capacity is required to store them. Moreover, because the measured movement data depends on the size of the body that actually makes the movement and on the situation, the movement data must be measured again if the character or the situation changes. As described above, the third method has the problems that movement data requires a huge storage capacity and that the reusability of the movement data is low.
SUMMARY OF THE INVENTION
An object of the present invention is to provide a data converting device, and a method thereof, for efficiently storing movement data and improving its reusability in a playback technique that makes a virtual character play back movements based on measured movement data.
In a first aspect of the present invention, the data converting device is implemented by using a computer, and comprises an approximating unit and a storing unit. The data converting device stores time series movement data obtained by measuring a movement of an arbitrary object including an arbitrary living thing, and performs information processing by using the movement data.
The approximating unit approximately represents the movement data with a weighted addition of arbitrary basis functions, and generates a weighting coefficient array of the weighted addition. The storing unit stores the weighting coefficient array as code data corresponding to the movement data.
The approximating unit represents the movement data approximately with the weighted addition by using smooth, locally supported basis functions such as B-spline functions. Additionally, the approximating unit stores the weighting coefficient of each basis function in the storing unit as a discrete code representing the movement data.
If the weighted addition of the basis functions is performed by using the code data stored in the storing unit as weighting coefficients, the original movement data can be easily restored. The shape of the basis function is predetermined, and does not change depending on the shape of input movement data. Therefore, if the data representing a set of required basis functions is stored, it can be used for restoring arbitrary movement data.
Normally, the weighting coefficient array can be represented by an amount of data much smaller than that of the measured movement data. Accordingly, the storage capacity can be significantly reduced in comparison with the case in which the movement data is stored unchanged.
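As an illustration only, and not the patent's own implementation, the following Python sketch encodes one joint-angle curve as a B-spline weighting coefficient array by least squares and restores it by a weighted addition of the same fixed basis functions. The basis count, the clamped uniform knot layout, and the synthetic sine-wave data are assumptions made for the example.

```python
import numpy as np

def cox_de_boor(knots, i, k, x):
    """Value of the i-th B-spline basis function of degree k at the points x."""
    if k == 0:
        return np.where((knots[i] <= x) & (x < knots[i + 1]), 1.0, 0.0)
    out = np.zeros_like(x)
    d1 = knots[i + k] - knots[i]
    if d1 > 0.0:
        out += (x - knots[i]) / d1 * cox_de_boor(knots, i, k - 1, x)
    d2 = knots[i + k + 1] - knots[i + 1]
    if d2 > 0.0:
        out += (knots[i + k + 1] - x) / d2 * cox_de_boor(knots, i + 1, k - 1, x)
    return out

def basis_matrix(t, n_basis, degree=3):
    """Sample n_basis clamped, uniformly spaced B-spline basis functions at times t."""
    interior = np.linspace(0.0, 1.0, n_basis - degree + 1)[1:-1]
    knots = np.concatenate([[0.0] * (degree + 1), interior, [1.0] * (degree + 1)])
    x = np.clip(t, 0.0, 1.0 - 1e-9)   # keep the last sample inside the basis support
    return np.column_stack([cox_de_boor(knots, i, degree, x) for i in range(n_basis)])

# Encoding: fit the weighting coefficient array by least squares.
t = np.linspace(0.0, 1.0, 200)                  # normalized time of one motion clip
joint_angle = np.sin(2.0 * np.pi * t)           # stand-in for one measured joint-angle curve
B = basis_matrix(t, n_basis=12)
coeffs, *_ = np.linalg.lstsq(B, joint_angle, rcond=None)   # the stored "code data"

# Restoring: weighted addition of the same fixed basis functions.
restored = B @ coeffs
print("samples:", joint_angle.size, "-> stored coefficients:", coeffs.size)
print("max restoration error:", np.abs(restored - joint_angle).max())
```

Because the basis shape is fixed and independent of the input data, only the 12 coefficients need to be stored per curve here, instead of the 200 measured samples.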
In a second aspect of the present invention, the data converting device is implemented by using a computer, and comprises a storing unit, an encoding unit, an editing unit, a restoring unit, and an outputting unit. The data converting device performs information processing for outputting images of a virtual character by using time series movement data obtained by measuring a movement of an arbitrary object.
The encoding unit generates code data by encoding the movement data. The storing unit stores the code data. The editing unit extracts the code data from the storing unit, edits the data, and generates new code data. The restoring unit restores movement data corresponding to the new code data. The outputting unit generates images of a character using the restored movement data, and outputs the generated images.
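The sketch below is a hypothetical illustration of the flow just described, not text from the patent: editing acts directly on the compact code data, and restoration to movement data happens only when images are to be generated. The names blend_codes and restore, the 0.4 blend weight, and the random stand-in arrays are all invented for the example.

```python
import numpy as np

def blend_codes(code_a, code_b, w=0.5):
    """Editing step: generate new code data by mixing two stored coefficient arrays."""
    return (1.0 - w) * code_a + w * code_b

def restore(code, basis):
    """Restoring step: weighted addition of the fixed, pre-stored basis functions."""
    return basis @ code

basis = np.random.rand(120, 12)     # stand-in for basis functions sampled at 120 frames
walk_code = np.random.rand(12)      # stand-in for stored "walk" code data
run_code = np.random.rand(12)       # stand-in for stored "run" code data

jog_code = blend_codes(walk_code, run_code, w=0.4)   # new code data from the editing unit
jog_motion = restore(jog_code, basis)                # movement data passed to the outputting unit
print(jog_motion.shape)             # one joint-angle curve, 120 frames
```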
REFERENCES:
patent: 5053760 (1991-10-01), Frasier et al.
patent: 5083201 (1992-01-01), Ohba
patent: 5093907 (1992-03-01), Hwong et al.
patent: 5347306 (1994-09-01), Nitta
Burdet Etienne
Hosogi Shinya
Maeda Yoshiharu
Takayama Kuniharu
Fujitsu Limited
Nguyen Phu K.
Staas & Halsey , LLP