System and method for using context in navigation dialog

Data processing: speech signal processing – linguistics – language – Application

Reexamination Certificate


Details

Type: Reexamination Certificate
Status: active
Patent number: 07831433

ABSTRACT:
Described is a navigation system comprising a route planning module and a route guidance module. The route planning module receives a request from a user for guidance to a particular destination and, given a starting point, determines a route from the starting point to that destination. The route guidance module receives the route and, based on the route and the user's current location, provides location-specific instructions to the user. These instructions refer to specific visible objects in the user's vicinity.
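The two-module architecture described in the abstract can be sketched as follows. This is an illustrative approximation only, not the patented implementation: every class, method, and parameter name here (RoutePlanner, RouteGuide, Landmark, the 50-unit visibility radius) is hypothetical, and a real planner would search a road network rather than return the endpoints directly.

```python
from dataclasses import dataclass

@dataclass
class Landmark:
    """A visible object in the environment, e.g. a church or gas station."""
    name: str
    location: tuple  # (x, y) coordinates

class RoutePlanner:
    """Route planning module: determines a route from start to destination."""
    def plan(self, start, destination):
        # A real system would run a shortest-path search over a road
        # network; here the route is simply its two endpoints.
        return [start, destination]

class RouteGuide:
    """Route guidance module: issues location-specific instructions that
    reference visible landmarks near the user."""
    def __init__(self, landmarks):
        self.landmarks = landmarks

    def nearest_landmark(self, position, radius=50.0):
        # Pick the closest landmark within the user's vicinity, if any.
        def dist(lm):
            dx = lm.location[0] - position[0]
            dy = lm.location[1] - position[1]
            return (dx * dx + dy * dy) ** 0.5
        visible = [lm for lm in self.landmarks if dist(lm) <= radius]
        return min(visible, key=dist) if visible else None

    def instruct(self, position, next_waypoint):
        # Prefer an instruction grounded in a visible object.
        lm = self.nearest_landmark(position)
        if lm is not None:
            return f"Turn toward {next_waypoint} just past the {lm.name}."
        return f"Continue toward {next_waypoint}."
```

With a landmark database containing a "red church" near the user's position, `RouteGuide.instruct` produces an instruction that names the church; with no landmark in range it falls back to a plain directional instruction.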

REFERENCES:
patent: 6078865 (2000-06-01), Koyanagi
patent: 6144318 (2000-11-01), Hayashi et al.
patent: 6317691 (2001-11-01), Narayan et al.
patent: 6480786 (2002-11-01), Watanabe et al.
patent: 6567744 (2003-05-01), Katayama et al.
patent: 6608910 (2003-08-01), Srinivasa et al.
patent: 7260473 (2007-08-01), Abe et al.
patent: 7424363 (2008-09-01), Cheng et al.
patent: 7463975 (2008-12-01), Bruelle-Drews
patent: 7474960 (2009-01-01), Nesbitt
patent: 7502685 (2009-03-01), Nakamura
patent: 7650237 (2010-01-01), Aoto
patent: 2003/0235327 (2003-12-01), Srinivasa
Bugmann, G., "Challenges in Verbal Instruction of Domestic Robots," Proceedings of the 1st International Workshop on Advances in Service Robotics, Bardolino, Italy, Mar. 13-15, 2003, pp. 112-116.
Kyriacou, T., et al., "Vision-Based Urban Navigation Procedures for Verbally Instructed Robots," Proceedings of the 2002 IEEE/RSJ Intl. Conference on Intelligent Robots and Systems (IROS'02), EPFL, Lausanne, Switzerland, Oct. 2002, pp. 1326-1331.
Lauria, S., et al., "Converting Natural Language Route Instructions into Robot-Executable Procedures," Proceedings of the 2002 IEEE Int. Workshop on Robot and Human Interactive Communication (ROMAN'02), Berlin, Germany, pp. 223-228.
Lauria, S., et al., "Training Personal Robots Using Natural Language Instruction," IEEE Intelligent Systems, Sep./Oct. 2001, vol. 16, No. 5.
Maaß, W., et al., "Visual Grounding of Route Descriptions in Dynamic Environments," in: R.K. Srihari (ed.), Proc. of the AAAI Fall Symposium on Computational Models for Integrating Language and Vision, Cambridge, MA, 1995.
Gapp, K.-P., "Object Localization: Selection of Optimal Reference Objects," in: A.U. Frank, W. Kuhn (eds.), Spatial Information Theory: A Theoretical Basis for GIS, Proc. of the Int. Conference COSIT'95, Semmering, Austria, Berlin, Heidelberg: Springer, 1995.
Schirra, J.R.J., et al., “From Image Sequences to Natural Language: A First Step towards Automatic Perception and Description of Motions,” Applied Artificial Intelligence, 1, 287-305, 1987.
Herzog, G., and Wazinski, P., “Visual Translator: Linking Perceptions and Natural Language Descriptions,” Artificial Intelligence Review, 8(2): 175-187, 1994.
Fry, J., et al., “Natural Dialogue with the Jijo-2 Office Robot,” In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems IROS-98, pp. 1278-1283, Victoria, B.C., Canada, Oct. 1998.
Laengle, T., et al., “KANTRA—A Natural Language Interface for Intelligent Robots,” Proceedings of the 4th International Conference on Intelligent Autonomous Systems, Mar. 1995.
Horn, B., et al., "Determining Optical Flow," 1981, Artificial Intelligence, vol. 17, pp. 185-203.
Faugeras, O., “Three-dimensional Computer Vision—A Geometric Viewpoint,” 1993, MIT Press.
Rohr, K., “Towards model-based recognition of human movements in image sequences,” 1994, Computer Vision, Graphics and Image Processing: Image Understanding, vol. 59, pp. 94-115.
Curio, C., et al., “Walking Pedestrian Recognition,” in Proc. IEEE Intl. Conf. On Intelligent Transportation Systems, pp. 292-297, Oct. 1999.
Wohler, C., et al., "A Time Delay Neural Network Algorithm for Real-time Pedestrian Detection," in Proc. of IEEE Intelligent Vehicles Symposium, pp. 247-251, Oct. 1998.
Wachter, S., et al., “Tracking Persons in Monocular Image Sequences,” Computer Vision and Image Understanding, vol. 74, No. 3, pp. 174-192, Jun. 1999.
Pentland, A., et al., “Recovery of non-rigid motion and structure,” 1991, IEEE Trans. On Pattern Analysis and Machine Intelligence, vol. 13, pp. 730-742.
Wren, C., et al., “Pfinder: Real-Time Tracking of the Human Body,” SPIE, vol. 2615, pp. 89-98, 1996.
Xu, L.Q., et al., "Neural networks in human motion tracking—An experimental study," in Proc. of the 7th British Machine Vision Conference, vol. 2, pp. 405-414, 1996.
Masoud, O., et al., "A robust real-time multi-level model-based pedestrian tracking system," in Proc. of the ITS America Seventh Annual Meeting, Washington, DC, Jun. 1997.
Dubuisson, M., et al., "Contour Extraction of Moving Objects in Complex Outdoor Scenes," International Journal of Computer Vision, vol. 14, pp. 83-105.
Papageorgiou, C., et al., "A Trainable Pedestrian Detection System," in Proc. of IEEE Intelligent Vehicles Symposium, pp. 241-246, Oct. 1998.
Zhao, L., et al., "Stereo and Neural Network-based Pedestrian Detection," in Proc. of IEEE Intl. Conf. on Intelligent Transportation Systems, pp. 298-303, Oct. 1999.
Broggi, A., et al., “Shape-based Pedestrian Detection,” Proc. Of IEEE Intelligent Vehicles Symposium, pp. 215-220, Oct. 2000.
Huttenlocher, D.P., et al., "Comparing Images Using the Hausdorff Distance," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, No. 9, pp. 850-863, Sep. 1993.
Srinivasa, N., et al., "Fuzzy Edge Symmetry Features for Enhanced Intruder Detection," IEEE International Conference on Fuzzy Systems, vol. 2, pp. 920-925, St. Louis, MO, May 2003.
Freeman, W.T., et al., "The design and use of steerable filters," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, No. 9, pp. 891-906, Sep. 1991.
Rao, R.N., et al., “An active vision architecture based on Iconic Representations,” Artificial Intelligence, vol. 78, pp. 461-505, 1995.
Schmid, C., and Mohr, R., "Local grayvalue invariants for image retrieval," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, No. 5, pp. 530-535, May 1997.
Healey, G., et al., "Global color constancy: recognition of objects by use of illumination-invariant properties of color distributions," vol. 11, No. 11, pp. 3003-3010, Nov. 1994.
Ohta, Y.I., et al., “Color information for region segmentation,” Computer Graphics and Image Processing, vol. 13, pp. 222-241, 1980.
Kanerva, P., "Sparse Distributed Memory," MIT Press, 1988.
Rao, R.P.N., and Fuentes, O., "Learning navigational behaviors using a predictive sparse distributed memory," Proc. of the 4th International Conference on Simulation of Adaptive Behavior, MIT Press, 1996.
Nowlan, S.J., "Maximum likelihood competitive learning," Advances in Neural Information Processing Systems 2, pp. 574-582, Morgan Kaufmann, 1990.
Bugmann, G., et al., “Instruction-based learning for mobile robots,” Proceedings ACDM 2002.
Notice of Allowance for U.S. Appl. No. 11/051,747.
Torrance, M.C., “Natural Communication with Robots,” Master's thesis, MIT, Department of Electrical Engineering and Computer Science, Cambridge, MA, 1994.

Profile ID: LFUS-PAI-O-4176316