
Ulster Institutional Repository

A Spoken Dialogue System for Navigation in Non-Immersive Virtual Environments


McNeill, MDJ, Sayers, H, Wilson, S and McKevitt, P (2002) A Spoken Dialogue System for Navigation in Non-Immersive Virtual Environments. Computer Graphics Forum, 21 (4). pp. 713-722. [Journal article]

Full text: PDF (Published Version), 169 KB

URL: http://onlinelibrary.wiley.com/doi/10.1111/1467-8659.00629/abstract

DOI: 10.1111/1467-8659.00629

Abstract

Navigation is the process by which people control their movement in virtual environments and is a core functional requirement for all virtual environment (VE) applications. Users require the ability to move, controlling orientation, direction of movement and speed, in order to achieve a particular goal within a VE. Navigation is rarely an end in itself (the end point is typically interaction with the visual representations of data), but applications often place a high demand on navigation skills, which in turn means that a high level of support for navigation is required from the application. On desktop systems, navigation in non-immersive environments is usually supported through the standard hardware devices of mouse and keyboard. Previous work by the authors shows that many users experience frustration when trying to perform even simple navigation tasks: users complain about getting lost, becoming disorientated and finding the interface ‘difficult to use’. In this paper we report on work in progress in exploiting natural language processing (NLP) technology to support navigation in non-immersive virtual environments. A multi-modal system has been developed which supports a range of high-level (spoken) navigation commands, and indications are that spoken dialogue interaction is an effective alternative to mouse and keyboard interaction for many tasks. We conclude that multi-modal interaction, combining technologies such as NLP with mouse and keyboard, may offer the most effective interaction with VEs, and we identify a number of areas where further work is necessary.
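
To make the idea of high-level spoken navigation commands concrete, the sketch below (Python) maps a few recognised utterances to viewpoint updates covering position, heading and speed. This is an illustrative sketch only, assuming a simple keyword-based interpreter; the command vocabulary, the Viewpoint structure and the update rules are assumptions made for illustration and are not taken from the system described in the paper.

    # Illustrative sketch: map recognised spoken commands to viewpoint updates
    # in a non-immersive VE. Vocabulary and update rules are hypothetical.
    import math
    from dataclasses import dataclass

    @dataclass
    class Viewpoint:
        x: float = 0.0
        z: float = 0.0
        heading_deg: float = 0.0   # 0 degrees = facing down the +z axis
        speed: float = 1.0         # distance covered per "move" command

    def apply_command(view: Viewpoint, utterance: str) -> Viewpoint:
        """Update the viewpoint according to a recognised spoken command."""
        words = utterance.lower().split()
        if "faster" in words:
            view.speed *= 2.0
        elif "slower" in words:
            view.speed *= 0.5
        elif "turn" in words and "left" in words:
            view.heading_deg = (view.heading_deg - 90.0) % 360.0
        elif "turn" in words and "right" in words:
            view.heading_deg = (view.heading_deg + 90.0) % 360.0
        elif "forward" in words or "ahead" in words:
            rad = math.radians(view.heading_deg)
            view.x += view.speed * math.sin(rad)
            view.z += view.speed * math.cos(rad)
        elif "back" in words or "backwards" in words:
            rad = math.radians(view.heading_deg)
            view.x -= view.speed * math.sin(rad)
            view.z -= view.speed * math.cos(rad)
        return view

    if __name__ == "__main__":
        v = Viewpoint()
        for cmd in ["move forward", "turn left", "go faster", "move forward"]:
            v = apply_command(v, cmd)
            print(cmd, "->", v)

In a multi-modal setting of the kind the abstract describes, an interpreter like this would sit alongside mouse and keyboard handlers, with both input channels driving the same viewpoint state.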

Item Type: Journal article
Faculties and Schools: Faculty of Arts
    Faculty of Arts > School of Creative Arts and Technologies
Research Institutes and Groups: Computer Science Research Institute
    Computer Science Research Institute > Intelligent Systems Research Centre
ID Code: 27
Deposited By: Professor Paul McKevitt
Deposited On: 06 Mar 2012 09:45
Last Modified: 06 Mar 2012 09:45
