Pages: 7 pages/≈1925 words
Sources: 15
Level: MLA
Subject: IT & Computer Science
Type: Essay
Language: English (U.S.)
Document: MS Word
User Interfaces for Visually Impaired People (Essay Sample)

Name:
Institution:
Date:
User Interfaces for Visually Impaired People
Abstract
This essay discusses aspects of spoken language dialogue systems and user interfaces created for blind people. First, it examines the specific requirements that blind users place on the user interfaces of dialogue systems and applications. It then explores aspects of dialogue system design that can increase the efficiency of communication between dialogue systems and visually impaired users.
Afterwards, the key communication module used within the dialogue systems under development is presented, followed by a brief discussion of two applications: the speech-oriented hypertext system AUDIS and the dialogue programming system. Both systems are designed mainly for visually impaired people, particularly blind students and programmers.
Introduction
Although current software systems can be considered sophisticated and user-friendly, they are often inconvenient for individuals with visual impairment. The reason lies in their graphical interfaces and in the lack of features that address blind people's special needs. Screen readers (screen access software) and speech synthesizers still constitute the basic facilities through which visually impaired individuals obtain information from computers.
Current developments in human-computer interaction and spoken language dialogue systems (particularly multi-modality and the move toward less error-prone speech recognition) present new expectations, hopes, and challenges. The design of suitable dialogue strategies, a critical point in developing any dialogue system, is especially pertinent for individuals with impaired vision, since it supports perspicuity for these users. This is all the more important because computers constitute one of the most crucial information sources for people with impaired vision. This essay explores user interfaces for visually impaired people.
Specific Demands on User Interfaces for the Visually Impaired
In certain applications, there is no difference in the user interface between visually impaired and sighted users; dialogue systems accessed through the telephone are a case in point. However, many systems use graphics as critical output data, and in most instances their design does not assume that users may be visually impaired, thus overlooking these users' specific demands. The specific demands of visually impaired users can be summarized as follows:
* The system should allow comfortable control through the integration of keyboard (hot-key) commands and speech commands. Graphics may be used only as supplementary output for partially sighted users. Other input/output devices may be used for specific applications, provided they are effective.
* Speech commands ought to be complemented by a (system-oriented) speech command dictionary that allows users to express commands in a variety of ways, creating more intuitive system control.
* Easy configuration and customization is critical, particularly for visually impaired users who use the system regularly. This concerns the structure of the information data, the form and mode of synthesized speech output, and the control commands.
* It is imperative to let users acquire information promptly and gain an informational overview. This is supported by environmental sounds, earcons, audio glances, speech summaries, and configurable speech output rates and modes.
* User orientation ought to be supported by information about the current position, accessible as speech as well as through environmental sounds, earcons, and audio glances.
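The command-dictionary demand above can be sketched in a few lines. This is a minimal illustration, not part of the systems described in the essay; the command names and phrasings are invented for the example.

```python
# Sketch of a speech-command dictionary that maps several spoken
# phrasings to one canonical command (all names are hypothetical).
COMMAND_SYNONYMS = {
    "read": ["read", "read it", "read this aloud", "speak"],
    "stop": ["stop", "be quiet", "silence"],
    "where_am_i": ["where am i", "position", "orientation"],
}

# Invert the table once so recognition needs a single dictionary lookup.
_PHRASE_TO_COMMAND = {
    phrase: command
    for command, phrases in COMMAND_SYNONYMS.items()
    for phrase in phrases
}

def canonical_command(utterance):
    """Return the canonical command for an utterance, or None if unknown."""
    return _PHRASE_TO_COMMAND.get(utterance.strip().lower())
```

Because the table is inverted once at start-up, adding new phrasings for a command does not slow down recognition at run time.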
Specific Features of Dialogue Design for Blind-User-Oriented Systems
As noted earlier, one major issue in developing spoken language dialogue systems for visually impaired users is how to supply enough information to guarantee the user full orientation. The ideal means of managing this is sound, which may be realized through:
* Synthesized voice produced by a syllable-oriented speech synthesizer. This kind of sound output may be used for reading textual data and producing output messages. The speech synthesizer ought to apply the main prosodic characteristics to increase speech output quality and to distinguish different forms of speech. Additionally, it should be capable of using different, user-configurable voices; different voices may be used to distinguish different genres of information.
* Sampled voice, which may be used for feedback messages to users. Different sampled voices may help users distinguish different messages.
* Sound generated from special samples, wave tables, MIDI, and a sound synthesizer. This kind of non-speech sound is used first for environmental sounds that provide feedback on user actions (Darvishi, 1996; Kahlish, 1996); second, it may be used for earcons, i.e. non-speech glances that give the visually impaired user a summary through listening (Stevens et al., 1996).
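The idea of distinguishing information genres by voice can be sketched as a small configuration table. The profile fields and values below are illustrative assumptions, not the parameters of any real synthesizer.

```python
# Sketch: choose a synthesizer voice profile per information genre, so
# that document text, system messages, and headings sound different.
# Voice names and parameter values are purely illustrative.
VOICE_PROFILES = {
    "document": {"voice": "male_1", "rate": 1.0, "pitch": 1.0},
    "system":   {"voice": "female_1", "rate": 1.2, "pitch": 1.1},
    "heading":  {"voice": "male_1", "rate": 0.9, "pitch": 1.3},
}

def voice_for(genre):
    """Return the configured voice profile, falling back to document text."""
    return VOICE_PROFILES.get(genre, VOICE_PROFILES["document"])
```

Keeping the mapping in user-editable data rather than code is what makes the "easy configuration" demand above cheap to satisfy.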
Non-speech sound, coupled with flexible use of different forms of speech, can substantially help speed up communication. On the other hand, it may confuse the user if he or she is not acquainted with the corresponding meanings. In view of this, a mechanism premised on the following points is suggested:
* The system should find out whether it is interacting with an experienced or a novice user, either by inferring this from the user's reactions or through an explicit user declaration;
* Depending on the identified measure of user experience, the system selects a corresponding strategy, that is, a communication level that mixes implicit and explicit information;
* At any moment the user may issue the command EXPLAIN, which explains the meaning of implicit information, and thus learn it through system use;
* The system monitors the communication; if it discovers that the user tends to use EXPLAIN frequently (or, conversely, that the user never needs EXPLAIN), it shifts the communication level accordingly;
* The user may turn off this automatic regulation to enable a learning mode, or set the communication level manually.
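The adaptation policy above can be sketched as a small controller. The window size, threshold, and level names are illustrative assumptions; the essay does not specify them.

```python
# Sketch of the adaptation policy described above: track how often the
# user asks EXPLAIN over a sliding window and shift between a "novice"
# (explicit) and an "expert" (implicit) communication level.
# Window size and threshold are assumed values, not from the source.

class CommunicationLevel:
    def __init__(self, window=10, novice_threshold=0.3):
        self.window = window
        self.novice_threshold = novice_threshold
        self.auto = True            # the user may turn regulation off
        self.level = "novice"
        self.history = []           # True = user asked EXPLAIN this turn

    def record_turn(self, asked_explain):
        """Log one dialogue turn and, if regulation is on, adapt the level."""
        self.history.append(asked_explain)
        self.history = self.history[-self.window:]
        if self.auto and len(self.history) == self.window:
            rate = sum(self.history) / self.window
            self.level = "novice" if rate >= self.novice_threshold else "expert"

    def set_level(self, level):
        """Manual override; disables the automatic regulation."""
        self.auto = False
        self.level = level
```

A user who never needs EXPLAIN drifts to the terse expert level; frequent EXPLAIN requests pull the system back to explicit novice output, and `set_level` implements the manual override from the last bullet.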
The Structure of a Speech Interface for Visually Impaired Users
The communication method described above should be supported by a module that forms the interface between the speech synthesizer and recognizer tools, the hardware devices, and the dialogue-oriented application. The structure of such a module is illustrated in Figure 1. The interface has a modular structure and uses some previously designed tools, including the prosody detection module, the command recognition module RCG (Kopecek, 1999), and DEMOSTHENES (Kopecek, 1997). The interface is designed to be reusable in different types of applications, particularly within the DIALOG systems and other applications for blind people.
An application communicates with the main module by sending it simple requests such as Play 'asound.wav' or Say 'Hello, World'. The interface informs the application about the execution of these requests. The application may pause or resume speech at any time. Additionally, the interface notifies the application whenever it recognizes a voice command.
The main interface module works with the sound device and ensures that it is opened for input/output whenever necessary. An application requests a task by calling a simple function. The main module only pre-processes the request and delivers it to the relevant sub-module; control is then returned to the application. The sub-modules, which run in separate threads, execute the requests and inform the main module about the current state of execution. Depending on its configuration, the main module may notify the application through a message, which the application can process with message handlers.
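This request-queue-and-worker-thread pattern can be sketched as follows. The class and method names are invented for the example; a real sub-module would drive a TTS engine or sound card where the comment indicates.

```python
import queue
import threading

# Sketch of the main-module pattern: the application hands simple
# requests (Say/Play) to the main module, which only enqueues them for
# a worker sub-module running in its own thread and returns control
# immediately; the worker reports completion back via a message
# callback. All names here are illustrative, not from the source.

class SpeechInterface:
    def __init__(self, on_message):
        self.on_message = on_message        # application's message handler
        self.requests = queue.Queue()
        self.worker = threading.Thread(target=self._run, daemon=True)
        self.worker.start()

    def say(self, text):
        self.requests.put(("SAY", text))    # returns to the caller at once

    def play(self, filename):
        self.requests.put(("PLAY", filename))

    def _run(self):
        # Worker sub-module: executes requests and reports back.
        while True:
            kind, payload = self.requests.get()
            # A real sub-module would drive a TTS engine or sound card here.
            self.on_message(("DONE", kind, payload))
            self.requests.task_done()
```

The queue decouples the application from speech output, which is what lets the application pause, resume, or keep working while speech is in progress.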
The command recognition module continuously listens to the input device (usually a microphone) and tries to identify a command. When it identifies a command stored in the database, it informs the main module, which produces an appropriate message and dispatches it to the application.
Moreover, the main module cooperates with the prosody detection module, which distinguishes three forms of intonation: falling, level, and rising. A command uttered with rising intonation is treated as a request for assistance. An utterance with level intonation is treated as a command that will be followed by parameters (or as one of those parameters, if the command has already been given). An utterance spoken with falling intonation forms a non-parametric command or the final command parameter.
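The intonation rules above amount to a small state machine, sketched below. The string labels and return shape are assumptions made for the example.

```python
# Sketch of the intonation rules above: rising intonation asks for
# help, level intonation opens (or extends) a command that still
# expects parameters, and falling intonation closes the command.
# Labels and the return tuple are illustrative assumptions.

def interpret(utterance, intonation, pending_command=None):
    """Return (action, command, complete) for one recognized utterance."""
    if intonation == "rising":
        return ("help", utterance, True)
    if pending_command is None:
        # The first utterance names the command itself.
        command = [utterance]
    else:
        # Later utterances add parameters to the still-open command.
        command = pending_command + [utterance]
    complete = (intonation == "falling")
    return ("command", command, complete)
```

Prosody thus replaces the explicit terminators a sighted user would get from a visual dialog box: the falling contour itself signals "command finished".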
The sound generation module processes texts from a sub-module queue fed by the main module. These tagged texts comprise requests for textual synthesis by the TTS system, synthesizer configuration (changes of the current volume, rate, and voice), and requests for playing or mixing audio files and speech. The sound manager module may load either arbitrary audio files or predefined sounds (speech messages) stored in the internal module database; additionally, it can synthesize sound through the sound card's capabilities.
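The dispatch over tagged requests can be sketched like this. The tag names and the logging of actions are illustrative assumptions standing in for real synthesis and playback.

```python
# Sketch of the sound-generation queue described above: each queued
# item is a tagged request dispatched to the matching handler (TTS
# text, synthesizer re-configuration, or audio playback). Tag names
# and the returned action log are illustrative assumptions.

def process_queue(items, synth_state):
    log = []
    for tag, payload in items:
        if tag == "TTS":
            # A real handler would hand the text to the TTS system.
            log.append("synthesize: " + payload)
        elif tag == "CONFIG":
            # Update current volume/rate/voice settings in place.
            synth_state.update(payload)
            log.append("reconfigure: " + ",".join(sorted(payload)))
        elif tag == "AUDIO":
            # A real handler would load and play the audio file.
            log.append("play: " + payload)
    return log
```

Because configuration changes travel through the same queue as text, a rate or voice change takes effect exactly at its position in the output stream, not at some arbitrary moment.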
Figure 1: Structure of the Dialogue Speech Interface System
Applications
Numerous applications designed for visually impaired users exist; they include the IBM Home Page Reader (http://at.rsb.org.au/products/ibmhp...