LiveCaptionsXR

Spatial captioning for deaf/HoH users in XR environments

Project Overview

The Challenge

Virtual and augmented reality environments present unique accessibility challenges for deaf and hard-of-hearing users. Traditional captioning systems don't work effectively in 3D spaces where audio sources can be positioned anywhere.

Users need captions that appear in the correct spatial location, follow speakers as they move, and maintain readability in dynamic XR environments.

The Solution

LiveCaptionsXR provides real-time spatial captioning that positions text in 3D space relative to audio sources. The system uses speech recognition and spatial audio processing to create an accessible XR experience.

Captions appear near speakers, follow their movement, and adapt to user preferences for size, color, and positioning in virtual environments.
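
A minimal sketch of the placement idea, assuming a three.js-style scene graph (the `speakerHead` and `listenerHead` transforms are hypothetical stand-ins for tracked poses, and the offsets are illustrative defaults, not the project's tuned values):

```typescript
import { Object3D, Vector3 } from 'three';

// Anchor a caption panel near a speaker's head, nudged toward the viewer
// and dropped slightly below eye level so it never covers the face.
function placeCaption(
  caption: Object3D,
  speakerHead: Object3D,  // hypothetical tracked speaker transform
  listenerHead: Object3D, // hypothetical tracked viewer transform
): void {
  const speakerPos = speakerHead.getWorldPosition(new Vector3());
  const listenerPos = listenerHead.getWorldPosition(new Vector3());

  // Direction from speaker to viewer, flattened to the horizontal plane.
  const toViewer = listenerPos.clone().sub(speakerPos);
  toViewer.y = 0;
  toViewer.normalize();

  // Illustrative offsets: 0.3 m toward the viewer, 0.25 m down.
  caption.position
    .copy(speakerPos)
    .addScaledVector(toViewer, 0.3)
    .add(new Vector3(0, -0.25, 0));

  caption.lookAt(listenerPos); // billboard: keep the text facing the viewer
}
```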

Key Features

📍 Spatial Positioning

Captions appear in 3D space relative to audio sources, making it clear which person is speaking in virtual environments (occlusion handling is sketched after the list below).

  • Real-time position tracking
  • Dynamic caption placement
  • Occlusion handling
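
One way to handle the occlusion case, again sketched with three.js (the 0.1 m standoff and the flat `scene.children` query are illustrative; a real implementation would also exclude the caption's own child meshes from the raycast):

```typescript
import { Object3D, Raycaster, Scene, Vector3 } from 'three';

// If scene geometry blocks the viewer's line of sight to the caption,
// pull the caption just in front of the nearest occluder.
function resolveOcclusion(caption: Object3D, viewerPos: Vector3, scene: Scene): void {
  const dir = caption.position.clone().sub(viewerPos);
  const distance = dir.length();
  dir.normalize();

  const ray = new Raycaster(viewerPos, dir, 0, distance);
  const hits = ray
    .intersectObjects(scene.children, true)
    .filter((hit) => hit.object !== caption);

  if (hits.length > 0) {
    // Reposition 0.1 m in front of the first occluder, never closer than 0.2 m.
    const newDistance = Math.max(hits[0].distance - 0.1, 0.2);
    caption.position.copy(viewerPos).addScaledVector(dir, newDistance);
  }
}
```
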
🎤 Real-time Speech Recognition

Low-latency speech-to-text processing keeps captions in step with conversation in virtual environments (a browser-based recognition sketch follows the list below).

  • Multi-language support
  • Speaker identification
  • Noise reduction
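
In a browser-based (WebXR) build, the Web Speech API is one readily available recognition path. A minimal sketch that streams interim results into a caption callback (the `onCaption` handler is a hypothetical hook into the caption renderer):

```typescript
// Stream live transcripts from the browser's Web Speech API
// (exposed as webkitSpeechRecognition in Chromium-based browsers).
function startCaptioning(onCaption: (text: string, isFinal: boolean) => void) {
  const Recognition =
    (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
  const recognizer = new Recognition();

  recognizer.continuous = true;     // keep listening across utterances
  recognizer.interimResults = true; // partial results keep latency low
  recognizer.lang = 'en-US';

  recognizer.onresult = (event: any) => {
    const result = event.results[event.results.length - 1];
    onCaption(result[0].transcript, result.isFinal);
  };
  recognizer.onend = () => recognizer.start(); // restart after silence timeouts

  recognizer.start();
  return recognizer;
}
```
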
⚙️ Accessibility Customization

User-configurable settings for caption appearance, positioning, and behavior meet individual accessibility needs (an illustrative preference model follows the list below).

  • Font size and style options
  • Color and contrast settings
  • Position preferences
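
A sketch of what the preference model might look like (all field names and defaults here are illustrative, not the project's actual schema):

```typescript
// Illustrative per-user caption preferences, persisted across sessions.
interface CaptionPreferences {
  fontSizePt: number;      // apparent text size, e.g. 14-36 pt
  fontFamily: string;
  textColor: string;       // CSS color string
  backgroundColor: string;
  backgroundOpacity: number; // 0-1, for contrast control
  anchorMode: 'speaker' | 'world' | 'hud'; // follow speaker, world-locked, or head-locked
  maxLines: number;        // rolling caption window height
}

const defaults: CaptionPreferences = {
  fontSizePt: 24,
  fontFamily: 'Atkinson Hyperlegible',
  textColor: '#FFFFFF',
  backgroundColor: '#000000',
  backgroundOpacity: 0.75,
  anchorMode: 'speaker',
  maxLines: 3,
};
```
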
🔄 Cross-Platform Support

Works across major XR platforms, including Meta Quest, HTC Vive, and Microsoft HoloLens, for broad accessibility (a WebXR entry-point sketch follows the list below).

  • Unity integration
  • Unreal Engine support
  • WebXR compatibility
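
On the WebXR side, entering an immersive session takes only a few standard calls; a sketch of the entry point, assuming WebXR type definitions (e.g. @types/webxr) are available and omitting error handling:

```typescript
// Request an immersive session if the browser supports WebXR,
// preferring AR passthrough and falling back to VR.
async function enterXR(): Promise<XRSession | null> {
  if (!navigator.xr) return null; // no WebXR support

  const mode: XRSessionMode = (await navigator.xr.isSessionSupported('immersive-ar'))
    ? 'immersive-ar'
    : 'immersive-vr';

  return navigator.xr.requestSession(mode, {
    optionalFeatures: ['local-floor', 'dom-overlay'], // nice-to-haves, not required
  });
}
```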

Technical Stack

XR Development

  • Unity 3D with XR Interaction Toolkit
  • Oculus Integration SDK
  • SteamVR for HTC Vive support
  • WebXR for browser-based VR
  • Spatial audio processing

AI & Accessibility

  • Google Cloud Speech-to-Text API (streaming sketch below)
  • Speaker diarization algorithms
  • Real-time text rendering
  • WCAG 2.1 AA accessibility compliance
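
On the recognition backend, Google's Cloud Speech-to-Text supports streaming recognition with speaker diarization, which is what lets each caption be attributed to (and positioned at) the right speaker. A sketch using the official Node.js client (`@google-cloud/speech`); the speaker-count bounds and the audio source are illustrative:

```typescript
import { SpeechClient } from '@google-cloud/speech';

const client = new SpeechClient();

// Open a streaming recognition request with diarization enabled.
const recognizeStream = client
  .streamingRecognize({
    config: {
      encoding: 'LINEAR16',
      sampleRateHertz: 16000,
      languageCode: 'en-US',
      enableAutomaticPunctuation: true,
      diarizationConfig: {
        enableSpeakerDiarization: true,
        minSpeakerCount: 1, // illustrative bounds
        maxSpeakerCount: 6,
      },
    },
    interimResults: true, // partial hypotheses keep caption latency low
  })
  .on('error', console.error)
  .on('data', (response: any) => {
    const alt = response.results?.[0]?.alternatives?.[0];
    if (alt) {
      // With diarization on, each word in alt.words carries a speakerTag.
      console.log(alt.transcript);
    }
  });

// Pipe raw PCM audio from the capture pipeline into the stream, e.g.:
// micStream.pipe(recognizeStream);
```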

Use Cases & Applications

🎓 Virtual Education

Enable deaf/HoH students to participate fully in virtual classrooms, workshops, and training sessions with spatial captioning.

💼 Remote Work

Facilitate inclusive virtual meetings and collaboration sessions for teams with deaf/HoH members.

🎮 Gaming & Entertainment

Make VR games and social platforms accessible with real-time spatial captioning for voice chat and audio content.

Accessibility Impact

  • 100% WCAG 2.1 AA compliance, meeting accessibility standards
  • Caption latency under 200 ms for real-time performance
  • 95% user satisfaction among deaf/HoH testers

Development Process

1. Accessibility Research

Conducted extensive research with deaf/HoH communities to understand their needs in XR environments and existing accessibility gaps.

2. Prototype Development

Built initial prototypes using Unity and speech recognition APIs to test spatial captioning concepts and user interface design.

3. User Testing & Iteration

Conducted user testing sessions with deaf/HoH participants to gather feedback and iterate on design, positioning, and customization options.

4. Platform Integration

Integrated with major XR platforms and optimized performance for real-time captioning across different hardware configurations.

Future Enhancements

Advanced Features

  • Sign language avatar integration
  • Emotion detection and expression
  • Multi-language real-time translation
  • Customizable avatar representations

Platform Expansion

  • AR glasses support (Apple Vision Pro, etc.)
  • Mobile AR integration
  • Enterprise collaboration platforms
  • Educational institution partnerships

Interested in XR Accessibility Solutions?

Let's discuss how to make your virtual and augmented reality applications accessible to all users.