This workshop focuses on approaches and system architectures for content creation and management. Unlike the digital content perceived by the user within MAR display devices during experience presentation, the content of concern to the participants of this workshop is linear, time-stamped content that can be captured and then played back, reviewed, and otherwise treated as “normal” digital video content at any time in the future. This includes the possibility of multiple points of view (multiple cameras) and multiple microphones. It can also include other metadata about the user’s geospatial position and orientation, hand positions, gaze direction, audio provided by the user, and photos taken or video viewed and captured. It encompasses the media files, in any and all formats, together with their associated metadata.
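To make this scope concrete, the following is a minimal, illustrative sketch of how one captured MAR experience session and its associated metadata might be represented. All class and field names here (CaptureSession, MediaTrack, Pose, and so on) are hypothetical assumptions for illustration only, not a proposed standard or an existing API.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Pose:
    """One geospatial position/orientation sample, timed from session start."""
    timestamp_s: float
    latitude: float
    longitude: float
    altitude_m: float
    orientation_quat: Tuple[float, float, float, float]  # (w, x, y, z)

@dataclass
class MediaTrack:
    """One linear, time-stamped media stream (one camera or one microphone)."""
    track_id: str
    kind: str    # e.g. "video" or "audio"
    codec: str   # codec/container of the captured file
    uri: str     # location of the stored media file

@dataclass
class CaptureSession:
    """Media files plus associated metadata for one captured MAR experience."""
    session_id: str
    start_time: str        # ISO 8601 session date and time
    duration_s: float
    tracks: List[MediaTrack] = field(default_factory=list)    # multiple cameras/microphones
    poses: List[Pose] = field(default_factory=list)           # position and orientation over time
    gaze_samples: List[Tuple[float, float, float, float]] = field(default_factory=list)  # (t, x, y, z)
    hand_positions: List[Tuple[float, float, float, float]] = field(default_factory=list)
    user_audio_uris: List[str] = field(default_factory=list)      # audio provided by the user
    captured_photo_uris: List[str] = field(default_factory=list)  # photos taken or video viewed and captured
```

A session record along these lines could be serialized alongside the raw media files and consumed later for playback, review, or archive indexing.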
The topics and questions on which this workshop will focus include:
- Components, systems and architectures for MAR experience streaming and capture
- Design, selection and integration of sensors for MAR experience capture
- Local power and processor management during MAR experience capture
- Compression during or following MAR experience capture
- MAR experience capture metadata (e.g., session dates, times, duration, geospatial position and orientation, hand positions, gaze direction, audio provided by the user, photos taken or video viewed and captured)
- Novel visual interactions with archives of captured MAR experiences
- Network architectures for MAR experience capture and transport
- Components and/or systems for MAR experience archive storage, replication, management and access
- Benefits and drawbacks of distributed architectures for MAR experience capture and management
- Policies and guidelines for MAR experience capture and management