AR Discovery Vision Paper

A. Executive Summary
In this paper we portray a possible future: once systems are widely built into our tools and in use that make it easy to reference digital assets from one or more real-world objects (people, places or things), the ordinary person will also be empowered with a suite of systems and services that filter digital information (together we call these "discovery systems/services") so that only the information most desired at the time, and in the circumstances, is presented.

This paper proposes the paths by which systems to both produce and discover AR-enabled assets will emerge (in a future we refer to symbolically as "2020") and the benefits these systems will bestow on those using high-performance, network-connected mobile platforms designed for intuitive use of contextual information. We also explore the obstacles to the development of such systems and suggest approaches to overcoming them.

The paper's target audiences:

(a) Those developing end-user technologies and content (see the scenarios below), and tools that create or format content for AR experiences, who are not currently providing discovery services.

(b) Those for whom a tangible interface (AR) between the digital and physical worlds is compelling.

(c) Those who would prefer to use, or be best served by, the services offered by connecting the digital world with the physical through sensors (sensors in the physical world, the sensors of others, and the user's own senses).

These users (a better term may be needed) belong to growing segments between those who are always offline and those who are always immersed in a digital experience: people interested in better understanding and accessing information about the physical world in their surroundings without putting their personal information at risk or sharing their interests.

The AR scenarios that can benefit from discovery include solitary as well as collaborative (multi-source, multi-participant) experiences. [Discovery should help people sense shared experiences, discover experiences with those nearby, and also support non-destructive graffiti experiences.]

B. Origins (what came before this paper)

 * 1) This work is the result of the co-authors' mutual interest in discovery and their conviction that open systems based, where possible, on standards can provide highly valuable solutions for people (...and their machines?). The authors' diverse backgrounds provide the opportunity to gain from experience with learning management systems, large-scale information repositories, geospatial data management systems and augmented reality technologies.
 * 2) Prior to writing this paper, the co-authors met for approximately one year, exploring possible architectures for AR Discovery.
 * 3) The co-authors speak with others about AR Discovery, but find that the future we foresee is difficult for others to imagine in the context of their present-day circumstances and tools. Most focus on the 3-6 month windows of opportunity for their innovations.

C. Goals (motivations)

 * 1) Describe the future we envisage in a manner that helps others suspend their disbelief (and share our concerns)
 * 2) Offer frameworks for thinking about and planning for a future in which there are (a) open systems on which many businesses contribute to successful discovery services (in contrast with a single-vendor solution, e.g., Google Search) and (b) a broad social contract under which information will be shared (e.g., societal values reflected in the IT architectures: I expose information about myself in return for valuable experiences)
 * 3) Propose/define the requirements of specific scenarios and use cases
 * 4) Suggest routes and formulate possible steps toward a (future) world flooded with digital information we can consume, and address the problems we envisage (an abundance of AR-ready content from diverse sources, too plentiful to be requested manually, in which context is the primary basis for discovery)

D. Augmented Reality in 2020

 * 1) Publishers of data are able to supply their unique value to all (potentially more) customers (possibly at lower cost)
 * 2) Users in all phases of daily and professional life are able to tap the full (potential) value of the data available about their surroundings, the people, places and objects with which they may wish or need to interact
 * 3) Scenarios: in daily personal life (e.g., learning, performing new tasks, shopping, cultural heritage), in professional life (e.g., emergency responders, performing repairs or service)
 * 4) See item C.2(b) above about the social framework for using AR. Keywords: participatory services, a new social norm for participating in AR ecosystems (online information sharing and behaviors in the physical world)
 * 5) Benefits from high-performance anticipatory services (the system anticipates what the user will want or need) based on the user's current context, preferences and likely future state(s), all without the user losing control of the experience content.

E. What's missing?

 * 1) Information associated with the physical-world anchors to which it can bring value is either too rare (and/or too expensive, because AR experiences must be handcrafted) or too plentiful. What's missing is an abundance of AR-ready content from diverse sources; most AR experiences today are handcrafted, so there is no sense of urgency.
 * 2) Semantics to describe AR experiences in directories
 * 3) Semantics to describe user, user context and intent
 * 4) Safe and secure means by which personal data (identity, settings) is managed by users and between components
 * 5) Business models for those creating AR content and those addressing AR discovery
 * 6) Users [may be another name for these!]
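Items 2-4 above become more concrete with a small sketch of what directory and user-context records might look like. All record shapes and field names below are illustrative assumptions (none come from this paper or from any standard); real semantics would have to be defined and agreed upon.

```python
from dataclasses import dataclass, field

# Hypothetical directory entry describing an AR experience (item E.2).
# Field names are illustrative assumptions, not a proposed standard.
@dataclass
class ARExperienceEntry:
    title: str
    anchor_type: str              # "person", "place" or "thing"
    lat: float                    # geospatial anchor of the experience
    lon: float
    topics: list = field(default_factory=list)  # subject keywords

# Hypothetical description of a user and their context (item E.3).
@dataclass
class UserContext:
    lat: float                    # user's current position
    lon: float
    interests: list = field(default_factory=list)
    share_identity: bool = False  # item E.4: the user controls personal data

entry = ARExperienceEntry("Old Town walking tour", "place", 46.2, 6.15,
                          topics=["cultural heritage"])
ctx = UserContext(46.2, 6.14, interests=["cultural heritage"])
```

A discovery service would match entries like `entry` against contexts like `ctx`; the point of the sketch is only that both sides need shared, machine-readable semantics before such matching is possible.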

F. Possible routes to 2020 vision

 * 1) Turn on the "spigot" of AR-ready data (assets that have a target and interaction encoded)
 * 2) Make "AR view" a component of all client (user agent) software
 * 3) Define and agree upon context definitions (temporal, geospatial, interaction, history)
 * 4) Involve vendors that control production of data (Autodesk, Dassault Systèmes, Intergraph, Esri, etc.)
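Route 3 above (agreed context definitions) can be sketched as a minimal discovery filter. Under the assumed definitions below, an asset is discoverable when it lies within a geospatial radius of the user and shares a topic with the user's declared interests; a real service would also weigh temporal context, interaction history and the user's likely future states. All names, record shapes and thresholds here are hypothetical.

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres (haversine formula)."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def discover(assets, user, radius_m=500):
    """Return titles of assets near the user whose topics overlap the
    user's interests -- a stand-in for a real discovery service."""
    hits = []
    for asset in assets:
        near = distance_m(asset["lat"], asset["lon"],
                          user["lat"], user["lon"]) <= radius_m
        relevant = set(asset["topics"]) & set(user["interests"])
        if near and relevant:
            hits.append(asset["title"])
    return hits

assets = [
    {"title": "Cathedral AR tour", "lat": 46.2010, "lon": 6.1480,
     "topics": ["cultural heritage"]},
    {"title": "Store promotion", "lat": 46.2015, "lon": 6.1490,
     "topics": ["shopping"]},
]
user = {"lat": 46.2012, "lon": 6.1485, "interests": ["cultural heritage"]}
```

Here `discover(assets, user)` returns only the cultural-heritage asset: both assets are within the radius, but the store promotion fails the interest test, illustrating filtering on context rather than on explicit queries.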

G. Addressing the challenges

 * 1) Communicate possible futures through documents (this text, figures), public speaking and running code
 * 2) Collaborate with groups that share the vision to prepare and develop missing components and services
 * 3) Define architecture "path" (sequence of architectures between here and the future with AR Discovery)

H. Next Steps
Develop a working prototype.

I. Vision paper references cited
This is where we will put the references

J. Vision paper editors/co-authors/contributors
Christine Perey

Josh Lieberman

Steve Browdy

Steve Hansen

Christian Glahn

Mikel Salazar

K. Amendments/edits
This is where we put in version numbers to keep track of significant version changes.

L. Annex
Links to resources on the Web where the reader can find more in-depth information about specific use cases and applications in various industries.