Geospatial AR Discovery Experiment

This page documents the development of an initial AR discovery experiment involving CARIF documents and a core set of user-context query attributes, as discussed in the 11/03/2014 telecon.

Two phases:

- direct correspondence between attributes and metadata

- the recommendation level of discovery (instead of saying "this is what I want" the user is saying "this is who I am and this is my state")

Outline
The concept is to develop a discovery methodology and catalog service to demonstrate discovery of experiences (served in CARIF) in the City of Cambridge according to a core set of search and user context attributes.

The Junaio plugin for Excel can ingest spreadsheets of geospatial experiences (such as POI information) into channels within the Junaio system. It creates AREL experiences that are compatible with Junaio's CARIF export.

Tools
We could use the Excel plugin for Metaio Creator to generate experiences in a Metaio channel, which can then be accessed with any CARIF-compatible AR browser.

Metadata
Metadata is what makes the query parameters work: each search or user-context attribute must correspond to a metadata field on the experience.

Required / optional attributes (specific to experiences):
 * User's geospatial position (WGS 84 compliant)
 * User focus (is this the same as, or a subset of, user orientation?)
 * Keyword
 * Category
 * Indication of scale
 * Indication of time
   * date/time of query
   * date/time the asset (experience POI) was created
   * date/time last used
   * time frame the user is interested in receiving
 * Media type
   * text
   * image
   * video
   * 3D model
   * other?
 * Visual relationship to the real world (how is the digital asset intended to be associated with the physical world?)
   * floating
   * attached (to what?)
   * pertains to the feature as a whole, or to a subset of the feature?
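
As a sketch of the phase-one "direct correspondence" idea, the attributes above could be matched one-for-one against experience metadata. This is a minimal illustration only: the class, field names, and filter function are invented for this sketch and are not part of CARIF or any catalog API.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Experience:
    """Catalog metadata for one AR experience (field names are illustrative)."""
    title: str
    lat: float          # WGS 84
    lon: float
    keywords: set
    category: str
    media_type: str     # e.g. "text", "image", "video", "3d-model"

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS 84 points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def matches(exp, lat, lon, radius_km, keyword=None, category=None, media_type=None):
    """Phase one: each query attribute corresponds directly to one metadata field."""
    if haversine_km(lat, lon, exp.lat, exp.lon) > radius_km:
        return False
    if keyword is not None and keyword not in exp.keywords:
        return False
    if category is not None and exp.category != category:
        return False
    if media_type is not None and exp.media_type != media_type:
        return False
    return True
```

Temporal attributes and the visual-relationship attributes would be further fields filtered in the same way.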

Data Sources
The City of Cambridge has an extensive database of 3D buildings and other city features with various attributes, such as ownership. There is also POI data that was developed in the course of the W3C POI standard development effort: OpenPOIs.net.

- http://openpois.net/

The database is somewhat stale at this point, but could be refreshed (in the course of moving to Cloudant hosting) in the next couple of months.

Conflating these data and linking them to experiences would remain to be done.

Workflow scheme
The basic pattern is to use a model of typical user/location profile information to guide conflation of raw content (POIs, building outlines, etc.) into both experiences and metadata records for an AR catalog.

Another model?
This is simpler, and may be missing some things, but if so they can be added back.

First, we have sources of information. Here there is a list of libraries, a list of taxi stands, and a list of coffee shops.

All data sources can represent their resources in ARML (and perhaps other formats). This permits any AR client that supports ARML to access and use the data.

All of the relevant metadata about those data sources is indexed by and stored in one or more search engines: provider, type of content, bounding box, license, etc.
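
A minimal sketch of what such a source-level index might look like; the field names, bounding boxes, and licenses below are made up for illustration:

```python
# Each record describes one data source, not individual POIs (assumed schema).
SOURCES = [
    {"provider": "City of Cambridge", "content": "libraries",
     "bbox": (-71.16, 42.35, -71.06, 42.40),  # (min_lon, min_lat, max_lon, max_lat)
     "license": "CC-BY", "format": "ARML"},
    {"provider": "OpenPOIs", "content": "coffee-shops",
     "bbox": (-71.20, 42.30, -71.00, 42.45),
     "license": "CC0", "format": "ARML"},
]

def sources_for(lon, lat, content=None):
    """Return the data sources whose bounding box contains the user's position,
    optionally filtered by content type."""
    hits = []
    for s in SOURCES:
        min_lon, min_lat, max_lon, max_lat = s["bbox"]
        if min_lon <= lon <= max_lon and min_lat <= lat <= max_lat:
            if content is None or s["content"] == content:
                hits.append(s)
    return hits
```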

Now, the user. The user's context broadly represents the user's interests and location. The drawing doesn't show it, but this would include rules such as "I always want to see libraries," "I want to see taxi stands when I am in Cambridge," and "in the mornings I want to know where coffee shops are."

The AR client knows about the context and asks the search engine which sources it should query to get POIs. (There is a line missing connecting the browser and the search engine---I will add it.) The browser then talks to the data sources to get relevant POIs. Every now and then, as something changes, the client can query the search engine again to get a fresh set of sources.

(There could be multiple search engines, of course, and the browser can store favourite POI providers.)
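
The two-step flow above (ask the search engine for matching sources, then fetch POIs from each source) could be sketched like this; `SearchEngine` and `Source` are stand-ins for the real services, not actual APIs:

```python
class Source:
    """Stand-in for a data source that serves POIs (e.g., as ARML documents)."""
    def __init__(self, name, pois):
        self.name, self._pois = name, pois
    def fetch_pois(self, context):
        # A real source would answer an ARML request scoped by the context.
        return list(self._pois)

class SearchEngine:
    """Stand-in for the metadata index over data sources."""
    def __init__(self, sources):
        self._sources = sources
    def find_sources(self, context):
        # A real engine would match provider, content type, bbox, license, etc.
        return [s for s in self._sources if s.name in context["interests"]]

def discover_pois(engine, context):
    """Step 1: ask the search engine which sources fit the user's context.
    Step 2: fetch POIs from each of those sources."""
    pois = []
    for src in engine.find_sources(context):
        pois.extend(src.fetch_pois(context))
    return pois
```

The client could cache the source list returned in step 1 and re-run only step 2 until the context changes.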

Information Flow
The sequence diagram is an attempt to show what objects we have and how they interact. The user agent starts by querying the experience catalogue for experiences. The query contains the user context (where am I, how do I hold my phone, etc.). The information supplied in the request is enriched through the user profile and the location profile; exactly where this information resides is not really relevant, so I just put it in a profile registry.

From the information provided by the user context, the user profile, and the location profile, the catalogue selects a range of matching experiences and returns the list to the user agent. The user picks one (or more?) and retrieves it from the experience repository.

The user can update her profile herself, and most probably the catalogue maintainers will enrich the profiles with information they gather from user behaviour. Another data flow is an experience creator uploading experiences to the experience repository. I assume that should trigger the repository to update the experience catalogue, although that doesn't appear in the diagram.
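
A rough sketch of the catalogue-side enrichment step in this flow; the registry keys, profile fields, and catalogue schema are invented for illustration:

```python
# Stand-in for the profile registry (keys and fields are assumed).
PROFILE_REGISTRY = {
    "user:alice": {"interests": ["coffee-shops"]},
    "loc:cambridge": {"featured": ["libraries"]},
}

def enrich(context):
    """Merge the raw user context with the user profile and the location
    profile fetched from the profile registry."""
    user = PROFILE_REGISTRY.get("user:" + context["user"], {})
    loc = PROFILE_REGISTRY.get("loc:" + context["location"], {})
    wanted = set(context.get("interests", []))
    wanted |= set(user.get("interests", []))
    wanted |= set(loc.get("featured", []))
    return wanted

def query_catalogue(catalogue, context):
    """Return the experiences whose category matches the enriched context;
    the user agent would then retrieve its pick from the experience repository."""
    wanted = enrich(context)
    return [e for e in catalogue if e["category"] in wanted]
```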

What is absolutely not clear to me is what happens in the box labelled "hic sunt dragones". This is essentially where all the data from POIs, building outlines, map data, etc. enters the location profile in the profile registry. I guess there will be some kind of processes/use cases doing this, but this is where I'm lost.

People
Phil Archer (W3C) is someone with whom we should engage (at least to inform him). [CP to reach out]

Raj Singh (IBM Cloudant) is someone who we want to invite [Josh L talking with him].

Paul Cody (consultant to City of Cambridge) has access to 3D building data. [Josh L to look into data dictionary]

Christian Glahn (HTW Chur, Switzerland)