User Context



During our meeting on Sept 29, 2014, we discussed narrowing our “charter” and drilling down for a time into one component of the larger system that might have general applicability to other systems.

We discussed how the state of the user (including context, especially as it is defined by the user’s physical-world parameters and perceptions of the physical world) needs to become the basis for automated queries (to an AR web service or other types of queries).

We agreed that this topic has been envisaged by many for many years, and it has been particularly popular since mobile platforms became ubiquitous. We all have examples of projects we have heard or learned about, and the literature contains dozens if not hundreds of articles on the use of context for automatic delivery of personalized information (for learning, military, navigation, entertainment). Yet there are relatively few examples of "real" projects that have been commercially deployed. We discussed possible reasons for the lack of deployment.

This page contains additional pointers to projects introduced in discussion on Oct 6 and added subsequently (documenting the e-mail thread).

Semantics of Context
Steve and Josh spoke about the semantics needed to consistently describe the dimensions of the context.
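As a concrete starting point, the dimensions needing consistent semantics could be sketched as a simple record. This is only an illustration; the dimension names used here (location, activity, companions, timestamp) are our own assumptions, not an agreed vocabulary:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical context record: each field is one "dimension" of context.
# The field names and types are placeholders, not a proposed standard.
@dataclass
class Context:
    location: Optional[tuple] = None      # (lat, lon), e.g. from GPS
    activity: Optional[str] = None        # e.g. "walking", "driving"
    companions: list = field(default_factory=list)  # who the user is with
    timestamp: Optional[float] = None     # Unix time of state detection

ctx = Context(location=(47.37, 8.54), activity="walking", timestamp=1412640000.0)
print(ctx.activity)   # -> walking
```

Even a toy schema like this makes the semantic question concrete: which dimensions belong in the record, and what vocabulary describes each one.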

Sensor readings for Context
Christine suggested that there were many sensors for which we already have semantics. Maybe what we need is a way to bring them in with the other components of context.

Temporal elements of Context
The temporal axis for user state is extremely relevant to be able to deliver real time data.


 * What is the history of this user?


 * What is the history of other users who passed through this state?
 * What time is it at the time of the “state detection”?

 These and other questions need to be thought through far more deeply.
 * What is the likely future state of the user?
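The temporal questions above can be made concrete with a minimal sketch of a timestamped user-state history. The class and field names are illustrative assumptions, not part of any agreed model:

```python
import time
from dataclasses import dataclass, field

@dataclass
class UserState:
    state_id: str        # label for the detected state
    detected_at: float   # answers "what time is it at state detection?"

@dataclass
class UserHistory:
    states: list = field(default_factory=list)

    def record(self, state_id):
        """Append a newly detected state with the current time."""
        self.states.append(UserState(state_id, time.time()))

    def last(self):
        # The user's most recent state; a real system might also use the
        # full history to predict the *likely future* state.
        return self.states[-1] if self.states else None

h = UserHistory()
h.record("entered_museum")
print(h.last().state_id)   # -> entered_museum
```

Histories like this, aggregated across users, are what would answer the "other users who passed through this state" question.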

Gartner Group says we're going to have "Cognizant Computing"
In September 2013, Gartner analyst Jessica Ekholm briefed operators about Cognizant Computing.

There are four stages: Synch Me, See Me, Know Me, Be Me.

Synch Me is the expansion of the Personal Cloud: store, synch, share and stream content.

See Me is most concerned with automatic context acquisition. People will become more wary and will only share with companies they trust.

Know Me is the phase in which analytics extract patterns from the "bulk data" that has been gathered; it is about understanding the user's history and preferences. Consumers interact with the data they supply.

Be Me is the final stage, in which the company delivers anticipatory services.

The presentation is here. The replay of the webcast is here.

Context Projects and Resources
To increase our understanding of context, we are exploring what has been published elsewhere. This section collects the items our research gathers.

Ad Auctions and Networks
Every pixel on the user's screen is auctioned based on breadcrumbs that give the auction agency a "context" for a user; that knowledge is sold to an advertiser. [Need Josh to add a reference and text here]

Contextual Gadgets
Google uses contextual gadgets in Gmail to insert advertising into any e-mail, based on keyword search in the body of the e-mail and probably other clues. [Need Steve to add a reference and text here]

Alohar Context Technology
Alohar develops Context Technology. "Alohar’s system is composed of two core technologies: Efficient Persistent Sensing and Automatic Place Detection. It leverages the power of Alohar’s novel real-time sensor-fusion algorithms and context detection systems in the cloud to enable intelligent Context-Aware applications. Context is about where we are, who we are with, what we are doing, how we feel and so much more. Knowing and sharing context helps us feel closer and more connected to our trusted loved ones and gives us peace of mind. Alohar is advancing context awareness by enabling mobile applications which automatically share your context with loved ones while giving you control over your information." This text comes from this page.

Sensors:
 * GPS
 * compass
 * motion
 * gyroscope
 * WiFi
 * Bluetooth
 * temperature
 * brightness
 * proximity

The sensor fusion on a multi-sensor device uses Alohar's "adaptive-sensing algorithm to determine the timing and quantity of sensor data to sample, and is thus optimized for 24-7 persistent sensing with low power consumption."
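Alohar's adaptive-sensing algorithm is proprietary, but the idea it describes (sample less often while readings are stable, more often when they change, to save power) can be sketched roughly. The thresholds and interval bounds below are invented for illustration:

```python
def next_interval(prev_reading, reading, interval,
                  min_s=1.0, max_s=300.0, threshold=0.1):
    """Adapt the sampling interval for 24-7 persistent sensing:
    back off while a sensor is stable, speed up when it changes."""
    if abs(reading - prev_reading) > threshold:
        return max(min_s, interval / 2)   # activity detected: sample faster
    return min(max_s, interval * 2)       # stable: lengthen the interval

interval = 10.0
interval = next_interval(0.0, 0.05, interval)   # stable reading -> 20.0
interval = next_interval(0.05, 0.5, interval)   # big change -> 10.0
print(interval)   # -> 10.0
```

A real implementation would fuse several of the sensors listed above rather than react to a single reading, but the power/latency trade-off is the same.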

This is a video interview with Robert Scoble on 11 April 2012.

Context in xAPI and Mobile Learning
Christian Glahn is a learning-systems researcher at ETHZ who also participates in the xAPI group (xAPI Spec @ ADLnet.gov).

In a message on Oct 7, Christian cited a book chapter he wrote (2014) on a learning-context modelling approach. The complete publication is here (pages 141-156). Look at the figures. The bibliography of his chapter is a GREAT resource.

Google Now
The question about what is available and how Google Now uses context to present information about the current user location is an important one we have yet to answer.

Did anyone do any digging to see what is available about the Field Trip function?

Network Context
The network conditions will have an impact on what the user can receive and send, so they should be taken into account as an element of context. A thread on the W3C mailing list, started by Bryan Sullivan, on the topic of network context elements is found on the Network context wiki page.
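A network-context element could be folded into the same kind of user-state record sketched above. The fields and thresholds here (connection type, bandwidth, latency, and the payload tiers) are assumptions about what such an element might carry, not anything defined by the W3C thread:

```python
from dataclasses import dataclass

@dataclass
class NetworkContext:
    connection: str       # e.g. "wifi", "lte", "3g"
    downlink_kbps: int    # estimated downstream bandwidth
    latency_ms: int       # round-trip latency

def choose_payload(net):
    """Pick a content richness the current network can plausibly sustain.
    The tiers and cutoffs are illustrative only."""
    if net.connection == "wifi" and net.downlink_kbps > 5000:
        return "hd_video"
    if net.downlink_kbps > 500:
        return "images"
    return "text_only"

print(choose_payload(NetworkContext("3g", 200, 300)))   # -> text_only
```

The point is simply that the delivery side of a context-aware service degrades gracefully when the network element of the context is poor.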