Presented for the TTI Vanguard "Shift Happens" conference (http://bit.ly/TTIVshifthappens) visit to PARC, this deck presents a slice of our work in contextual intelligence.
4. Combine context data and cognitive models
Cognitive models:
- Activity model: user goals, beliefs, and desires
- Behavior model: past actions (individual, population)
- Preference model: tastes, interests, expertise
Context data:
- Physical context: location, time, social context
- Electronic context: calendar, calls, email, documents
Applications: advertising, community monitoring, information retrieval, spatial/time patterns, time tracking, power savings, support of science, group default behavior, group coordination, better usability, health monitoring
6. Magitti demo video: http://www2.parc.com/csl/groups/ubicomp/videos/magitti_project_demonstration.wmv
8. Levels of Contextual Intelligence
Information types: email, web pages, plain text, images, audio, forms, office documents
Levels, from lowest to highest:
- Users manually search, sort, sift, and associate to find meaning and make sense.
- Systems filter and sort information based on the user's current context to increase efficiencies of search and discovery.
- Systems extract and present relationships discovered in information to augment human sense-making.
- Systems observe typical action-in-context patterns and deliver information that the user would not otherwise have known to look for.
10. To subscribe to the PARC Innovations Update e-newsletter or blog and other feeds, or to follow us on Twitter, go to www.parc.com/subscribe
For more information, please contact:
Bo Begole, Principal Scientist: Bo.[email_address].com
Lawrence Lee, Business Development: Lawrence.[email_address].com
Editor's Notes
Collect data about people's everyday lives: several data types (e.g., time, location, motion, computer use, cellphone use, object use, people nearby) from several data sources (phones, computers, and fixed infrastructure such as security cameras, polycoms, etc.). Process this data to draw higher-level conclusions such as place visited, activity performed, or project worked on. Exploit the data, e.g., to help people find stores and restaurants they will like, help advertisers target better, help users coordinate with each other more easily, help epidemiologists understand disease sources, and help enterprise workers automatically organize notes.
-------------
Behavior-/activity-aware systems: just being aware of the situation is not sufficient; applications must behave appropriately in the situation. These systems model human behavior and activity (where "behavior" refers to a person's actions or reactions, usually in relation to the environment, and can be conscious or unconscious, overt or covert, and voluntary or involuntary, while "activity" is a conscious, voluntary pursuit). They enable applications that use context-aware data collection, whose primary value centers on the ability to infer, and potentially respond to, present behavior instead of intent. Examples: consumer observation, health monitoring, elder care, interruptibility modeling, in-situ information delivery, security monitoring, "contextual" reminders.
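The "draw higher-level conclusions" step above can be sketched in miniature: raw context signals (time of day, place type) are mapped to an activity label. This is a hypothetical illustration; the rule table, category names, and voting scheme are my assumptions, not the actual models used.

```python
from collections import Counter

# Illustrative mapping from (time of day, place type) to an activity label.
# The entries are assumptions for demonstration purposes only.
RULES = {
    ("morning", "cafe"): "eating",
    ("afternoon", "office"): "working",
    ("evening", "cinema"): "seeing",
    ("evening", "restaurant"): "eating",
}

def infer_activity(observations):
    """Vote over rule matches and return the most common activity label.

    observations: list of (time_of_day, place_type) tuples.
    """
    votes = Counter(
        RULES[(time_of_day, place)]
        for time_of_day, place in observations
        if (time_of_day, place) in RULES
    )
    return votes.most_common(1)[0][0] if votes else "unknown"
```

A real system would replace the rule table with a learned model over many more signal types (motion, computer use, people nearby), but the shape of the inference step is the same: raw observations in, activity label out.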
The fieldwork led to this general architectural vision. I'll show you a more detailed view on the next slide, but I'd like you to understand the high-level picture first. As I mentioned at the start, our goal is to spontaneously provide appropriate recommendations. The system does this by determining both long-term preferences for the genres of things that you like and your immediate situation through contextual data. From contextual data, the system estimates what activities you are likely to perform, filters and ranks the items in its database, and returns a useful list to the client. When the user reviews the list, they may leave feedback, which is later used to update their preferences. Now, to explain how activity is represented, I have to go into a little more detail about the content recommender server.
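The pipeline just described can be sketched as two small functions: an estimated activity distribution weights and ranks candidate items, and later feedback nudges the stored genre preferences. All field names, the scoring formula, and the learning rate are illustrative assumptions, not the system's actual design.

```python
def rank_items(items, activity_probs, prefs):
    """Rank items by P(activity) weighted by the user's genre preference.

    items: list of dicts with "activity" and "genre" keys.
    activity_probs: estimated probability of each activity (from context).
    prefs: long-term genre preferences in [0, 1]; unknown genres are neutral.
    """
    def score(item):
        return activity_probs.get(item["activity"], 0.0) * prefs.get(item["genre"], 0.5)
    return sorted(items, key=score, reverse=True)

def update_prefs(prefs, genre, rating, lr=0.1):
    """Move the stored preference toward an observed rating (0..1)."""
    old = prefs.get(genre, 0.5)
    prefs[genre] = old + lr * (rating - old)
    return prefs
```

The key design point the slide makes is the split of responsibilities: activity estimation handles the immediate situation, while preference updates happen offline from feedback, so the two can evolve independently.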
Next, we rank each piece of content in the repository according to its likely utility, based on a model of the user's personal preferences generated from multiple sources: the user's explicitly stated preferences, the ratings they've made of items they've seen and done, the topics of documents and web pages they've looked at, and what the user has done in the past. This generates a utility score for each item, which determines its ranking in the interface.
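The multi-source utility scoring described above can be sketched as a weighted combination: each evidence source contributes a score in [0, 1], and the weighted sum is the utility that orders the interface. The source names, weights, and neutral default are illustrative assumptions, not the actual model parameters.

```python
# Illustrative weights over the four evidence sources named in the notes;
# the values are assumptions for demonstration, not measured parameters.
SOURCE_WEIGHTS = {
    "stated_preferences": 0.4,
    "item_ratings": 0.3,
    "document_topics": 0.2,
    "past_behavior": 0.1,
}

def utility(source_scores):
    """Weighted sum of per-source scores; missing sources count as neutral 0.5."""
    return sum(w * source_scores.get(src, 0.5) for src, w in SOURCE_WEIGHTS.items())

def rank_by_utility(items):
    """items: list of (name, source_scores) pairs; returns names, best first."""
    return [name for name, scores in sorted(items, key=lambda it: utility(it[1]), reverse=True)]
```

Treating a missing source as neutral rather than zero keeps a new user's items rankable before any ratings or browsing history exist.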