St Andrews HCI Research Group

01 Jun 2012

Aaron invited to present in Haifa, Israel


Professor Aaron Quigley has been invited to attend and present at a research workshop of the Israel Science Foundation on Ubiquitous User Modeling (U2M’2012) – State of the Art and Current Challenges – in Haifa, Israel, later this month (June 25th–28th).


Based on his research with colleagues, current students Umer Rashid and Jakub Dostal, and former student Dr. Mike Bennett, he will present a talk entitled “You lookin’ at me? Eyes, Gaze, Displays and User Interface Personalisation”.


This presentation draws on different yet related strands of research from four papers: three published and one under review at a journal. The published papers appeared at UMAP, AVI, and the Second Workshop on Intelligibility and Control in Pervasive Computing. Aaron’s overarching vision of bridging the digital-physical divide is embodied in this work, aiming for more seamless interaction with computers.


The abstract for this talk is as follows:
Our bodies shape our experience of the world, and our bodies influence what we design. How important are the physical differences between people? Can we model these physiological differences and use the models to adapt and personalise designs, user interfaces and artefacts? Can we model, measure and predict the cost of users altering their gaze in single- or multi-display environments? If so, can we personalise interfaces using this knowledge? What about when the user is moving and the distance between user and screen is varying? Can this be considered a new modality and used to personalise interfaces along with physiological differences and our current gaze? In this talk we seek to answer some of these questions. We introduce an Individual Observer Model of human eyesight, which we use to simulate 3600 biologically valid human eyes. We also report on controlled lab and outdoor experiments with real users, measuring both gaze and distance from the screen in an attempt to quantify the cost of attention switching along with the use of distance as a modality. In each case, for distance, gaze or expected eyesight, we would like to develop models that allow us to make predictions about how easy or hard it is to see visual information and visual designs, and to alter the designs to suit individual users based on their current context.
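To make the idea of distance as a modality a little more concrete, here is a minimal illustrative sketch. It is not the Individual Observer Model described in the talk; it simply uses the standard visual-angle formula to ask whether text of a given size is likely legible at a given viewing distance, so that an interface could, for example, scale its content as the user moves away. The function names and the legibility threshold are assumptions for illustration only.

```python
import math

# Illustrative sketch only: standard visual-angle geometry, NOT the
# Individual Observer Model from the talk. The threshold below is a
# rough placeholder, not an empirical value.

def visual_angle_deg(object_height_m: float, distance_m: float) -> float:
    """Visual angle (in degrees) subtended by an object of the given
    height viewed at the given distance: theta = 2 * atan(h / (2d))."""
    return math.degrees(2 * math.atan(object_height_m / (2 * distance_m)))

def legible(text_height_m: float, distance_m: float,
            min_angle_deg: float = 0.2) -> bool:
    """Crude legibility test: is the text's visual angle above an
    assumed minimum threshold?"""
    return visual_angle_deg(text_height_m, distance_m) >= min_angle_deg

# Example: 5 mm text viewed from 0.5 m vs. 3 m. An adaptive interface
# might enlarge the text once legibility fails at the larger distance.
for d in (0.5, 3.0):
    angle = visual_angle_deg(0.005, d)
    print(f"{d} m: {angle:.2f} degrees, legible={legible(0.005, d)}")
```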

Prior to this workshop, Professor Quigley was asked to comment on some of the grand challenges he saw for User Modelling and Ubiquitous Computing. The following are the challenges he posed:

  1. Are user models and context data so fundamental that future UbiComp operating systems will need to have them built in as first-order features of the OS? Or, in your opinion, is this the wrong approach? Discuss.
  2. There are many facets to a ubiquitous computing system, from low-level sensor technologies in the environment, through the collection, management and processing of context data, to the middleware required to enable the dynamic composition of devices and services envisaged. Where do User Models reside within this? Are they needed only occasionally (or not at all) for some services or experiences, or needed for all?
  3. Ubicomp is a model of computing in which computation is everywhere and computer functions are integrated into everything. It will be built into the basic objects, environments and activities of our everyday lives in such a way that no one will notice its presence. If so, how do we know what the system knows, assumes or infers about us in its decision making?
  4. Ubicomp represents an evolution from the notion of a computer as a single device to the notion of a computing space comprising personal and peripheral computing elements and services, all connected and communicating as required; in effect, “processing power so distributed throughout the environment that computers per se effectively disappear”, the so-called Calm Computing. The advent of ubicomp does not mean the demise of the desktop computer in the near future. Is Ubiquitous User Modelling the key problem to solve in moving people from desktop/mobile computing into UbiComp use scenarios? If not, what is?
  5. Context data can be provided, sensed or inferred. Context includes information from the person (physiological state), the sensed environment (environmental state) and the computational environment (computational state) that can be provided to alter an application’s behaviour. How much or how little of this should be incorporated into individual UbiComp User Models?