St Andrews HCI Research Group

News

Two CHI 2013 workshops


Two SACHI members, Per Ola Kristensson and Aaron Quigley, are organising, with other colleagues, two workshops at CHI 2013, the ACM SIGCHI Conference on Human Factors in Computing Systems, in Paris in April 2013. The workshops are Blended Interaction: Envisioning Future Collaborative Interactive Spaces and Grand Challenges in Text Entry.
Once the workshop websites are online, we will link to them here.

UMUAI special issue on Ubiquitous and Pervasive User Modelling


Aaron Quigley, Judy Kay and Tsvi Kuflik are guest editors for a UMUAI special issue on Ubiquitous and Pervasive User Modelling. You can see the full call for papers for this special issue here.

Aaron Quigley, Inaugural Lecture on HCI


Today Professor Aaron Quigley will be giving his Inaugural Lecture in School III.
The abstract for his talk is as follows: Billions of people use interconnected computers and have come to rely on the computational power they afford to support their lives and advance our global economy and society. However, how we interact with this computation is often limited to little “windows of interaction” with mobile and desktop devices that aren’t fully suited to their contexts of use. Consider the surgeon operating, the child learning to write or the pedestrian navigating a city, and ask whether the current devices and forms of human-computer interaction are as fluent as they might be. I contend there is a division between the physical world in which we live our lives and the digital space where the power of computation currently resides. Many day-to-day tasks and even forms of work are poorly supported by access to appropriate digital information. In this talk I will provide an overview of the research I have been pursuing to bridge this digital-physical divide, along with my future research plans. The talk is framed around three interrelated topics: Ubiquitous Computing, Novel Interfaces and Visualisation. Ubiquitous Computing is a model of computing in which computation is everywhere and computer functions are integrated into everything; everyday objects are sites for sensing, input, processing and output. Novel Interfaces draw the user interface closer to the physical world, in terms of both input to the system and output from it. Finally, Visualisation is the use of computer-supported interactive visual representations of data to amplify cognition. In this talk I will demonstrate that advances in human-computer interaction require insights and research from across the sciences and humanities if we are to bridge this digital-physical divide.

Aaron delivers an invited seminar at the University of Zurich


Professor Quigley is presenting a seminar at the University of Zurich, at the invitation of Dr Elaine Huang.
Seminar abstract: Our bodies shape our experience of the world, and our bodies influence what we design. How important are the physical differences between people? Can we model the physiological differences and use the models to adapt and personalise designs, user interfaces and artefacts? Can we model, measure and predict the cost of users altering their gaze in single- or multi-display environments? If so, can we personalise interfaces using this knowledge? What about when the user is moving, and while the distance between user and screen is varying? Can this be considered a new modality, and used to personalise interfaces along with physiological differences and current gaze? In this talk we seek to answer some of these questions. We introduce an Individual Observer Model of human eyesight, which we use to simulate 3600 biologically valid human eyes. We also report on controlled lab and outdoor experiments with real users, measuring both gaze and distance from the screen in an attempt to quantify the cost of attention switching, along with the use of distance as a modality. In each case, for distance, gaze or expected eyesight, we would like to develop models that allow us to predict how easy or hard it is to see visual information and visual designs, and to alter the designs to suit individual users based on their current context.
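For a flavour of the kind of distance-based prediction the abstract alludes to, the sketch below estimates whether on-screen content is legible from its visual angle. This is a minimal illustration under our own assumptions: the visual-angle formula is standard geometry, but the 0.3-degree threshold and the function names are illustrative choices, not taken from the talk or from the Individual Observer Model.

    import math

    def visual_angle_deg(size_m, distance_m):
        # Visual angle subtended by an object of height size_m (metres)
        # viewed from distance_m metres: 2 * atan(size / (2 * distance)).
        return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

    # Hypothetical legibility threshold in degrees (illustrative only; a
    # per-observer model would derive this from simulated eyes instead).
    LEGIBLE_DEG = 0.3

    def legible(text_height_m, distance_m):
        return visual_angle_deg(text_height_m, distance_m) >= LEGIBLE_DEG

    # 5 mm text at arm's length (~0.5 m) vs. across the room (3 m):
    print(legible(0.005, 0.5))  # True  (about 0.57 degrees)
    print(legible(0.005, 3.0))  # False (about 0.10 degrees)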

Welcome to Uta Hinrichs


Welcome to Uta Hinrichs, who has joined the SACHI group as a Research Fellow from the University of Calgary, Canada. Uta holds a Diplom (equivalent to an MSc) in Computational Visualistics from the University of Magdeburg in Germany and is in the process of finishing her PhD in Computer Science with a specialization in Computational Media Design. Uta’s PhD research, which she conducted in the InnoVis Group at the University of Calgary, focuses on how to support open-ended information exploration on large displays in public exhibition spaces, combining information visualization with direct-touch interaction techniques. As part of this research, she has designed and studied large display installations in a museum, an art gallery, a library and an aquarium.
To learn more about Uta’s work, see her SACHI biography page, or visit her own website for an overview of her previous research projects. Everyone in SACHI welcomes Uta!

New SICSA role for Aaron



As part of his work in the School of Computer Science, from the start of August 2012 Aaron is joining the Scottish Informatics and Computer Science Alliance (SICSA) executive as Deputy Director for Knowledge Exchange, for two years. As a result, he is stepping down as SICSA theme leader for Multimodal Interaction. Aaron has enjoyed his time working with Professor Stephen Brewster and is looking forward to joining the executive next month.

Ubiquitous User Modeling – U2M'2012


This week Aaron has been attending a research workshop of the Israel Science Foundation on Ubiquitous User Modeling (U2M’2012) – State of the art and current challenges, in Haifa, Israel. Aaron’s talk at this event was entitled Eyes, Gaze, Displays: User Interface Personalisation “You Lookin’ at me?”. In it he covered work with Mike Bennett, Umar Rashid, Jakub Dostal, Miguel A. Nacenta and Per Ola Kristensson from SACHI. The talk was a good way to show the interlocking and related strands of research going on in SACHI.
His talk drew on a number of recent papers, and the alternative yet related viewpoints in this work made for a stimulating presentation and fruitful discussion with the international audience.

Pervasive 2012


This week Aaron Quigley and Tristan Henderson attended Pervasive 2012, the Tenth International Conference on Pervasive Computing, at Newcastle University.
On Monday Aaron attended Pervasive Intelligibility, the Second Workshop on Intelligibility and Control in Pervasive Computing. Here he presented a paper entitled Designing Mobile Computer Vision Applications for the Wild: Implications on Design and Intelligibility (PDF) by Per Ola Kristensson, Jakub Dostal and Aaron Quigley. Later, he was a panellist with Judy Kay and Simone Stumpf, discussing with all participants the research challenges of intelligibility in pervasive computing.
On Tuesday Tristan attended the First Workshop on Recent Advances in Behavior Prediction and Pro-active Pervasive Computing, where he presented the paper Predicting location-sharing privacy preferences in social network applications by Greg Bigwood, Fehmi Ben Abdesslem and Tristan Henderson.
On Tuesday Aaron chaired the Doctoral Consortium with Elaine Huang from the University of Zurich, with five panellists and nine students. The panellists included Adrian Friday (Lancaster University, UK), Jin Nakazawa (Keio University, Japan) and AJ Brush (Microsoft Research, Seattle, USA).
The Pervasive 2012 doctoral consortium provided a collegial and supportive forum in which PhD students could present and defend their doctoral research-in-progress with constructive feedback and discussion. The consortium was guided by a panel of experienced researchers and practitioners with both academic and industrial experience. It offered students a valuable opportunity to receive high-quality feedback and fresh perspectives from recognised international experts in the field, and to engage with other senior doctoral students.
The day ended with a career Q&A session with an extended panel including Tristan Henderson from SACHI and René Mayrhofer. Following this, the panellists and students were able to have dinner together to continue active research and career discussions.
Along with his work as a member of the joint steering committee of Pervasive and UbiComp, Aaron was the session chair for the HCI session on Thursday.

Aaron invited to present in Haifa, Israel


Professor Aaron Quigley has been invited to attend and present at a research workshop of the Israel Science Foundation on Ubiquitous User Modeling (U2M’2012) – State of the art and current challenges, in Haifa, Israel later this month (June 25th – June 28th).

Based on his research with colleagues, current students Umar Rashid and Jakub Dostal, and former student Dr Mike Bennett, he will be presenting a talk entitled “You lookin’ at me? ‘Eyes, Gaze, Displays and User Interface Personalisation’”.

This presentation draws on different yet related strands of research from four research papers: three published and one under review for a journal. The three published papers have appeared in UMAP, AVI and the Second Workshop on Intelligibility and Control in Pervasive Computing. Aaron’s overarching vision of bridging the digital-physical divide is embodied in this work, with the aim of more seamless interaction with computers.

The abstract for this talk is as follows:
Our bodies shape our experience of the world, and our bodies influence what we design. How important are the physical differences between people? Can we model the physiological differences and use the models to adapt and personalise designs, user interfaces and artefacts? Can we model, measure and predict the cost of users altering their gaze in single- or multi-display environments? If so, can we personalise interfaces using this knowledge? What about when the user is moving, and while the distance between user and screen is varying? Can this be considered a new modality, and used to personalise interfaces along with physiological differences and current gaze? In this talk we seek to answer some of these questions. We introduce an Individual Observer Model of human eyesight, which we use to simulate 3600 biologically valid human eyes. We also report on controlled lab and outdoor experiments with real users, measuring both gaze and distance from the screen in an attempt to quantify the cost of attention switching, along with the use of distance as a modality. In each case, for distance, gaze or expected eyesight, we would like to develop models that allow us to predict how easy or hard it is to see visual information and visual designs, and to alter the designs to suit individual users based on their current context.

Prior to this workshop, Professor Quigley was asked to comment on some of the grand challenges he saw for User Modelling and Ubiquitous Computing. The following are the challenges he posed:

  1. Are user models and context data so fundamental that future UbiComp operating systems need to have them built in as first order features of the OS? Or in your opinion is this the wrong approach? Discuss.
  2. There are many facets of a ubiquitous computing system, from low-level sensor technologies in the environment, through the collection, management and processing of context data, to the middleware required to enable the dynamic composition of devices and services envisaged. Where do User Models reside within this? Are they something needed only occasionally (or not at all) for some services or experiences, or needed for all?
  3. Ubicomp is a model of computing in which computation is everywhere and computer functions are integrated into everything. It will be built into the basic objects, environments and activities of our everyday lives in such a way that no one will notice its presence. If so, how do we know what the system knows, assumes or infers about us in its decision making?
  4. Ubicomp represents an evolution from the notion of a computer as a single device, to the notion of a computing space comprising personal and peripheral computing elements and services all connected and communicating as required; in effect, “processing power so distributed throughout the environment that computers per se effectively disappear” or the so-called Calm Computing. The advent of ubicomp does not mean the demise of the desktop computer in the near future. Is Ubiquitous User Modelling the key problem to solve in moving people from desktop/mobile computing into UbiComp use scenarios? If not, what is?
  5. Context data can be provided, sensed or inferred. Context includes information from the person (physiological state), the sensed environment (environmental state) and the computational environment (computational state) that can be provided to alter an application’s behaviour. How much or how little of this should be incorporated into individual UbiComp User Models?

SACHI at International Working Conference on Advanced Visual Interfaces


This week three members of SACHI, Aaron Quigley, Miguel Nacenta and Umar Rashid, are attending the 11th International Working Conference on Advanced Visual Interfaces (AVI 2012) in Italy. “AVI 2012 is held on the island of Capri (Naples), Italy from May 21 to 25, 2012. Started in 1992 in Roma, and held every two years in different Italian towns, the Conference traditionally brings together experts in different areas of computer science who have a common interest in the conception, design and implementation of visual and, more generally, perceptual interfaces.”
We are presenting two full papers:
FatFonts: Combining the symbolic and visual aspects of numbers, by Miguel Nacenta, Uta Hinrichs and Sheelagh Carpendale.
Abstract: “In this paper we explore numeric typeface design for visualization purposes. We introduce FatFonts, a technique for visualizing quantitative data that bridges the gap between numeric and visual representations. FatFonts are based on Arabic numerals but, unlike regular numeric typefaces, the amount of ink (dark pixels) used for each digit is proportional to its quantitative value. This enables accurate reading of the numerical data while preserving an overall visual context. We discuss the challenges of this approach that we identified through our design process and propose a set of design goals that include legibility, familiarity, readability, spatial precision, dynamic range, and resolution. We contribute four FatFont typefaces that are derived from our exploration of the design space that these goals introduce. Finally, we discuss three example scenarios that show how FatFonts can be used for visualization purposes as valuable representation alternatives.”
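To make the ink-proportionality idea concrete, here is a small toy sketch of our own in Python. It is not the published FatFonts typefaces (which reshape the numeral glyphs themselves); it only shows the same principle: each digit is drawn as a fixed-size character block whose number of inked characters is proportional to the digit’s value, so the overall darkness doubles as a visual encoding while the numeral stays readable.

    CELL = 3  # each digit occupies a CELL x CELL block of characters

    def fat_digit(d):
        # Ink proportional to value: digit d fills d of the 9 slots with
        # its own numeral; the remaining slots are padded with light dots.
        slots = CELL * CELL
        inked = round(d / 9 * slots)
        chars = [str(d)] * inked + ['.'] * (slots - inked)
        return [''.join(chars[r * CELL:(r + 1) * CELL]) for r in range(CELL)]

    def render(grid):
        # Lay the per-digit blocks out row by row so the whole grid can be
        # read both as numbers and as a coarse density map of the data.
        lines = []
        for row in grid:
            blocks = [fat_digit(d) for d in row]
            for r in range(CELL):
                lines.append(' '.join(block[r] for block in blocks))
            lines.append('')
        return '\n'.join(lines)

    # Larger values read as darker cells, yet each cell still names its digit.
    print(render([[0, 2, 5],
                  [7, 9, 1]]))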
Read the FatFonts paper here; FatFonts also featured in the New Scientist.
and
The cost of display switching: A comparison of mobile, large display and hybrid UI configurations, by Umar Rashid, Miguel Nacenta and Aaron Quigley.
Abstract: “Attaching a large external display can help a mobile device user view more content at once. This paper reports on a study investigating how different configurations of input and output across displays affect performance, workload and subjective preferences in map, text and photo search tasks. Experimental results show that a hybrid configuration where visual output is distributed across displays is worst or equivalent to worst in all tasks. A mobile device-controlled large display configuration performs best in the map search task and equal to best in text and photo search tasks (tied with a mobile-only configuration). After conducting a detailed analysis of the performance differences across different UI configurations, we give recommendations for the design of distributed user interfaces.”
Read the Cost of Display Switching paper here.
Along with our colleagues in Nottingham and Birmingham, we are chairing and organising the Workshop on Infrastructure and Design Challenges of Coupled Display Visual Interfaces (PPD’12). The proceedings can be downloaded here. Finally, Aaron is the session chair for the Augmented Reality/Virtual Reality papers at AVI.