St Andrews HCI Research Group

News

Helen Ai He, One Size Does Not Fit All – Applying the "Stages of Change" Model to Eco-feedback Technology Design


Speaker: Helen Ai He, University of Calgary, Canada
Date/Time: 1-2pm April 10th, 2012
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
Global warming, and the climate change it induces, is an urgent global issue. One remedy, and the focus of this talk, is motivating people to adopt eco-behaviors. A common approach is the development of technologies that provide real-time feedback on energy usage (e.g. in the form of watts, monetary cost, or carbon emissions).
However, there is one problem – most technologies use a "one-size-fits-all" solution, providing the same feedback to differently motivated individuals at different stages of readiness, willingness, and ability to change. I synthesize a wide range of motivational psychology literature to develop a motivational framework based on the Stages of Change (aka Transtheoretical) Model. If you are at all interested in motivation, behaviour change, or designing technologies to motivate behaviour change, this talk may be useful for you.
About Helen:
Helen Ai He completed her Masters in Computer Science (specializing in HCI) at the University of Calgary, Canada, under the supervision of Dr. Saul Greenberg and Dr. Elaine May Huang.
She worked as a software developer at SMART Technologies for a year and a half, and plans to begin an HCI PhD in September 2012. She is particularly interested in topics such as personal informatics, cross-cultural research, technology design for developing regions, and sustainable interaction design. Aside from research, she enjoys karate, climbing, artwork, and eating!

Sriram Subramanian, Investigating New Forms of Interactive Systems


Speaker: Sriram Subramanian, University of Bristol
Date/Time: 1-2pm March 6th, 2012
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
The talk will present some of the recent research endeavours of the Bristol Interaction and Graphics group. The group has been exploring various technical solutions to create the next generation of touch interfaces that support multi-point haptic feedback as well as dynamic allocation of views to different users. The talk will rely heavily on videos of ongoing work to illustrate and describe our systems. I expect the talk to be accessible to all computer scientists and even to the lay public, so I particularly welcome discussion, feedback, and critique from the community.
About Sriram:
Dr. Sriram Subramanian is a Reader at the University of Bristol with research interests in Human-Computer Interaction (HCI). He is specifically interested in new forms of physical input. Before joining the University of Bristol, he worked as a senior scientist at Philips Research Netherlands and as an Assistant Professor in the Department of Computer Science at the University of Saskatchewan, Canada. You can find more details of his research interests at his group's page: http://big.cs.bris.ac.uk

SACHI member gives two presentations at the 17th ACM Conference on Intelligent User Interfaces


Per Ola Kristensson will give two presentations at IUI 2012: 17th ACM International Conference on Intelligent User Interfaces in Lisbon, Portugal on February 14-17, 2012.

The first presentation is on Wednesday and is entitled "Performance comparisons of phrase sets and presentation styles for text entry evaluations". This paper describes how we used crowdsourcing to empirically compare five publicly available phrase sets in two large-scale text entry experiments. We also investigated the impact of asking participants to memorise phrases before writing them versus allowing participants to see the phrase during text entry. This paper is co-authored with Keith Vertanen, an Assistant Professor of Computer Science at Montana Tech in the USA.
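For readers new to text entry evaluations, the two measures such experiments typically report are entry rate and error rate. The Python sketch below shows the standard formulas from the text entry literature (words per minute, where one "word" is five characters, and the minimum-string-distance error rate); it illustrates common practice in the field, not code from the paper.

```python
def msd(a: str, b: str) -> int:
    """Minimum string distance (Levenshtein) via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def entry_metrics(presented: str, transcribed: str, seconds: float):
    """Entry rate in WPM (one word = 5 characters) and MSD error rate."""
    wpm = ((len(transcribed) - 1) / seconds) * (60 / 5)
    error_rate = msd(presented, transcribed) / max(len(presented), len(transcribed))
    return wpm, error_rate

# e.g. a participant transcribing one phrase in 9.5 seconds:
print(entry_metrics("the quick brown fox", "the quick brwn fox", 9.5))
```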

Gesture recognition via the Kinect


The second presentation is on Thursday and is entitled “Continuous recognition of one-handed and two-handed gestures using 3D full-body motion tracking sensors”. This paper is co-authored with SACHI members Thomas Nicholson and Aaron Quigley. In this paper we present a new bimanual gesture interface for the Kinect. Among other things, our evaluation shows that the system recognises one-handed and two-handed gestures with an accuracy of 92.7%–96.2%.
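The paper's recognizer is not reproduced here, but a simple template matcher conveys the flavour of trajectory-based gesture recognition: the Kinect reports 3D joint positions each frame, so a hand path can be resampled to a fixed number of evenly spaced points and compared against stored templates by average point-wise distance. Everything below, including the gesture names, is an illustrative sketch rather than the authors' method; a production system would also normalise for translation and scale.

```python
import math

def resample(points, n=32):
    """Resample a trajectory (list of 2D/3D tuples) to n points evenly
    spaced along its path, so fast and slow performances compare fairly."""
    pts = [tuple(map(float, p)) for p in points]
    interval = sum(math.dist(a, b) for a, b in zip(pts, pts[1:])) / (n - 1)
    out, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= interval:
            t = (interval - acc) / d
            q = tuple(p + t * (c - p) for p, c in zip(pts[i - 1], pts[i]))
            out.append(q)
            pts.insert(i, q)  # resume measuring from the interpolated point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:  # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def classify(trajectory, templates):
    """Return the name of the stored template closest to the input trajectory."""
    sample = resample(trajectory)
    def dist(name):
        tpl = resample(templates[name])
        return sum(math.dist(a, b) for a, b in zip(sample, tpl)) / len(sample)
    return min(templates, key=dist)

# Hypothetical one-handed templates recorded as (x, y, z) hand positions:
templates = {"swipe_right": [(0, 0, 0), (1, 0, 0), (2, 0, 0)],
             "raise_hand":  [(0, 0, 0), (0, 1, 0), (0, 2, 0)]}
print(classify([(0, 0, 0), (0.9, 0.1, 0), (2.1, 0, 0)], templates))
```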
Per Ola will also introduce the keynote speaker Chris Bishop from Microsoft Research Cambridge on Thursday. Chris will talk about “…the crucial role played by machine learning in the Kinect 3D full-body motion sensor, which has recently become the fastest-selling consumer electronics device in history.”
Per Ola is a Workshop Co-Chair for IUI 2012 together with Andreas Butz, a Professor of Computer Science at the University of Munich in Germany.

Annalu Waller, Augmentative and Alternative Communication across the Lifespan of Individuals with Complex Communication Needs


Speaker: Annalu Waller, University of Dundee
Date/Time: 1-2pm February 21st, 2012
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
Augmentative and alternative communication (AAC) attempts to augment natural speech, or to provide alternative ways to communicate, for people with limited or no speech. Technology has played an increasing role in AAC. At the simplest level, people with complex communication needs (CCN) can cause a prestored message to be spoken by activating a single switch. At the most sophisticated level, literate users can generate novel text. Although some individuals with CCN become effective communicators, most do not – they tend to be passive communicators, responding mainly to questions or prompts at a one- or two-word level. Conversational skills such as initiation, elaboration and storytelling are seldom observed.
One reason for the reduced levels of communicative ability is that AAC technology provides the user with a purely physical link to speech output. The user is required to have sufficient language abilities and physical stamina to translate what they want to say into the code sequence of operations needed to produce the desired output. Instead of placing all the cognitive load on the user, AAC devices can be designed to support the cognitive and language needs of individuals with CCN, taking into account the need to scaffold communication as children develop into adulthood. A range of research projects, including systems to support personal narrative and language play, will be used to illustrate the application of Human Computer Interaction (HCI) and Natural Language Generation (NLG) in the design and implementation of electronic AAC devices.
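As a concrete illustration of the single-switch access mentioned above, the sketch below implements row-column scanning, a common AAC selection technique: groups of messages are highlighted in turn, and the user's one switch first selects a row and then an item within it. The message grid and timing are hypothetical, not taken from any of the systems discussed in the talk.

```python
import itertools
import time

# A hypothetical message grid; real AAC devices are configured per user.
GRID = [["yes", "no", "help"],
        ["I'm hungry", "I'm tired", "thank you"]]

def scan(switch_pressed, dwell=1.0):
    """Row-column scanning with a single switch.

    `switch_pressed` is any zero-argument callable that returns True when
    the user activates their switch during the current highlight interval.
    Runs until a selection is made.
    """
    for row in itertools.cycle(GRID):          # highlight each row in turn
        print("row:", " | ".join(row))
        time.sleep(dwell)
        if switch_pressed():                   # first press: choose the row
            for item in itertools.cycle(row):  # then scan within the row
                print("item:", item)
                time.sleep(dwell)
                if switch_pressed():           # second press: speak the item
                    return item

# Demo with a scripted sequence of switch activations:
presses = iter([False, True, False, True])
print(scan(lambda: next(presses), dwell=0.0))
```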
About Annalu:
Dr Annalu Waller is a Senior Lecturer in the School of Computing at the University of Dundee. She has worked in the field of Augmentative and Alternative Communication (AAC) since 1985, designing communication systems for and with nonspeaking individuals. She established the first AAC assessment and training centre in South Africa in 1987 before coming to Dundee in 1989. Her PhD developed narrative technology to support adults with acquired dysphasia following stroke. Her primary research areas are human-computer interaction, natural language generation, personal narrative and assistive technology. In particular, she focuses on empowering end users, including disabled adults and children, by involving them in the design and use of technology. She manages a number of interdisciplinary research projects with industry and practitioners from rehabilitation engineering, special education, speech and language therapy, nursing and dentistry. She is on the editorial boards of several academic journals and sits on the boards of a number of national and international organisations representing disabled people.

Adrian Friday, Ubicomp as a Lens on Energy Related Practice in Shared Student Accommodation


Speaker: Adrian Friday, University of Lancaster
Date/Time: 4-5pm January 9th, 2012
Location: 1.33a Jack Cole, University of St Andrews
Abstract:

Previous work in eco-feedback has focused either on new sensing technologies, or on people's responses to specific feedback devices and other interventions placed in their homes. We take a comprehensive approach, combining a large-scale deployment of off-the-shelf sensors with face-to-face interviews, to account both for the energy that specific appliances draw and for the occupant practices that rely on the services these appliances provide. We performed a study in four student flats (each with 7–8 occupants) over a twenty-day period, collecting data from over two hundred sensors and conducting interviews with 11 participants. We build an account of life in the flats, and how that connects to the energy consumed. Our goal is to understand the challenges in accounting for both resources and practices at home, and what these challenges mean for the design of future feedback devices and interventions aimed at reducing energy consumption. In this talk we share results of our recent analysis and our experiences of conducting Ubicomp deployments using off-the-shelf sensors to study energy use.
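To make the "accounting for resources" half of this concrete, here is a minimal sketch of the arithmetic behind plug-level energy accounting: instantaneous power samples from an off-the-shelf sensor are integrated into kilowatt-hours per appliance. The appliance names and readings below are invented for illustration, not data from the study.

```python
def kwh(samples):
    """Integrate (timestamp_seconds, watts) samples into kilowatt-hours
    using the trapezoidal rule."""
    joules = sum(
        (t1 - t0) * (w0 + w1) / 2.0
        for (t0, w0), (t1, w1) in zip(samples, samples[1:])
    )
    return joules / 3.6e6  # 1 kWh = 3.6 million joules

# Hypothetical plug-sensor readings: (seconds since start, watts).
readings = {
    "kettle": [(0, 0), (60, 2000), (180, 2000), (240, 0)],  # a short boil
    "fridge": [(0, 90), (86400, 90)],                       # steady draw all day
}
per_appliance = {name: kwh(s) for name, s in readings.items()}
print(per_appliance)  # kettle ~0.1 kWh, fridge ~2.16 kWh
```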

Mark Wright, Design Informatics: Co-creation of Informatic Media



Speaker: Mark Wright, University of Edinburgh
Date/Time: 1-2pm December 6th, 2011
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
The cultural significance of Informatics is that it provides new forms of digital embodiment, which lead to the evolution of practices and meaning.
Engaging Informatics with Design is a key approach to exploring this relationship between technology and culture. This talk outlines how such an approach was developed over an extended period of engagement with the arts and humanities and with practitioners in the digital creative industries.
Two projects in particular are used to illustrate this process: Tacitus and Spellbinder.
Tacitus was a major AHRC/EPSRC project which explored tacit knowledge in designers and makers and how this could be supported by computer design systems.
A novel haptic design system was developed which demonstrated significant improvements in ease of use.
Spellbinder was a new form of mobile application based on image matching using camera phones. Funded by the "Designing for the 21st Century" AHRC/EPSRC initiative, we explored the potential of this new medium through an iterative series of workshops, intense design interventions and reflection, which we termed Research by Design.

Miguel Nacenta, Perspective and Spatial Relationships for Interface Design


Miguel Nacenta showing Radar


Speaker: Miguel Nacenta, SACHI, University of St Andrews
Date/Time: 1-2pm November 22nd, 2011
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
Our daily activities are continuously mediated by the space that we occupy; we face people to talk to them, sit in circular arrangements for group discussions, and write documents facing monitors. However, current interfaces generally make strong assumptions about where we are (e.g., monitors assume we are perpendicular and in front) or outright ignore important aspects of our spatial environments (e.g., several people editing a document). In my research I deal with the perceptual, cognitive and social aspects of space relationships that will shape the design of next generation interfaces. In this talk I will discuss projects that address questions such as: what happens when you look at a display from the “wrong” place? What forms of input are most efficient to interact with displays from different locations? How does having a private display affect our awareness of the work of others?

Aaron Quigley, Creating Personalized Digital Human Models of Perception for Visual Analytics


Speaker: Aaron Quigley, SACHI, University of St Andrews
Date/Time: 1-2pm November 15th, 2011
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
Our bodies shape our experience of the world, and our bodies influence what we design. How important are the physical differences between people? Can we model the physiological differences and use the models to adapt and personalize designs, user interfaces and artifacts? Within many disciplines Digital Human Models and Standard Observer Models are widely used and have proven to be very useful for modeling users and simulating humans. In this paper, we create personalized digital human models of perception (Individual Observer Models), particularly focused on how humans see. Individual Observer Models capture how our bodies shape our perceptions, and they are useful for adapting and personalizing user interfaces and artifacts to suit individual users' bodies and perceptions. We introduce and demonstrate an Individual Observer Model of human eyesight, which we use to simulate 3600 biologically valid human eyes. An evaluation of the simulated eyes finds that they see eye charts the same way humans do. We also demonstrate the Individual Observer Model successfully predicting how easy or hard it is to see visual information and visual designs. The ability to predict and adapt visual information to maximize its effectiveness is an important problem in visual design and analytics.
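The Individual Observer Model itself is not reproduced here, but a toy acuity calculation shows the kind of prediction such a model makes: whether a visual element subtends a large enough angle at the eye for a given observer to resolve it. The 1-arcminute resolvable detail of a 20/20 observer and the 5-arcminute Snellen letter height are standard optometry facts; the sample sizes and distances are made up.

```python
import math

def visual_angle_arcmin(size_mm: float, distance_mm: float) -> float:
    """Visual angle subtended by an object, in minutes of arc."""
    return math.degrees(2 * math.atan(size_mm / (2 * distance_mm))) * 60

def legible(char_height_mm: float, distance_mm: float, acuity_arcmin: float = 1.0) -> bool:
    """Snellen-style check: a letter's strokes are about 1/5 of its height,
    so the letter must subtend roughly 5x the observer's resolvable detail."""
    return visual_angle_arcmin(char_height_mm, distance_mm) >= 5 * acuity_arcmin

# 3 mm text viewed from 600 mm subtends ~17 arcmin: legible for a 20/20 eye,
# but not for a simulated eye that resolves only 4 arcmin of detail.
print(legible(3, 600, acuity_arcmin=1.0))  # True
print(legible(3, 600, acuity_arcmin=4.0))  # False
```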
About Aaron:
In this talk Professor Aaron Quigley will present the paper on Creating Personalized Digital Human Models of Perception for Visual Analytics that he presented at the User Modeling, Adaptation and Personalization (UMAP) 2011 conference on July 12th in Barcelona, Spain. The work was carried out with his former PhD student Dr. Mike Bennett, now a postdoctoral fellow in the Department of Psychology at Stanford University.
Professor Aaron Quigley is the Chair of Human Computer Interaction in the School of Computer Science at the University of St Andrews. He is the director of SACHI and his appointment is part of SICSA, the Scottish Informatics and Computer Science Alliance. Aaron’s research interests include surface and multi-display computing, human computer interaction, pervasive and ubiquitous computing and information visualisation.

Anke Brock, Touch the Map: Making Maps Accessible for the Blind


Speaker: Anke Brock, IRIT research lab, Toulouse, France
Date/Time: 1-2pm October 27th, 2011
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
Human navigation is a very complex phenomenon that relies mainly on vision. Indeed, vision provides the pedestrian with landmarks and dynamic cues (e.g. optic flow) that are essential for updating position and orientation, estimating distance, etc. Hence, for a blind person, navigating in a familiar environment is not straightforward, and it becomes especially complicated in unknown environments. Exploration of geographic maps at home (for travel preparation) or even on mobile phones (for guidance) may provide valuable assistance. As maps are inherently visual, and hence inaccessible to the blind, multimodal interactive maps represent a promising solution. Multimodal interactive maps are based on a combination of multi-touch devices and tactile (e.g. embossed) paper maps. However, the design and realization of interactive maps for the blind pose several challenges, for example making multi-touch surfaces accessible to blind users. This talk will present the concept and design of the maps, the work with blind users, the technical challenges, and the psychological background.
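A minimal sketch of the interaction loop behind such a map: the multi-touch surface reports finger coordinates, which are hit-tested against regions registered to the embossed paper overlay, and the matching label is spoken aloud. The region names, coordinates, and print-based speech stand-in below are all illustrative assumptions, not details of Anke's system.

```python
def speak(text: str) -> None:
    """Stand-in for a real text-to-speech engine."""
    print(f"[TTS] {text}")

# Hypothetical map regions as (x_min, y_min, x_max, y_max) rectangles,
# registered to features on the tactile paper overlay.
REGIONS = {
    "train station": (120, 80, 200, 140),
    "market square": (210, 60, 320, 180),
}

def on_touch(x: float, y: float) -> None:
    """Look up which tactile map element the finger is on and announce it."""
    for label, (x0, y0, x1, y1) in REGIONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            speak(label)
            return

on_touch(150, 100)  # -> [TTS] train station
```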
About Anke:
Anke Brock is currently a PhD candidate in Human-Computer Interaction at the IRIT research lab in Toulouse, France. She previously worked for several years as a research engineer on navigation and driver assistance systems at Bosch in Hildesheim, Germany. Anke obtained her master's degree in Human-Computer Interaction in September 2010 from the University of Toulouse. Her research interests include accessibility of technology for the blind, interactive maps, tabletops, multimodal interaction, spatial cognition and haptic exploration, as well as accessibility of the participatory design process.

SACHI research on the university front page


The University of St Andrews has a story about how SACHI researcher Per Ola Kristensson and his collaborator Keith Vertanen used crowdsourcing, Twitter, and other online sources to create better statistical language models for AAC devices. These devices enable users with communication difficulties to speak via a predictive keyboard interface. The new language models are by far the largest built to date, and they provide a 5-12% reduction in the average number of keystrokes users have to type to communicate.
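The keystroke-savings figure can be made concrete with a toy word-prediction loop: after each typed letter, if the intended word appears in one of the prediction slots, a single tap selects it. The tiny unigram model below is purely illustrative; the paper's models are built from vastly larger data sources.

```python
def keystrokes_with_prediction(sentence, model, n_slots=5):
    """Count keystrokes when a word can be selected from the top-n
    predictions after each typed letter (one keystroke per selection)."""
    total = 0
    for word in sentence.split():
        for typed in range(len(word) + 1):
            prefix = word[:typed]
            candidates = sorted(
                (w for w in model if w.startswith(prefix)),
                key=model.get, reverse=True,
            )[:n_slots]
            if word in candidates:
                total += typed + 1  # typed letters plus one selection tap
                break
        else:
            total += len(word)      # no prediction hit: type it all
        total += 1                  # space / word delimiter
    return total

# Hypothetical unigram frequency counts:
model = {"hello": 50, "help": 30, "how": 40, "are": 35, "you": 45}
text = "hello how are you"
baseline = len(text) + 1  # plain typing, including a trailing delimiter
with_pred = keystrokes_with_prediction(text, model)
print(f"savings: {100 * (1 - with_pred / baseline):.1f}%")
```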
The research paper is open access and you can read it for free in the Association for Computational Linguistics' digital library.
Reference:
Vertanen, K. and Kristensson, P.O. 2011. The imagination of crowds: conversational AAC language modeling using crowdsourcing and large data sources. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2011). ACL: 700–711.