St Andrews HCI Research Group

News

Adrian Friday, Ubicomp as a Lens on Energy Related Practice in Shared Student Accommodation


Speaker: Adrian Friday, Lancaster University
Date/Time: 4-5pm January 9th, 2012
Location: 1.33a Jack Cole, University of St Andrews
Abstract:

Adrian Friday



Previous work in eco-feedback has focused either on new sensing technologies, or on people’s responses to specific feedback devices and other interventions placed in their homes. We attempt to take a comprehensive approach based on a large-scale deployment of off-the-shelf sensors, coupled with face-to-face interviews, to account both for how much energy specific appliances draw and for which occupant practices rely on the services those appliances provide. We performed a study in four student flats (each with 7–8 occupants) over a twenty-day period, collecting data from over two hundred sensors and conducting interviews with 11 participants. From this we build an account of life in the flats, and of how that life connects to the energy consumed. Our goal is to understand the challenges in accounting for both resources and practices at home, and what these challenges mean for the design of future feedback devices and interventions aimed at reducing energy consumption. In this talk we share results of our recent analysis and our experiences of conducting Ubicomp deployments that use off-the-shelf sensors to study energy use.
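The bookkeeping step of such a deployment, turning per-appliance plug-sensor samples into energy totals, can be sketched in a few lines. This is purely illustrative: the appliance names, readings, and sample format below are invented, not data from the study.

```python
# Hypothetical per-appliance energy accounting from plug-sensor samples.
# Each sample is (appliance, average watts, duration in seconds).

def energy_by_appliance(readings):
    """Sum energy per appliance in kWh from (name, watts, seconds) samples."""
    totals = {}
    for name, watts, seconds in readings:
        # watt-seconds -> kWh: divide by 3,600,000
        totals[name] = totals.get(name, 0.0) + watts * seconds / 3_600_000
    return totals

samples = [
    ("kettle", 2000, 300),    # 2 kW for 5 minutes
    ("fridge", 100, 86400),   # 100 W for a full day
    ("kettle", 2000, 300),    # a second boil
]
print(energy_by_appliance(samples))
```

Connecting totals like these to occupant practices (who boiled the kettle, and why) is exactly the part that the interviews in the study supply and that sensing alone cannot.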

Mark Wright, Design Informatics: Co-creation of Informatic Media


Mark Wright


Speaker: Mark Wright, University of Edinburgh
Date/Time: 1-2pm December 6th, 2011
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
The cultural significance of Informatics is that it provides new forms of digital embodiment which lead to evolution of practices and meaning.
Engagement of Informatics with Design is a key approach to explore this relationship between technology and culture. This talk outlines how such an approach was developed over an extended period of engagement with the Arts and Humanities and Practitioners in the Digital Creative Industries.
Two projects in particular, Tacitus and Spellbinder, are used to illustrate this process.
Tacitus was a major AHRC/EPSRC project which explored tacit knowledge in designers and makers, and how this could be supported by computer design systems.
A novel haptic design system was developed which demonstrated significant improvements in ease of use.
Spellbinder was a new form of mobile application based on image matching using camera phones. Funded by the “Designing for the 21st Century” AHRC/EPSRC initiative, we explored the potential of this new medium through an iterative series of workshops, intense design interventions and reflection, which we termed Research by Design.

David Flatla, Situation-Specific Models of Colour Differentiation


Speaker: David Flatla, Interaction Lab, University of Saskatchewan, Canada
Date/Time: 1-2pm October 27th, 2011
Location: 1.33a Jack Cole, University of St Andrews
Title: Using Situation-Specific Models of Colour Differentiation to Assist Individuals with Colour Vision Deficiency
Abstract:
Approximately 10% of the world’s population experiences congenital, acquired, or situationally-induced colour vision deficiency (CVD – commonly called colour blindness). People with CVD often confuse colours that those without CVD can distinguish. When working in digital environments, CVD can lead to problems ranging from minor nuisances (e.g., being unable to distinguish ‘visited’ from ‘not visited’ links on a webpage) to major safety concerns (e.g., not seeing colour-coded warning messages).
Recently, recolouring tools have been developed that modify the colours presented on a display to eliminate the colour confusion that people with CVD experience. However, these tools are limited to individuals with dichromatic CVD – a particularly severe and somewhat rare form of congenital CVD. As a result, individuals with acquired and situationally-induced CVD as well as those with non-dichromatic forms of congenital CVD continue to have difficulties.
In this talk, I will present my PhD research toward a new recolouring tool based on situation-specific models of colour differentiation. I will first present my work on situation-specific models that capture the colour differentiation abilities of any individual in any environment through a two-minute in-situ calibration procedure. I will then discuss my most recent work on developing a recolouring tool based on situation-specific models of colour differentiation.
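The core idea, calibrating a per-user, per-situation notion of which colours are distinguishable, can be caricatured in a few lines. This toy sketch stands in for the real model: the Euclidean-RGB distance and the threshold value are crude placeholders, not the situation-specific models from the talk.

```python
# Toy sketch: a calibrated threshold decides whether a given user, in a
# given situation, can tell two colours apart. Real models would use a
# perceptual colour space and an in-situ calibration, not raw RGB.

def colour_distance(c1, c2):
    """Euclidean distance between two RGB triples (a crude perceptual proxy)."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def differentiable(c1, c2, threshold):
    """True if this (hypothetical) user/situation can tell the colours apart."""
    return colour_distance(c1, c2) >= threshold

# An in-situ calibration would set `threshold`; here we just pick one.
threshold = 60.0
print(differentiable((255, 0, 0), (0, 128, 0), threshold))        # distinct hues
print(differentiable((100, 100, 100), (110, 110, 110), threshold))  # near-greys
```

A recolouring tool built on such a model would then remap any colour pair that fails this test into a pair that passes it.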
About David:
David Flatla is a PhD student at the University of Saskatchewan in Canada under the supervision of Dr. Carl Gutwin. His research focusses on the field of accessibility, particularly on how to help individuals with colour vision deficiency (CVD – commonly called colour blindness). To do this, he invented situation-specific models of colour differentiation that use in-situ calibration to accurately capture how people differentiate colours. He publishes at conferences like CHI and ASSETS. At UIST this year, he presented research exploring how to make boring calibrations fun by turning them into games.

Miguel Nacenta, Perspective and Spatial Relationships for Interface Design


Miguel Nacenta showing Radar


Speaker: Miguel Nacenta, SACHI, University of St Andrews
Date/Time: 1-2pm November 22nd, 2011
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
Our daily activities are continuously mediated by the space that we occupy; we face people to talk to them, sit in circular arrangements for group discussions, and write documents facing monitors. However, current interfaces generally make strong assumptions about where we are (e.g., monitors assume we are perpendicular and in front) or outright ignore important aspects of our spatial environments (e.g., several people editing a document). In my research I deal with the perceptual, cognitive and social aspects of space relationships that will shape the design of next generation interfaces. In this talk I will discuss projects that address questions such as: what happens when you look at a display from the “wrong” place? What forms of input are most efficient to interact with displays from different locations? How does having a private display affect our awareness of the work of others?

Aaron Quigley, Creating Personalized Digital Human Models of Perception for Visual Analytics


Speaker: Aaron Quigley, SACHI, University of St Andrews
Date/Time: 1-2pm November 15th, 2011
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
Our bodies shape our experience of the world, and our bodies influence what we design. How important are the physical differences between people? Can we model the physiological differences and use the models to adapt and personalize designs, user interfaces and artifacts? Within many disciplines Digital Human Models and Standard Observer Models are widely used and have proven to be very useful for modeling users and simulating humans. In this paper, we create personalized digital human models of perception (Individual Observer Models), particularly focused on how humans see. Individual Observer Models capture how our bodies shape our perceptions, and are useful for adapting and personalizing user interfaces and artifacts to suit individual users’ bodies and perceptions. We introduce and demonstrate an Individual Observer Model of human eyesight, which we use to simulate 3600 biologically valid human eyes. An evaluation of the simulated eyes finds that they see eye charts as humans do. We also demonstrate the Individual Observer Model successfully predicting how easy or hard it is to see visual information and visual designs. The ability to predict and adapt visual information to maximize its effectiveness is an important problem in visual design and analytics.
About Aaron:
In this talk Professor Aaron Quigley will present a paper he is presenting at the User Modeling, Adaptation and Personalization (UMAP) 2011 conference on July 12th in Barcelona, Spain. This work on Creating Personalized Digital Human Models of Perception for Visual Analytics was carried out with his former PhD student Dr. Mike Bennett, now a postdoctoral fellow in the Department of Psychology at Stanford University.
Professor Aaron Quigley is the Chair of Human Computer Interaction in the School of Computer Science at the University of St Andrews. He is the director of SACHI and his appointment is part of SICSA, the Scottish Informatics and Computer Science Alliance. Aaron’s research interests include surface and multi-display computing, human computer interaction, pervasive and ubiquitous computing and information visualisation.

Anke Brock, Touch the Map: Making Maps Accessible for the Blind


Date/Time: 1-2pm October 27th, 2011
Location: 1.33a Jack Cole, University of St Andrews
Speaker: Anke Brock, IRIT research lab, Toulouse, France
Title: Touch the Map: Making Maps Accessible for the Blind
Abstract:
Human navigation is a very complex phenomenon that mainly relies on vision. Indeed, vision provides the pedestrian with landmarks and dynamic cues (e.g. optic flow) that are essential for position and orientation updating, estimation of distance, etc. Hence, for a blind person, navigating a familiar environment is not straightforward, and becomes especially complicated in unknown environments. Exploration of geographic maps at home (for travel preparation) or even on mobile phones (for guidance) may provide valuable assistance. As maps are inherently visual and hence inaccessible to the blind, multimodal interactive maps represent a promising solution. Multimodal interactive maps are based on a combination of multi-touch devices and tactile (e.g. embossed) paper maps. However, the design and realization of interactive maps for the blind raise several challenges, for example making multi-touch surfaces accessible to the blind. In this talk the concept and design of the maps, the work with blind users, the technical challenges, and the psychological background will be presented.
About Anke:
Anke Brock is currently a PhD candidate in Human-Computer Interaction at the IRIT research lab in Toulouse (France). She has worked several years as a research engineer for navigation and driver assistance systems at Bosch in Hildesheim (Germany). Anke has obtained her master’s degree in Human-Computer Interaction in September 2010 at the University of Toulouse. Since then her research interests include accessibility of technology for the blind, interactive maps, tabletops, multimodal interaction, spatial cognition and haptic exploration as well as accessibility of the participatory design process.

Sean Lynch, Interaction and Visualization Approaches for Artistic Applications


Speaker: Sean Lynch, Innovis group/Interactions Lab, University of Calgary, Canada
Date/Time: 1-2pm September 28th, 2011
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
Information visualization and new paradigms of interaction are generally applied to productive processes (i.e., at work) or for personal and entertainment purposes. In my work, I have looked instead at how to apply new technologies and visualization techniques to art. I will present two main projects, focusing on multi-touch music composition and performance, and on the visual analysis of the history and visual features of fine paintings.
About Sean:
Sean Lynch is a Master’s Student in Computer Science at the Interactions Lab at the University of Calgary. Sean’s research interests span interactive technologies (e.g., multi-touch), interactive art, and information visualization.

Mark Shovman, Measuring the Effectiveness of Abstract Data Visualisations


Speaker: Mark Shovman, University of Abertay Dundee
Date/Time: 2-3pm September 13th, 2011
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
In natural and social sciences, novel insights are often derived from visual analysis of data. But what principles underpin the extraction of meaningful content from these visualisations? Abstract data visualisation can be traced at least as far back as 1801; but with the increase in the quantity and complexity of data that require analysis, standard tools and techniques are no longer adequate for the task. The ubiquity of computing power enables novel visualisations that are rich, multimodal and interactive; but what is the most effective way to exploit this power to support analysis of large, complex data sets? Often, the lack of fundamental theory is pointed out as a central ‘missing link’ in the development and assessment of efficient novel visualisation tools and techniques.
In this talk, I will present some first steps towards the theory of visualisation comprehension, drawing heavily on existing research in natural scene perception and reading comprehension. The central inspiration is the Reverse Hierarchy Theory of perceptual organisation, which is a recent (2002) development of the near-centennial Laws of Gestalt. The proposed theory comes complete with a testing methodology (the ‘pop-out’ effect testing) that is based on our understanding of the cognitive processes involved in visualisation comprehension.
About Mark:
Mark Shovman is a SICSA Lecturer in Information Visualisation at the Institute of Arts, Media and Computer Games Technology at the University of Abertay Dundee. He is an interdisciplinary researcher, studying the perception and cognition aspects of information visualisations, computer games, and immersive virtual reality. His recent research projects include the application of dynamic 3D link-charts in Systems Biology; alleviating cyber-sickness in VR helmets; and immersive VR as an art medium. Mark was born in Tbilisi, Georgia, and has lived in Jerusalem, Israel, since 1990. He can be found on LinkedIn.

Short Talks by MSc Students


Date/Time: 1-2pm August 30th, 2011
Location: 1.33a Jack Cole, University of St Andrews
Speaker: Yemliha Kamber, University of St Andrews
Title: Empirical Investigation of the Memorability of Gesture Sets
Abstract:
As technology becomes increasingly advanced and sophisticated, various established and experimental input technologies allow for large numbers of gesture types and new ways of controlling software. However, very little is known about the effects they have and how people learn and remember the various gestures. This presentation will briefly cover what gestures are, the related work, our experimental study, and the results we analysed.
Speaker: Asset Nurboluly, University of St Andrews
Title: Multi-touch Transparent Input/Output Device
Abstract:
In recent years multi-touch interfaces have become popular, offering a new way for people to interact with computers. Multi-touch interfaces can be used in many aspects of human life, such as medicine, education, the military, and science. The main aim of this project is to build a transparent display device that uses a multi-touch interface. Transparent displays are a future trend in the technology world, appearing first in sci-fi movies and turning into reality nowadays. The potential uses of such devices are significant. They can be used in cars as a windshield, where a driver can see GPS information while still watching what happens on the road. In everyday life we could integrate them into mirrors, so that in the morning, while brushing our teeth, we can check the news, the weather, and so on.
In medicine and the military, transparent displays can show real-time interactive information to surgeons and soldiers.
The prototype was built using laser light plane technology for multi-touch sensing: an image is projected from a projector, and an IR camera captures the touches. The resulting system is a transparent multi-touch display.
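The camera side of such a pipeline, turning an IR frame into a count of touch points, amounts to thresholding plus blob detection. The sketch below is a generic illustration under that assumption; the brightness grid is made up, not data from the prototype.

```python
# Illustrative touch detection: threshold an IR "frame" (a 2D brightness
# grid) and count 4-connected bright blobs, each blob being one touch.

def count_touches(frame, threshold):
    """Count 4-connected regions of pixels at or above `threshold`."""
    rows, cols = len(frame), len(frame[0])
    seen = set()
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and (r, c) not in seen:
                blobs += 1
                stack = [(r, c)]  # flood-fill this blob
                while stack:
                    y, x = stack.pop()
                    if not (0 <= y < rows and 0 <= x < cols):
                        continue
                    if (y, x) in seen or frame[y][x] < threshold:
                        continue
                    seen.add((y, x))
                    stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return blobs

frame = [
    [0, 9, 9, 0, 0],
    [0, 9, 0, 0, 8],
    [0, 0, 0, 0, 8],
]
print(count_touches(frame, 5))  # two bright regions -> two touches
```

A real implementation would work on full camera frames and also track blob centroids across frames to follow moving fingers.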

Petteri Nurmi, Energy-efficient Location-awareness on Mobile Devices


Speaker: Petteri Nurmi, Helsinki Institute for Information Technology HIIT
Date/Time: 12pm-1pm July 29th, 2011
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
Contemporary mobile phones readily support different positioning techniques. In addition to integrated GPS receivers, GSM and WiFi can be used for position estimation, and other sensors such as accelerometers and digital compasses can support positioning, e.g., through dead reckoning or the detection of stationary periods. Selecting which sensor technologies to use for positioning is, however, a non-trivial task, as the available sensor technologies vary considerably in their energy demand and in the accuracy of their location estimates. To improve the energy-efficiency of mobile devices and to provide position estimates that are as accurate as possible, we need novel on-device positioning technologies together with techniques that select the optimal sensor modalities for a given positioning accuracy requirement. In this talk we first introduce novel GSM and WiFi fingerprinting algorithms that run directly on mobile devices with minimal energy consumption [1]. We also introduce our recent work on minimizing the power consumption of continuous location and trajectory tracking on mobile devices [2].
[1] P. Nurmi, S. Bhattacharya, J. Kukkonen: “A grid-based algorithm for on-device GSM positioning.” Proc. 12th ACM International Conference on Ubiquitous Computing (Ubicomp, Copenhagen, Denmark, September 2010). ACM Press, 2010, 227-236.
[2] M. B. Kjaergaard, S. Bhattacharya, H. Blunck, P. Nurmi, “Energy-efficient Trajectory Tracking for Mobile Devices”, Proc. 9th International Conference on Mobile Systems, Applications and Services (MobiSys, June-July, 2011).
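The modality-selection problem described above can be illustrated as a tiny optimisation: given each technology's typical accuracy and power cost, pick the cheapest sensor that still meets the application's accuracy requirement. The accuracy and power figures below are rough, invented placeholders, not numbers from the cited papers.

```python
# Illustrative sensor selection: choose the lowest-power positioning
# technology whose accuracy meets the requirement. Figures are invented.

SENSORS = {          # name: (typical accuracy in metres, power in milliwatts)
    "gps": (10, 400),
    "wifi": (40, 100),
    "gsm": (300, 30),
}

def cheapest_sensor(required_accuracy_m):
    """Return the lowest-power sensor meeting the accuracy bound, or None."""
    candidates = [(power, name) for name, (acc, power) in SENSORS.items()
                  if acc <= required_accuracy_m]
    if not candidates:
        return None
    return min(candidates)[1]  # min by power draw

print(cheapest_sensor(50))    # WiFi suffices and is cheaper than GPS
print(cheapest_sensor(500))   # a coarse fix: GSM is cheapest
```

Real systems add further savings on top of this, e.g. duty-cycling the chosen sensor and pausing updates during the stationary periods the talk mentions.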
About Petteri:
Dr. Petteri Nurmi is a Senior Researcher at the Helsinki Institute for Information Technology HIIT. He received a PhD in Computer Science from the University of Helsinki in 2009. He is currently co-leading the Adaptive Computing research group at HIIT together with Doc. Patrik Floréen. His research focuses on ubiquitous computing, user modeling and interaction, with a view to making the lives of ordinary people easier through easy-to-use mobile services. He regularly serves as a program committee member and reviewer for numerous leading conferences and journals. More information about his research can be found on the research group’s webpage: http://www.hiit.fi/adapc/