St Andrews HCI Research Group

News

Anke Brock, Touch the Map: Making Maps Accessible for the Blind


Date/Time: 1-2pm October 27th, 2011
Location: 1.33a Jack Cole, University of St Andrews (directions)
Speaker: Anke Brock, IRIT research lab, Toulouse, France
Title: Touch the Map: Making Maps Accessible for the Blind
Abstract:
Human navigation is a complex phenomenon that relies mainly on vision. Indeed, vision provides the pedestrian with landmarks and dynamic cues (e.g. optic flow) that are essential for updating position and orientation, estimating distances, etc. Hence, for a blind person, navigating a familiar environment is not straightforward, and it becomes especially complicated in unknown environments. Exploring geographic maps at home (for travel preparation) or on mobile phones (for guidance) may provide valuable assistance. As maps are visual in essence and hence inaccessible to the blind, multimodal interactive maps represent a promising solution. Multimodal interactive maps are based on a combination of multi-touch devices and tactile (e.g. embossed) paper maps. However, the design and realization of interactive maps for the blind pose several challenges, such as making multi-touch surfaces accessible to blind users. This talk will present the concept and design of the maps, the work with blind users, the technical challenges, and the psychological background.
About Anke:
Anke Brock is currently a PhD candidate in Human-Computer Interaction at the IRIT research lab in Toulouse, France. She worked for several years as a research engineer on navigation and driver assistance systems at Bosch in Hildesheim, Germany. Anke obtained her master's degree in Human-Computer Interaction in September 2010 at the University of Toulouse. Her research interests include accessibility of technology for the blind, interactive maps, tabletops, multimodal interaction, spatial cognition and haptic exploration, as well as accessibility of the participatory design process.

Oct 7th: University of Edinburgh Seminar by Aaron Quigley


Aaron will be giving a seminar in the School of Informatics at the University of Edinburgh on October 7th 2011 on the topic of:

Challenges in Social Network Visualisation

Information Visualisation is a research area that focuses on the use of graphical techniques to present abstract data in an explicit form. Such static or dynamic presentations help people formulate an understanding of data and build an internal model of it to reason about. These pictures of data are external artefacts that support decision making. While it shares many goals with Scientific Visualisation, Human-Computer Interaction, User Interface Design and Computer Graphics, Information Visualisation focuses on the visual presentation of data without a physical or geometric form.
As such, it relies on research in mathematics, data mining, data structures, algorithms, graph drawing, human-computer interaction, cognitive psychology, semiotics, cartography, interactive graphics, imaging and visual design. In this talk Aaron will present a brief history of social-network analysis and visualisation, and introduce analysis and layout algorithms we have developed for visualising such data. Our recent analysis focuses on actor identification through network tuning and our Social Network Assembly Pipeline (SNAP), which operates on the premise of “social network inference” and which we have studied experimentally through the analysis of 10,000,000 record sets without explicit relations. Our visualisation work has focused on large-scale node-link diagrams, small multiples, dynamic network displays and egocentric layouts. The talk concludes with a number of challenges and open research questions we face as researchers in using visualisation to present dynamic data sources.

SACHI research on the university front page


The University of St Andrews has a story about how SACHI researcher Per Ola Kristensson and his collaborator Keith Vertanen used crowdsourcing, Twitter, and other online sources to create better statistical language models for AAC devices. These devices enable users with communication difficulties to speak via a predictive keyboard interface. The new language models are by far the largest that have been built so far and they provide a 5-12% reduction in the average number of keystrokes that users have to type to communicate.
The research paper is open access and can be read for free in the Association for Computational Linguistics' digital library.
Reference:
Vertanen, K. and Kristensson, P.O. 2011. The imagination of crowds: conversational AAC language modeling using crowdsourcing and large data sources. In Proceedings of the ACL Conference on Empirical Methods in Natural Language Processing (EMNLP 2011). ACL: 700-711.
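As a hedged illustration of the keystroke-savings figure quoted above (the function and numbers are hypothetical, not the paper's method or data), the reduction can be computed as a simple percentage:

```python
def keystroke_savings(baseline_keystrokes, predicted_keystrokes):
    """Percentage reduction in keystrokes when prediction is used."""
    saved = baseline_keystrokes - predicted_keystrokes
    return 100.0 * saved / baseline_keystrokes

# Example: an 11-character message typed letter by letter versus
# 9 keystrokes with word completion (illustrative numbers only).
print(round(keystroke_savings(11, 9), 1))  # 18.2
```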

Sean Lynch, Interaction and Visualization Approaches for Artistic Applications


Speaker: Sean Lynch, Innovis group/Interactions lab, University of Calgary, Canada
Date/Time: 1-2pm September 28th, 2011
Location: 1.33a Jack Cole, University of St Andrews (directions)
Abstract:
Information visualization and new paradigms of interaction are generally applied to productive processes (i.e., at work) or for personal and entertainment purposes. In my work, I have instead looked at how to apply new technologies and visualization techniques to art. I will present two projects, focusing on multi-touch music composition and performance, and on the visual analysis of the history and visual features of fine paintings.
About Sean:
Sean Lynch is a Master’s Student in Computer Science at the Interactions Lab at the University of Calgary. Sean’s research interests span interactive technologies (e.g., multi-touch), interactive art, and information visualization.

Mark Shovman, Measuring the Effectiveness of Abstract Data Visualisations


Speaker: Mark Shovman, University of Abertay Dundee
Date/Time: 2-3pm September 13th, 2011
Location: 1.33a Jack Cole, University of St Andrews (directions)
Abstract:
In natural and social sciences, novel insights are often derived from visual analysis of data. But what principles underpin the extraction of meaningful content from these visualisations? Abstract data visualisation can be traced at least as far back as 1801; but with the increase in the quantity and complexity of data that require analysis, standard tools and techniques are no longer adequate for the task. The ubiquity of computing power enables novel visualisations that are rich, multimodal and interactive; but what is the most effective way to exploit this power to support analysis of large, complex data sets? Often, the lack of fundamental theory is pointed out as a central ‘missing link’ in the development and assessment of efficient novel visualisation tools and techniques.
In this talk, I will present some first steps towards the theory of visualisation comprehension, drawing heavily on existing research in natural scene perception and reading comprehension. The central inspiration is the Reverse Hierarchy Theory of perceptual organisation, which is a recent (2002) development of the near-centennial Laws of Gestalt. The proposed theory comes complete with a testing methodology (the ‘pop-out’ effect testing) that is based on our understanding of the cognitive processes involved in visualisation comprehension.
About Mark:
Mark Shovman is a SICSA Lecturer in Information Visualisation in the Institute of Arts, Media and Computer Games Technology at the University of Abertay Dundee. He is an interdisciplinary researcher studying the perceptual and cognitive aspects of information visualisations, computer games, and immersive virtual reality. His recent research projects include the application of dynamic 3D link-charts in Systems Biology; alleviating cyber-sickness in VR helmets; and immersive VR as an art medium. Mark was born in Tbilisi, Georgia, and has lived in Jerusalem, Israel, since 1990. He can be found on LinkedIn.

Videos from MMI Summer School now online


A big thank you to Timothy Sheridan, an undergraduate working in the SACHI group, for editing down and polishing up the videos of the MMI summer school final project presentations. Thanks also to Miguel and Jakub for handling and arranging the video equipment. The videos are from the final project presentations at the SICSA MMI Summer School in June 2011. In addition, you can see the final projects page here, with images, text and links to other resources.

Short Talks by MSc Students


Date/Time: 1-2pm August 30th, 2011
Location: 1.33a Jack Cole, University of St Andrews (directions)
Speaker: Yemliha Kamber, University of St Andrews
Title: Empirical Investigation of the Memorability of Gesture Sets
Abstract:
As technology becomes increasingly advanced and sophisticated, various established and experimental input technologies allow for large numbers of gesture types and new ways of controlling software. However, very little is known about their effects and about how people learn and remember the various gestures. This presentation will briefly cover what gestures are, the related work, our experimental study and the results we analysed.
Speaker: Asset Nurboluly, University of St Andrews
Title: Multi-touch Transparent Input/Output Device
Abstract:
In recent years multi-touch interfaces have become popular, offering a new way of human-computer interaction. Multi-touch interfaces can be used in many areas of human life, such as medicine, education, the military, science, etc. The main aim of this project is to build a transparent display device that uses a multi-touch interface. Transparent displays are an emerging trend, appearing first in science-fiction films and now becoming reality. The potential uses of such devices are significant. They could serve as car windshields, where a driver can see GPS information while watching what happens on the road. In everyday life we could integrate them into mirrors, so that while brushing our teeth in the morning we can check the news, the weather, etc.
In medicine and the military, transparent displays can show real-time interactive information for surgeons and soldiers.
The prototype was built using laser light plane technology for multi-touch sensing: an image is projected from a projector, and an IR camera captures the touches. The resulting system is a transparent multi-touch display.

SACHI members presenting papers at conferences


Several SACHI members are presenting papers at leading international conferences in the upcoming months.
Aaron Quigley presented a paper co-authored with Mike Bennett at Stanford University entitled “Creating Personalized Digital Human Models Of Perception For Visual Analytics” at UMAP 2011 in Girona, Spain, on Thursday July 14th. Umer Rashid and Aaron Quigley co-authored a paper with Jarmo Kauko and Jonna Häkkilä at Nokia Research Center entitled “Proximal and Distal Selection of Widgets: Designing Distributed UI for Mobile Interaction with Large Display”. It will be presented by Umer Rashid at MobileHCI 2011 in Stockholm, Sweden on Friday September 2nd. Aaron Quigley also co-authored a paper with Michael Farrugia and Neil Hurley entitled “SNAP: Towards a validation of the Social Network Assembly Pipeline” which was presented by Michael Farrugia at the International Conference on Advances in Social Networks Analysis and Mining in Kaohsiung City, Taiwan, on Monday July 25th.
Miguel Nacenta is a keynote speaker at the Integrating multi-touch and interactive surfaces into the research environment workshop in Oxford, UK, on September 16-17. He has also co-authored a paper with Sean Lynch and Sheelagh Carpendale which will be presented by Sean Lynch at Interact 2011 in Lisbon, Portugal. The talk is entitled: “ToCoPlay: Graphical Multi-touch Interaction for Composing and Playing Music”.
Per Ola Kristensson presented a paper on Thursday July 28th at the Association for Computational Linguistics' Conference on Empirical Methods in Natural Language Processing (EMNLP 2011) in Edinburgh, UK. The talk was entitled “The Imagination of Crowds: Conversational AAC Language Modeling using Crowdsourcing and Large Data Sources”. On Monday August 29th he will present a paper at Interspeech 2011 in Florence, Italy. This talk will be in the multimodal signal processing session and it is entitled: “Asynchronous Multimodal Text Entry using Speech and Gesture Keyboards”. Shortly thereafter, on Thursday September 1st, he will present a paper at MobileHCI 2011 in Stockholm, Sweden. This talk is entitled “A Versatile Dataset for Text Entry Evaluations Based on Genuine Mobile Emails”. These papers were co-authored with Keith Vertanen at Princeton University. He also co-authored a paper which was presented on Saturday August 6th by Leif Denby at the 8th Eurographics Symposium on Sketch-Based Interfaces and Modeling (SBIM 2011) in Vancouver, Canada. The talk was entitled: “Continuous Recognition and Visualization of Pen Strokes and Touch-Screen Gestures”.

Petteri Nurmi, Energy-efficient Location-awareness on Mobile Devices


Speaker: Petteri Nurmi, Helsinki Institute for Information Technology HIIT
Date/Time: 12pm-1pm July 29th, 2011
Location: 1.33a Jack Cole, University of St Andrews (directions)
Abstract:
Contemporary mobile phones readily support different positioning techniques. In addition to integrated GPS receivers, GSM and WiFi can be used for position estimation, and other sensors such as accelerometers and digital compasses can be used to support positioning, e.g., through dead reckoning or the detection of stationary periods. Selecting which sensor technologies to use for positioning is, however, a non-trivial task as available sensor technologies vary considerably in terms of their energy demand and the accuracy of location estimates. To improve the energy-efficiency of mobile devices and to provide as accurate position estimates as possible, novel on-device positioning technologies together with techniques that select optimal sensor modalities based on positioning accuracy requirements are required. In this talk we first introduce novel GSM and WiFi fingerprinting algorithms that run directly on mobile devices with minimal energy consumption [1]. We also introduce our recent work on minimizing the power consumption of continuous location and trajectory tracking on mobile devices [2].
[1] P. Nurmi, S. Bhattacharya, J. Kukkonen: “A grid-based algorithm for on-device GSM positioning.” Proc. 12th ACM International Conference on Ubiquitous Computing (Ubicomp, Copenhagen, Denmark, September 2010). ACM Press, 2010, 227-236.
[2] M. B. Kjaergaard, S. Bhattacharya, H. Blunck, P. Nurmi, “Energy-efficient Trajectory Tracking for Mobile Devices”, Proc. 9th International Conference on Mobile Systems, Applications and Services (MobiSys, June-July, 2011).
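The modality-selection problem described in the abstract can be sketched as a toy example. The sensor names are modalities mentioned in the talk, but the energy and accuracy figures below are illustrative assumptions, not measurements from the papers:

```python
# Hypothetical sensor profiles: (power draw in mW, median error in m).
# These figures are illustrative assumptions only.
SENSORS = {
    "gps": (165.0, 5.0),
    "wifi": (60.0, 25.0),
    "gsm": (15.0, 200.0),
}

def pick_sensor(max_error_m):
    """Choose the least power-hungry sensor whose accuracy suffices.

    Falls back to GPS (the most accurate modality here) when no
    sensor meets the requested accuracy bound.
    """
    candidates = [(power, name)
                  for name, (power, error) in SENSORS.items()
                  if error <= max_error_m]
    return min(candidates)[1] if candidates else "gps"

print(pick_sensor(50))    # wifi: accurate enough, cheaper than GPS
print(pick_sensor(500))   # gsm: any sensor suffices, so take the cheapest
```

A real system would treat this as a continuous optimisation over time, duty-cycling sensors rather than picking one once; this sketch only illustrates the energy-versus-accuracy trade-off the talk addresses.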
About Petteri:
Dr. Petteri Nurmi is a Senior Researcher at the Helsinki Institute for Information Technology HIIT. He received a PhD in Computer Science from the University of Helsinki in 2009. He is currently co-leading the Adaptive Computing research group at HIIT together with Doc. Patrik Floréen. His research focuses on ubiquitous computing, user modeling and interaction, with a view to making the lives of ordinary people easier through easy-to-use mobile services. He regularly serves as a programme committee member and reviewer for numerous leading conferences and journals. More information about his research can be found on the research group's webpage: http://www.hiit.fi/adapc/

News on Professional Activities of SACHI members