St Andrews HCI Research Group

News

Urska Demsar, Visualising movement trajectories in geoinformatics


Speaker: Dr Urška Demšar, Centre for GeoInformatics, University of St Andrews
Date/Time: 1-2pm Feb 19, 2013
Location: 1.33a Jack Cole, University of St Andrews

Abstract:
Recent developments and ubiquitous use of global positioning devices have revolutionised movement analysis. Scientists are able to collect increasingly larger movement data sets at increasingly smaller spatial and temporal resolutions. These data consist of trajectories in space and time, represented as time series of measured locations for each moving object. In geoinformatics such data are visualised using various methodologies, e.g. simple 2D spaghetti maps, traditional time-geography space-time cubes (where trajectories are shown as 3D polylines through space and time) and attribute-based linked views. In this talk we present an overview of typical trajectory visualisations and then focus on space-time visual aggregations for one particular application area, movement ecology, which tracks animal movement.
Bio:
Dr Urška Demšar is a lecturer in geoinformatics at the Centre for GeoInformatics (CGI), School of Geography & Geosciences, University of St Andrews, Scotland, UK. She has a PhD in Geoinformatics from the Royal Institute of Technology (KTH), Stockholm, Sweden, and two degrees in Applied Mathematics from the University of Ljubljana, Slovenia. Previously she worked as a lecturer at the National Centre for Geocomputation, National University of Ireland Maynooth, as a researcher at the Geoinformatics Department of the Royal Institute of Technology in Stockholm, and as a teaching assistant in Mathematics at the Faculty of Electrical Engineering at the University of Ljubljana. Her primary research interests are in geovisual analytics and geovisualisation. She combines computational and statistical methods with visualisation for knowledge discovery from geospatial data. She is also interested in spatial analysis and mathematical modelling, with one particular application in the analysis of movement data and spatial trajectories.

David Heyman, Purposeful Map-Design


Purposeful Map-Design: What it Means to Be a Cartographer when Everyone is Making Maps

Speaker: David Heyman, Axis Maps
Date/Time: Fri, Feb. 1, 2pm
Location: School III, St Salvator's Quad, St Andrews
Abstract:
The democratizing technologies of the web have brought the tools and raw materials required to make a map to a wider audience than ever before. This proliferation of mapping has redefined modern Cartography beyond the general practice of “making maps” to the purposeful design of maps. Purposeful Cartographic design is more than visuals and aesthetics; there is room for the Cartographer’s design decisions at every step between the initial earthly phenomenon and the end map user’s behavior. This talk will cover the modern mapping workflow, from collecting and manipulating data, to combining traditional cartographic design with a contemporary UI/UX, to implementing these maps through code across multiple platforms. I will examine how these design decisions are shaped by the purpose of the map and the desire to use maps to clearly and elegantly present the world.
Bio:
David Heyman is the founder and Managing Director of Axis Maps, a global interactive mapping company formed out of the cartography graduate program of the University of Wisconsin. Since its establishment in 2006, the goal of Axis Maps has been to bring the tenets and practices of traditional cartography to the medium of the Internet. Since then, they have designed and built maps for the New York Times, Popular Science, Emirates Airlines, Earth Journalism Network, Duke University and many others. They have also released the freely available indiemapper and ColorBrewer to help map-makers all over the world apply cartographic best practices to their maps. Recently, their series of handmade typographic maps has been a return to their roots of manual cartographic production. David currently lives in Marlborough, Wiltshire.

Loraine Clarke, Multimodal interaction in museums


Speaker: Loraine Clarke, University of Strathclyde
Date/Time: 1-2pm Jan 15, 2013
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
Interactive exhibits are now widely expected in traditional museums. The presence of hands-on exhibits in science centres, along with our familiarity with high-quality media experiences in everyday life, has raised our expectations of digital interactive exhibits in museums. Increased access to affordable technology has provided an achievable means to create novel interactives for museums. However, there is a need to question the value and effectiveness of these interactive exhibits in the museum context. Are these exhibits contributing to the desired qualities of a visitor’s experience, social interactions and connection with the subject matter, or hindering them? The research focuses specifically on multimodal interactive exhibits and the appropriate or inappropriate combination of modalities applied in interactive exhibits relative to subject matter, context and target audience. The research aims to build an understanding of the relationships between the different combinations of modalities used in exhibits and museum visitors’ experience, engagement with a topic, social engagement and engagement with the exhibit itself. The talk will present two main projects carried out during the first year of the PhD research. The first is the design, development and study of a multimodal painting installation exhibited for three months in a children’s cultural centre. The second is an ongoing study, with the Riverside Transport Museum in Glasgow, of six existing multimodal installations in the museum.
Bio:
Loraine Clarke is a PhD student at the University of Strathclyde. Her research involves examining interaction with existing museum exhibits that engage visitors in multimodal interaction, developing multimodal exhibits and carrying out field-based studies. Her background is in Industrial Design and Interaction Design, through both industry and academic experience: she has worked in industry on the design and production of kayaking paddles, and has experience as an interaction designer through projects with a software company. She holds a BDes in Industrial Design from the National College of Art and Design, Dublin, and an MSc in Interactive Media from the University of Limerick.

John McCaffery, Virtual Worlds as a platform for rapid prototyping and HCI experimentation


Speaker: John McCaffery, University of St Andrews
Date/Time: 2-3pm Dec 12, 2012
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
Open Virtual Worlds are a platform with several advantages. They provide an out-of-the-box mechanism for content creation, distributed access and programming. They are open source, so they can be manipulated as necessary. There is also a large amount of content that has already been created within Virtual Worlds. As such, they provide an interesting opportunity for HCI experimentation. When experimenting with novel modes of interaction, prototypes can be created within a Virtual World relatively easily. Once a prototype has been created, users can be put into use-case scenarios based around existing content. Alternatively, custom environments with very constrained parameters can quickly be created for controlled experimentation.
This talk will cover some of the interaction modes currently being experimented with by the OpenVirtualWorlds group.
Bio:
John McCaffery is a PhD student in the Open Virtual Worlds group. John investigates how the open frameworks for distributing, programming and manipulating 3D data provided by Open Virtual Worlds can serve as a model for how the 3D web may develop. Open Virtual Worlds is a general term for open-source, open-protocol client/server architectures for streaming and modifying 3D data. Examples include the SecondLife viewer and its derivatives, and the SecondLife and OpenSim server platforms. John’s work includes investigating how the programming possibilities of Virtual Worlds can be extended, and how Virtual World access can be modified to provide new experiences and new experimental possibilities built around existing content. For more information on John’s work, see his research blog.

Miguel Nacenta and Aaron Quigley, Impressions from ITS 2012 with Interesting Research Papers, Videos and Demos from UIST 2012


Speakers: Miguel Nacenta and Aaron Quigley, School of Computer Science, University of St Andrews
Date/Time: 1-2pm November 20, 2012
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
Miguel Nacenta recently attended ITS 2012. The ACM International Conference on Interactive Tabletops and Surfaces brings together researchers and innovators from a variety of backgrounds including engineering, computer science, design, and the social sciences. Miguel will share his impressions of the research papers, demos, tutorials and workshops he participated in. The ideas and perspectives shared at this year’s ITS include multi-touch and gesture-based interfaces, 3D interaction, interactive surfaces in education and for children, multi-display environments, non-flat surfaces, multitouch development, sketching user interfaces and high-performance ITS technologies.
Aaron Quigley’s attendance at the recent UIST 2012 conference allows him to offer insight into the interesting research papers, videos and demos he enjoyed there. UIST (the ACM Symposium on User Interface Software and Technology) is the premier forum for innovations in the software and technology of human-computer interfaces, bringing together researchers and practitioners from diverse areas. Some of the topics we can expect to hear about are traditional graphical & web user interfaces, tangible & ubiquitous computing, virtual & augmented reality, multimedia, new input & output devices, and CSCW.
As both UIST 2013 and ITS 2013 are taking place here in St Andrews next October, it would be worthwhile attending to get a flavour of what to expect next year.
About Miguel:
Dr. Miguel Nacenta has been a lecturer at the University of St Andrews since May 2011, where he co-founded the SACHI group. Prior to this he was a post-doctoral fellow at the Interactions Lab, University of Calgary, Canada. He holds an electrical engineering degree from the Technical University of Madrid (Ingeniero Superior, UPM), and a doctorate from the University of Saskatchewan, Canada, under the supervision of Prof. Carl Gutwin.
About Aaron:
Professor Aaron Quigley is the Chair of Human Computer Interaction in the School of Computer Science at the University of St Andrews. He is the director of SACHI, the St Andrews Computer Human Interaction research group. His appointment is part of SICSA, the Scottish Informatics and Computer Science Alliance, and since August 2012 he has been the SICSA deputy director for knowledge exchange. He is the general co-chair for UIST 2013 and ITS 2013 (both in St Andrews in October 2013).

Aaron Quigley, Inaugural Lecture on HCI


Today Professor Aaron Quigley will be giving his Inaugural Lecture in School III
The abstract for his talk is as follows: Billions of people use interconnected computers and have come to rely on the computational power they afford to support their lives and advance our global economy and society. However, how we interact with this computation is often limited to small “windows of interaction” with mobile and desktop devices which aren’t fully suited to their contexts of use. Consider the surgeon operating, the child learning to write or the pedestrian navigating a city, and ask: are the current devices and forms of human-computer interaction as fluent as they might be? I contend there is a division between the physical world in which we live our lives and the digital space where the power of computation currently resides. Many day-to-day tasks, and even forms of work, are poorly supported by access to appropriate digital information. In this talk I will provide an overview of the research I have been pursuing to bridge this digital-physical divide, along with my future research plans. The talk is framed around three interrelated topics: Ubiquitous Computing, Novel Interfaces and Visualisation. Ubiquitous Computing is a model of computing in which computation is everywhere and computer functions are integrated into everything; everyday objects become sites for sensing, input, processing and user output. Novel Interfaces draw the user interface closer to the physical world, both in terms of input to the system and output from it. Visualisation, finally, is the use of computer-supported interactive visual representations of data to amplify cognition. I will demonstrate that advances in human-computer interaction require insights and research from across the sciences and humanities if we are to bridge this digital-physical divide.

Laurel Riek, Facing Healthcare's Future: Designing Facial Expressivity for Robotic Patient Mannequins


Speaker: Laurel Riek, Computer Science and Engineering, University of Notre Dame
Date/Time: 1-2pm September 4, 2012
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
In the United States, an estimated 98,000 people are killed and $17.1 billion lost each year due to medical errors. One way to prevent these errors is to have clinical students engage in simulation-based medical education, to help move the learning curve away from the patient. This training often takes place on human-sized android robots, called high-fidelity patient simulators (HFPS), which are capable of conveying human-like physiological cues (e.g., respiration, heart rate). Training with them can include anything from diagnostic skills (e.g., recognizing sepsis, a failure that recently killed 12-year-old Rory Staunton) to procedural skills (e.g., IV insertion) to communication skills (e.g., breaking bad news). HFPS systems give students a chance to safely make mistakes within a simulation context without harming real patients, with the goal that these skills will ultimately be transferable to real patients.
While simulator use is a step in the right direction toward safer healthcare, one major challenge and critical technology gap is that none of the commercially available HFPS systems exhibit facial expressions, gaze, or realistic mouth movements, despite the vital importance of these cues in helping providers assess and treat patients. This is a critical omission, because almost all areas of health care involve face-to-face interaction, and there is overwhelming evidence that providers who are skilled at decoding communication cues are better healthcare providers – they have improved outcomes, higher compliance, greater safety, higher satisfaction, and they experience fewer malpractice lawsuits. In fact, communication errors are the leading cause of avoidable patient harm in the US: they are the root cause of 70% of sentinel events, 75% of which lead to a patient dying.
In the Robotics, Health, and Communication (RHC) Lab at the University of Notre Dame, we are addressing this problem by leveraging our expertise in android robotics and social signal processing to design and build a new, facially expressive, interactive HFPS system. In this talk, I will discuss our efforts to date, including: in situ observational studies exploring how individuals, teams, and operators interact with existing HFPS technology; design-focused interviews with simulation center directors and educators in which future HFPS systems are envisioned; and initial software prototyping efforts incorporating novel facial expression synthesis techniques.
About Laurel:
Dr. Laurel Riek is the Clare Boothe Luce Assistant Professor of Computer Science and Engineering at the University of Notre Dame. She directs the RHC Lab, and leads research on human-robot interaction, social signal processing, facial expression synthesis, and clinical communication. She received her PhD at the University of Cambridge Computer Laboratory, and prior to that worked for eight years as a Senior Artificial Intelligence Engineer and Roboticist at MITRE.

Luke Hutton, Virtual Walls: Studying the effectiveness of the privacy metaphor in the real world


Speaker: Luke Hutton, SACHI
Date/Time: 1-2pm July 10, 2012
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
The virtual wall is a simple privacy metaphor for ubiquitous computing environments. By expressing the transparency of a wall and the people to which the wall applies, a user can easily manage privacy policies for sharing their sensed data in a ubiquitous computing system.
While previous research shows that users understand the wall metaphor in a lab setting, the metaphor has not been studied for its practicality in the real world. This talk will describe a smartphone-based experience sampling method study (N=20) to demonstrate that the metaphor is sufficiently expressive to be usable in real-world scenarios. Furthermore, while people’s preferences for location sharing are well understood, our study provides insight into sharing preferences for a multitude of contexts. We find that whom data are shared with is the most important factor for users, reinforcing the walls approach of supporting apply-sets and abstracting away further granularity to provide improved usability.
About Luke:
Luke’s bio on the SACHI website.

Lindsay MacDonald, A Very Delicate Agreement: A Process of Collaboration Between Disciplines


Speaker: Lindsay MacDonald, University of Calgary, Canada
Date/Time: 1-2pm July 3, 2012
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
In contrast to the romantic image of an artist working alone in a studio, large-scale media art pieces are often developed and built by interdisciplinary teams. Lindsay MacDonald will describe the process of creating and developing one of these pieces, A Delicate Agreement, within such a team, and offer personal insight into the impact this has had on her artistic practice.
A Delicate Agreement is a gaze-triggered interactive installation that explores the potentially awkward act of riding in an elevator with another person. It is a set of elevator doors with a peephole in each door that entices viewers to peer inside and observe an animation of the passengers. Each elevator passenger, or character, has a programmed personality that enables them to act and react to the other characters’ behaviour and the viewers’ gaze. The result is the emergence of a rich interactive narrative made up of encounters in the liminal time and space of an elevator ride.
A Delicate Agreement is currently part of the New Alberta Contemporaries exhibition at the Esker Foundation in Calgary, Canada. For more information about the piece, please visit http://www.lindsaymacdonald.net/portfolio/a-delicate-agreement/.
About Lindsay:
Lindsay MacDonald is a PhD student, artist, designer and interdisciplinary researcher from the Interactions Lab (iLab) at the University of Calgary in Canada. Lindsay’s approach to research and creative production combines methodologies from both computer science and art, and she divides her time between the iLab and her studio in the Department of Art. Her research interests include interaction design, coded behaviour and performance, and building interactive art installations.

Carman Neustaedter, Connecting Families over Distance


Speaker: Carman Neustaedter, Simon Fraser University, Canada
Date/Time: 1-2pm June 18 (Monday), 2012
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
Families often have a real need and desire to stay connected with their remote family members and close friends. For example, grandparents want to see their grandchildren grow up, empty-nest parents want to know about the well-being of their adult children, and parents want to be involved in their children’s daily routines and happenings while away from them. Video conferencing is one technology that is increasingly being used by families to support this type of need. In this talk, I will give an overview of the research that my students and I have done in this space. This includes studies of the unique ways in which families with children, long-distance couples, and teenagers make use of existing video chat systems to support ‘presence’ and ‘connection’ over distance. I will also show several systems we have designed to support always-on video connections that move beyond ‘talking heads’ to ‘shared experiences’.
About Carman:
Dr. Carman Neustaedter is an Assistant Professor in the School of Interactive Arts and Technology at Simon Fraser University, Canada. Dr. Neustaedter specializes in the areas of human-computer interaction, domestic computing, and computer-supported collaboration. He is the director of the Connections Lab, an interdisciplinary research group focused on the design and use of technologies for connecting people through space and time. This includes design for families and friends, support for workplace collaboration, and bringing people together through pervasive games. For more information, see:
Connections Lab
Carman Neustaedter