When: Wednesday 12th of September, 9:30am – 5pm (with a one-hour break for lunch)
Where: Sub-honours lab in Jack Cole building (0.35)
As part of this competition, you may be offered an opportunity to participate in a Human-Computer Interaction study on subtle interaction. Participation in this study is completely voluntary.
There will be two competitive categories:
HCI study participants:
1st prize: 7” Samsung Galaxy Tab 2
2nd prize: £50 Amazon voucher
3rd prize: £20 Amazon voucher
Everyone:
1st prize: £50 Amazon voucher
2nd prize: £20 Amazon voucher
3rd prize: £10 Amazon voucher
We will try to include as many programming languages as is reasonable, so if you have any special requests, let us know.
If you have one, bring a laptop in case we run out of lab computers!
If you have any questions, please email Jakub at jd67@st-andrews.ac.uk
News
Welcome to Uta Hinrichs, who has joined the SACHI group from the University of Calgary, Canada, as a Research Fellow. Uta holds a Diplom (equivalent to an MSc) in Computational Visualistics from the University of Magdeburg in Germany and is in the process of finishing her PhD in Computer Science with a specialization in Computational Media Design. Uta’s PhD research, which she conducted at the InnoVis Group of the University of Calgary, focuses on how to support open-ended information exploration on large displays in public exhibition spaces, combining information visualization with direct-touch interaction techniques. As part of this research, she has designed and studied large display installations in the context of a museum and art gallery, a library, and an aquarium.
To learn more about Uta’s work, see her SACHI biography page or visit her own website for an overview of her previous research projects. Everyone in SACHI welcomes Uta!
Speaker: Laurel Riek, Computer Science and Engineering, University of Notre Dame
Date/Time: 1-2pm September 4, 2012
Location: 1.33a Jack Cole, University of St Andrews (directions)
Abstract:
In the United States, an estimated 98,000 people are killed and $17.1 billion lost each year due to medical errors. One way to prevent these errors is to have clinical students engage in simulation-based medical education, to help move the learning curve away from the patient. This training often takes place on human-sized android robots, called high-fidelity patient simulators (HFPS), which are capable of conveying human-like physiological cues (e.g., respiration, heart rate). Training with them can include anything from diagnostic skills (e.g., recognizing sepsis, a failure that recently killed 12-year-old Rory Staunton) to procedural skills (e.g., IV insertion) to communication skills (e.g., breaking bad news). HFPS systems allow students a chance to safely make mistakes within a simulation context without harming real patients, with the goal that these skills will ultimately be transferable to real patients.
While simulator use is a step in the right direction toward safer healthcare, one major challenge and critical technology gap is that none of the commercially available HFPS systems exhibit facial expressions, gaze, or realistic mouth movements, despite the vital importance of these cues in helping providers assess and treat patients. This is a critical omission, because almost all areas of health care involve face-to-face interaction, and there is overwhelming evidence that providers who are skilled at decoding communication cues are better healthcare providers – they have improved outcomes, higher compliance, greater safety, higher satisfaction, and they experience fewer malpractice lawsuits. In fact, communication errors are the leading cause of avoidable patient harm in the US: they are the root cause of 70% of sentinel events, 75% of which lead to a patient dying.
In the Robotics, Health, and Communication (RHC) Lab at the University of Notre Dame, we are addressing this problem by leveraging our expertise in android robotics and social signal processing to design and build a new, facially expressive, interactive HFPS system. In this talk, I will discuss our efforts to date, including: in situ observational studies exploring how individuals, teams, and operators interact with existing HFPS technology; design-focused interviews with simulation center directors and educators, in which future HFPS systems are envisioned; and initial software prototyping efforts incorporating novel facial expression synthesis techniques.
About Laurel:
Dr. Laurel Riek is the Clare Boothe Luce Assistant Professor of Computer Science and Engineering at the University of Notre Dame. She directs the RHC Lab, and leads research on human-robot interaction, social signal processing, facial expression synthesis, and clinical communication. She received her PhD at the University of Cambridge Computer Laboratory, and prior to that worked for eight years as a Senior Artificial Intelligence Engineer and Roboticist at MITRE.
Participants wanted for an experiment on gesture user interfaces – £20 in Amazon vouchers.
See the study page for more details!
The SACHI group (Human-Computer Interaction) at the University of St Andrews, Scotland’s first university, is offering a full scholarship to join the School of Computer Science as a doctoral researcher for 3.5 years. The scholarship covers tuition fees and provides a living-expenses stipend.
The work will focus on the creation of new forms of visualization with gaze-contingent displays (electronic displays that have access to the location of the person’s gaze), their evaluation through laboratory studies, and the implementation of new visualization and interaction techniques. The student will work closely with Dr. Miguel Nacenta and within the SACHI group.
Please visit Dr. Nacenta’s site for more details.

As part of his work in the School of Computer Science, Aaron is joining the Scottish Informatics and Computer Science Alliance (SICSA) executive as the deputy director for knowledge exchange, for two years from the start of August 2012. As a result, he is stepping down as theme leader for Multimodal Interaction. Aaron has enjoyed his time working with Professor Stephen Brewster and is looking forward to joining the executive next month.
Speaker: Luke Hutton, SACHI
Date/Time: 1-2pm July 10, 2012
Location: 1.33a Jack Cole, University of St Andrews (directions)
Abstract:
The virtual wall is a simple privacy metaphor for ubiquitous computing environments. By expressing the transparency of a wall and the people to which the wall applies, a user can easily manage privacy policies for sharing their sensed data in a ubiquitous computing system.
While previous research shows that users understand the wall metaphor in a lab setting, the metaphor has not been studied for its practicality in the real world. This talk will describe a smartphone-based experience sampling method study (N=20) to demonstrate that the metaphor is sufficiently expressive to be usable in real-world scenarios. Furthermore, while people’s preferences for location sharing are well understood, our study provides insight into sharing preferences for a multitude of contexts. We find that whom data are shared with is the most important factor for users, reinforcing the walls approach of supporting apply-sets and abstracting away further granularity to provide improved usability.
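As a rough illustration of the metaphor (not the system evaluated in the study), a virtual wall can be thought of as a small policy object: a transparency level plus an apply-set of people it covers. The Python sketch below is a minimal, hypothetical rendering of this idea; the class names and the three transparency levels are assumptions made for illustration only.

    # Illustrative sketch only: a minimal model of the "virtual wall" privacy
    # metaphor. The class names, fields and transparency levels are assumptions
    # for illustration, not the system studied in the talk.
    from dataclasses import dataclass, field
    from enum import Enum

    class Transparency(Enum):
        TRANSPARENT = "share in full"     # anyone in the apply-set sees the data
        TRANSLUCENT = "share a summary"   # only coarse/aggregated data is shared
        OPAQUE = "share nothing"          # data stays private

    @dataclass
    class Wall:
        """A single wall: who it applies to and how much it lets through."""
        transparency: Transparency
        apply_set: set = field(default_factory=set)  # e.g. {"colleagues"}

        def allows(self, requester_group: str) -> bool:
            """True if someone in this group may see (some of) the sensed data."""
            return (requester_group in self.apply_set
                    and self.transparency is not Transparency.OPAQUE)

    # Example: location data is fully visible to friends, hidden from everyone else.
    location_wall = Wall(Transparency.TRANSPARENT, {"friends"})
    print(location_wall.allows("friends"))    # True
    print(location_wall.allows("strangers"))  # False

The point of the metaphor, as the abstract notes, is that users reason about who a wall applies to rather than about fine-grained data granularity.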
About Luke:
Luke’s bio on the SACHI website.
Dr Apu Kapadia is a Distinguished SICSA Visitor in August 2012. As part of his visit we are organising a pair of masterclasses on running mobile user studies. These masterclasses are open to all SICSA PhD students. Students will need to be available to attend both masterclasses:
- Thursday 2 August, University of Glasgow
- Thursday 9 August, University of St Andrews
The classes will cover how to design and run a mobile user study using smartphones, and in particular the use of the experience sampling method (ESM), a currently popular methodology for collecting rich data from real-world participants. In the first class, attendees will learn about the methodology and be given a smartphone. Attendees will then carry the smartphone and participate in a small study, and we will cover data analysis in the second class in St Andrews. The organisers have experience in running ESM studies which have looked at mobility, social networking, security and privacy, but the methodology should be of interest to PhD students in both the NGI and MMI themes.
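For readers unfamiliar with ESM, the core idea is to prompt participants with short questionnaires at (pseudo-)random moments during their day, capturing experience in context rather than in retrospect. The Python sketch below shows one hypothetical way to generate such a daily prompt schedule; the parameters (five prompts, a 9am–9pm window, a one-hour minimum gap) are illustrative assumptions, not the protocol used in these masterclasses.

    # Illustrative sketch only: schedule a day's ESM prompts at random,
    # well-spaced times. Parameters are assumptions for illustration.
    import random
    from datetime import datetime, time, timedelta

    def daily_prompt_times(day, n_prompts=5, start=time(9, 0), end=time(21, 0),
                           min_gap_minutes=60):
        """Pick n_prompts random minutes between start and end, at least
        min_gap_minutes apart, at which the phone would ask the participant
        a short questionnaire about their current context."""
        window_start = datetime.combine(day.date(), start)
        window_minutes = int((datetime.combine(day.date(), end)
                              - window_start).total_seconds() // 60)
        while True:  # resample until all gaps are large enough
            offsets = sorted(random.sample(range(window_minutes), n_prompts))
            if all(b - a >= min_gap_minutes for a, b in zip(offsets, offsets[1:])):
                return [window_start + timedelta(minutes=m) for m in offsets]

    # Example: print today's prompt schedule.
    for t in daily_prompt_times(datetime.now()):
        print(t.strftime("%H:%M"))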
If you have any questions or would like to attend, please e-mail Tristan Henderson (tnhh@st-andrews.ac.uk) before the 16th of July.
Biography of Dr Apu Kapadia:
Apu Kapadia is an Assistant Professor of Computer Science and Informatics at the School of Informatics and Computing, Indiana University. He received his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in October 2005.
Dr Kapadia has published over thirty peer-reviewed conference papers and journal articles focused on privacy, with several of these at top-tier venues such as ACM TISSEC, IEEE TDSC, PMC, CCS, NDSS, Pervasive, and SOUPS. For his work on accountable anonymity, two of his papers were named as “Runners-up for PET Award 2009: Outstanding Research in Privacy Enhancing Technologies”, a prestigious award in the privacy community. His work on usable metaphors for controlling privacy was given the “Honorable Mention Award (Runner-up for Best Paper)” at Pervasive. Dr Kapadia’s recent work on smartphone “sensory” malware that makes use of onboard sensors was published at NDSS and received widespread media coverage. His work on analyzing privacy leaks on Twitter also received media attention, being named one of the “7 Must-Read Twitter Studies from 2011” and one of “The 10 Most Interesting Social Media Studies of 2011”.
Dr Kapadia is interested in topics related to systems security and privacy. He is particularly interested in security and privacy issues related to mobile sensing, privacy-enhancing technologies to facilitate anonymous access to services with some degree of accountability, usable mechanisms to improve security and privacy, and security in decentralized and mobile environments.
This week Aaron has been attending a research workshop of the Israel Science Foundation on Ubiquitous User Modeling (U2M’2012) – State of the Art and Current Challenges – in Haifa, Israel. Aaron’s talk at this event was entitled Eyes, Gaze, Displays: User Interface Personalisation “You Lookin’ at me?”. In it, he covered work with Mike Bennett, Umar Rashid, Jakub Dostal, Miguel A. Nacenta and Per Ola Kristensson from SACHI. The talk was a good way to show the interlocking and related research going on in SACHI.
His talk referenced a number of recent papers, including:
- Factors Influencing Visual Attention Switch in Multi-Display User Interfaces: A Survey, Umar Rashid, Miguel A. Nacenta, Aaron J. Quigley, International Symposium on Pervasive Displays, 2012
- The cost of display switching: a comparison of mobile, large display and hybrid UI configurations, Umar Rashid, Miguel A. Nacenta, Aaron J. Quigley, AVI ’12: Proceedings of the International Working Conference on Advanced Visual Interfaces
- Workshop on Infrastructure and Design Challenges of Coupled Display Visual Interfaces (in conjunction with Advanced Visual Interfaces 2012, AVI ’12), Aaron Quigley, Alan Dix, Miguel Nacenta, Tom Rodden, AVI ’12: Proceedings of the International Working Conference on Advanced Visual Interfaces
- Designing Mobile Computer Vision Applications for the Wild: Implications on Design and Intelligibility, Jakub Dostal, Per Ola Kristensson, Aaron J. Quigley, Pervasive Intelligibility: the Second Workshop on Intelligibility and Control in Pervasive Computing
- Creating Personalized Digital Human Models of Perception for Visual Analytics, Mike Bennett, Aaron J. Quigley, UMAP 2011: 19th International Conference on User Modeling, Adaptation and Personalization, pp. 25–37, Girona, Spain, 2011
- A taxonomy for and analysis of multi-person-display ecosystems, Terrenghi L., Quigley A., Dix A., Journal of Personal and Ubiquitous Computing (2009) 13:583–598
The alternative yet related viewpoints in this work made for a stimulating presentation and fruitful views for the international audience.
Speaker: Lindsay MacDonald, University of Calgary, Canada
Date/Time: 1-2pm July 3, 2012
Location: 1.33a Jack Cole, University of St Andrews (directions)
Abstract:
In contrast to the romantic image of an artist working alone in a studio, large-scale media art pieces are often developed and built by interdisciplinary teams. Lindsay MacDonald will describe the process of creating and developing one of these pieces, A Delicate Agreement, within such a team, and offer personal insight on the impact that this has had on her artistic practice.
A Delicate Agreement is a gaze-triggered interactive installation that explores the potentially awkward act of riding in an elevator with another person. It is a set of elevator doors with a peephole in each door that entices viewers to peer inside and observe an animation of the passengers. Each elevator passenger, or character, has a programmed personality that enables them to act and react to the other characters’ behaviour and the viewers’ gaze. The result is the emergence of a rich interactive narrative made up of encounters in the liminal time and space of an elevator ride.
A Delicate Agreement is currently part of the New Alberta Contemporaries exhibition at the Esker Foundation in Calgary, Canada. For more information about the piece, please visit http://www.lindsaymacdonald.net/portfolio/a-delicate-agreement/.
About Lindsay:
Lindsay MacDonald is a Ph.D. student, artist, designer and interdisciplinary researcher from the Interactions Lab (iLab) at the University of Calgary in Canada. Lindsay’s approach to research and creative production combines methodologies from both computer science and art, and she divides her time between the iLab and her studio in the Department of Art. Her research interests include interaction design, coded behaviour and performance, and building interactive art installations.