St Andrews HCI Research Group

News

Luke Hutton, Virtual Walls: Studying the effectiveness of the privacy metaphor in the real world


Speaker: Luke Hutton, SACHI
Date/Time: 1-2pm, July 10, 2012
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
The virtual wall is a simple privacy metaphor for ubiquitous computing environments. By specifying the transparency of a wall and the people to whom the wall applies, a user can easily manage privacy policies for sharing their sensed data in a ubiquitous computing system.
While previous research shows that users understand the wall metaphor in a lab setting, the metaphor has not been studied for its practicality in the real world. This talk will describe a smartphone-based experience sampling method study (N=20) demonstrating that the metaphor is sufficiently expressive to be usable in real-world scenarios. Furthermore, while people’s preferences for location sharing are well understood, our study provides insight into sharing preferences across a multitude of contexts. We find that whom data are shared with is the most important factor for users, reinforcing the walls approach of supporting apply-sets and abstracting away further granularity to provide improved usability.
About Luke:
Luke’s bio on the SACHI website.

Masterclass in mobile user studies


Dr Apu Kapadia is a Distinguished SICSA Visitor in August 2012. As part of his visit we are organising a pair of masterclasses in running mobile user studies. These masterclasses are open to all SICSA PhD students. Students will need to be available to attend both masterclasses:

  • Thursday 2 August, University of Glasgow
  • Thursday 9 August, University of St Andrews

The classes will cover how to design and run a mobile user study using smartphones, and in particular the use of the experience sampling method (ESM), a currently popular methodology for collecting rich data from real-world participants. In the first class, attendees will learn about the methodology and be given a smartphone. Attendees will then carry the smartphone and participate in a small study, and we will cover data analysis in the second class in St Andrews. The organisers have experience in running ESM studies looking at mobility, social networking, security and privacy, but the methodology should be of interest to PhD students in both the NGI and MMI themes.

If you have any questions or would like to attend, please e-mail Tristan Henderson (tnhh@st-andrews.ac.uk) before the 16th of July.

Biography of Dr Apu Kapadia:

Apu Kapadia is an Assistant Professor of Computer Science and Informatics at the School of Informatics and Computing, Indiana University. He received his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in October 2005.

Dr Kapadia has published over thirty peer-reviewed conference papers and journal articles focused on privacy, with several of these at top-tier venues such as ACM TISSEC, IEEE TDSC, PMC, CCS, NDSS, Pervasive, and SOUPS. For his work on accountable anonymity, two of his papers were named “Runners-up for PET Award 2009: Outstanding Research in Privacy Enhancing Technologies”, a prestigious award in the privacy community. His work on usable metaphors for controlling privacy was given the “Honorable Mention Award (Runner-up for Best Paper)” at Pervasive. Dr Kapadia’s recent work on smartphone “sensory” malware that makes use of onboard sensors was published at NDSS and received widespread media coverage. His work on analyzing privacy leaks on Twitter also received media attention, being named one of the “7 Must-Read Twitter Studies from 2011” and one of “The 10 Most Interesting Social Media Studies of 2011”.

Dr Kapadia is interested in topics related to systems security and privacy. He is particularly interested in security and privacy issues related to mobile sensing, privacy-enhancing technologies to facilitate anonymous access to services with some degree of accountability, usable mechanisms to improve security and privacy, and security in decentralized and mobile environments.

Ubiquitous User Modeling – U2M'2012


This week Aaron has been attending a research workshop of the Israel Science Foundation on Ubiquitous User Modeling (U2M’2012) – State of the art and current challenges, in Haifa, Israel. Aaron’s talk at this event was entitled Eyes, Gaze, Displays: User Interface Personalisation “You Lookin’ at me?”. In it he covered work with Mike Bennett, Umar Rashid, Jakub Dostal, Miguel A. Nacenta and Per Ola Kristensson from SACHI. The talk was a good way to show the interlocking and related research going on in SACHI.
His talk included references to a number of recent papers.

The alternative yet related viewpoints in this work made for a stimulating presentation and fruitful discussion with the international audience.

Lindsay MacDonald, A Very Delicate Agreement: A Process of Collaboration Between Disciplines


Speaker: Lindsay MacDonald, University of Calgary, Canada
Date/Time: 1-2pm, July 3, 2012
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
In contrast to the romantic image of an artist working alone in a studio, large-scale media art pieces are often developed and built by interdisciplinary teams. Lindsay MacDonald will describe the process of creating and developing one of these pieces, A Delicate Agreement, within such a team, and offer personal insight into the impact that this has had on her artistic practice.
A Delicate Agreement is a gaze-triggered interactive installation that explores the potentially awkward act of riding in an elevator with another person. It is a set of elevator doors with a peephole in each door that entices viewers to peer inside and observe an animation of the passengers. Each elevator passenger, or character, has a programmed personality that enables them to act and react to the other characters’ behaviour and the viewers’ gaze. The result is the emergence of a rich interactive narrative made up of encounters in the liminal time and space of an elevator ride.
A Delicate Agreement is currently part of the New Alberta Contemporaries exhibition at the Esker Foundation in Calgary, Canada. For more information about the piece, please visit http://www.lindsaymacdonald.net/portfolio/a-delicate-agreement/.
About Lindsay:
Lindsay MacDonald is a PhD student, artist, designer and interdisciplinary researcher from the Interactions Lab (iLab) at the University of Calgary in Canada. Lindsay’s approach to research and creative production combines methodologies from both computer science and art, and she divides her time between the iLab and her studio in the Department of Art. Her research interests include interaction design, coded behaviour and performance, and building interactive art installations.

Pervasive 2012


This week Aaron Quigley and Tristan Henderson attended Pervasive 2012, the Tenth International Conference on Pervasive Computing, at Newcastle University.
On Monday Aaron attended Pervasive Intelligibility, the Second Workshop on Intelligibility and Control in Pervasive Computing. Here he presented a paper entitled Designing Mobile Computer Vision Applications for the Wild: Implications on Design and Intelligibility (PDF) by Per Ola Kristensson, Jakub Dostal and Aaron Quigley. Later, he was a panellist with Judy Kay and Simone Stumpf, discussing with all participants the research challenges of intelligibility in pervasive computing.
On Tuesday Tristan attended the First Workshop on recent advances in behavior prediction and pro-active pervasive computing where he presented the paper Predicting location-sharing privacy preferences in social network applications by Greg Bigwood, Fehmi Ben Abdesslem and Tristan Henderson.
On Tuesday Aaron chaired the Doctoral Consortium with Elaine Huang from the University of Zurich, with five panellists and nine students. The panellists included Adrian Friday (University of Lancaster, UK), Jin Nakazawa (Keio University, Japan) and AJ Brush (Microsoft Research, Seattle, USA).
The Pervasive 2012 doctoral consortium provided a collegial and supportive forum in which PhD students could present and defend their doctoral research-in-progress with constructive feedback and discussion. The consortium was guided by a panel of experienced researchers and practitioners with both academic and industrial experience. It offered students a valuable opportunity to receive high-quality feedback and fresh perspectives from recognised international experts in the field, and to engage with other senior doctoral students.
The day ended with a career Q&A session with an extended panel including Tristan Henderson from SACHI and Rene Mayrhofer. Following this, the panellists and students had dinner together to continue active research and career discussions.
Along with his work as a member of the joint steering committee of Pervasive and UbiComp, Aaron was a session chair for the HCI session on Thursday.

Two papers presented at the Pervasive Displays symposium in Porto



Last week (4-5 June, 2012) two papers from SACHI were presented at the International Symposium on Pervasive Displays. The symposium took place in Porto, Portugal, and was a great showcase of work on pervasive display systems from research groups in Europe, Japan and North America.
We presented two papers:
Factors Influencing Visual Attention Switch in Multi-Display User Interfaces: A Survey, which is part of the dissertation work of Umar Rashid, and
The LunchTable: A Multi-User, Multi-Display System for Information Sharing in Casual Group Interactions, which is part of our collaborations with the iLab at the University of Calgary.
Both papers were well received and will soon be available through the ACM Digital Library, but the most interesting part was sharing the energy of the pervasive displays community and the amazing discussions. We are looking forward to next year’s edition, which will be in California.

Carman Neustaedter, Connecting Families over Distance


Speaker: Carman Neustaedter, Simon Fraser University, Canada
Date/Time: 1-2pm, June 18 (Monday), 2012
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
Families often have a real need and desire to stay connected with their remote family members and close friends. For example, grandparents want to see their grandchildren grow up, empty-nest parents want to know about the well-being of their adult children, and parents want to be involved in their children’s daily routines and happenings while away from them. Video conferencing is one technology that is increasingly being used by families to support this type of need. In this talk, I will give an overview of the research that my students and I have done in this space. This includes studies of the unique ways in which families with children, long-distance couples, and teenagers make use of existing video chat systems to support ‘presence’ and ‘connection’ over distance. I will also show several systems we have designed to support always-on video connections that move beyond ‘talking heads’ to ‘shared experiences’.
About Carman:
Dr. Carman Neustaedter is an Assistant Professor in the School of Interactive Arts and Technology at Simon Fraser University, Canada. Dr. Neustaedter specializes in the areas of human-computer interaction, domestic computing, and computer-supported collaboration. He is the director of the Connections Lab, an interdisciplinary research group focused on the design and use of technologies for connecting people through space and time. This includes design for families and friends, support for workplace collaboration, and bringing people together through pervasive games. For more information, see:
Connections Lab
Carman Neustaedter

Jim Young, Leveraging People's Everyday Skill Sets for Interaction with Robots


Speaker: Jim Young, University of Manitoba, Canada
Date/Time: 1-2pm, June 12, 2012
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
Human-Robot Interaction (HRI), broadly, is the study of how people and robots can work together. This includes core interaction design problems of creating interfaces for effective robot control and communication with people, and sociological and psychological studies of how people and robots can share spaces or work together. In this talk I will introduce several of my past HRI projects, ranging from novel control schemes for collocated or remote control and programming robotic style by demonstration, to developing foundations for evaluating human-robot interaction, and will briefly discuss my current work in robotic authority and gender studies of human-robot interaction. In addition, I will introduce the JST ERATO Igarashi Design Interface Project, a large research project directed by Dr. Takeo Igarashi, with which I have been closely involved over the last several years.
About Jim:
James (Jim) Young is an Assistant Professor at the University of Manitoba, Canada, where he founded the Human-Robot Interaction lab, and is involved with the Human-Computer Interaction lab with Dr. Pourang Irani and Dr. Andrea Bunt. He received his BSc from Vancouver Island University in 2005, and completed his PhD in Social Human-Robot Interaction at the University of Calgary in 2010 with Dr. Ehud Sharlin, co-supervised by Takeo Igarashi at the University of Tokyo. His background is rooted strongly in the intersection of sociology and human-robot interaction, and in developing robotic interfaces which leverage people’s existing skills rather than making them learn new ones.

Special Issue on Privacy Methodologies in HCI


Tristan Henderson is co-editing a special issue of the International Journal of Human-Computer Studies on Privacy Methodologies in HCI.
http://www.journals.elsevier.com/international-journal-of-human-computer-studies/call-for-papers/special-issue-of-international-journal-of/

Topic:

Privacy has become one of the most contested social issues of the information age. For researchers and practitioners of human-computer interaction (HCI), interest in privacy is sparked not only by changes in the scale and scope of personal information collected and stored about people, but also by the increasing ubiquity, sociability and mobility of personal technology. However, privacy has proven to be a particularly difficult construct to study. As a construct, privacy is also open to investigation from multiple perspectives and ontological approaches, with key research coming from law, psychology, computer science and economics.
The special issue on privacy methodologies in HCI invites high quality research papers that use a variety of methods where the author(s) reflect on and evaluate the method itself, both as applied in their specific context, and more widely, as well as the privacy aspect under consideration.

Authors are asked to consider these key questions in their papers:

  • What was the privacy context being researched?
  • Why was the particular methodology chosen for a given context?
  • What selection criteria were used? What were the advantages and disadvantages of the methodology?
  • How were bias and priming avoided? Was there evidence of a ‘measurement problem’?
  • How did the researcher ensure the sample was representative, avoiding sample-based biases?
  • What were the results? How could this method be used to study other aspects of HCI and privacy?

Submission instructions:

Manuscripts should generally not exceed 8000 words. Papers should be prepared according to the IJHCS Guide for authors, and should be submitted online according to the journal’s instructions. The IJHCS Guide for authors and online submission are available at http://www.elsevier.com/locate/ijhcs.

Important dates:

  • Submission deadline: October 15, 2012
  • Notify authors: January 5, 2013
  • Publication date: late 2013

Guest Editors:

  • Dr. Asimina Vasalou (University of Birmingham)
  • Dr. Tristan Henderson (University of St Andrews)
  • Dr. Adam Joinson (University of Bath)

Aaron invited to present in Haifa Israel


Professor Aaron Quigley has been invited to attend and present at a research workshop of the Israel Science Foundation on Ubiquitous User Modeling (U2M’2012) – State of the art and current challenges in Haifa, Israel later this month (June 25th – June 28th).


Based on his research work with colleagues, current students Umar Rashid and Jakub Dostal, and former student Dr. Mike Bennett, he will be presenting a talk entitled “You lookin’ at me? ‘Eyes, Gaze, Displays and User Interface Personalisation'”.


This presentation draws on different yet related strands of research from four research papers: three published and one under review for a journal. The three published papers have appeared in UMAP, AVI, and the Second Workshop on Intelligibility and Control in Pervasive Computing. Aaron’s overarching vision of bridging the digital-physical divide is embodied in this work to ensure we have more seamless interaction with computers.


The abstract for this talk is as follows:
Our bodies shape our experience of the world, and our bodies influence what we design. How important are the physical differences between people? Can we model the physiological differences and use the models to adapt and personalise designs, user interfaces and artefacts? Can we model, measure and predict the cost of users altering their gaze in single- or multi-display environments? If so, can we personalise interfaces using this knowledge? What about when the user is moving and the distance between user and screen is varying? Can this be considered a new modality and used to personalise interfaces along with physiological differences and current gaze? In this talk we seek to answer some of these questions. We introduce an Individual Observer Model of human eyesight, which we use to simulate 3600 biologically valid human eyes. We also report on controlled lab and outdoor experiments with real users, measuring both gaze and distance from the screen in an attempt to quantify the cost of attention switching along with the use of distance as a modality. In each case, for distance, gaze or expected eyesight, we would like to develop models which allow us to make predictions about how easy or hard it is to see visual information and visual designs, and to alter designs to suit individual users based on their current context.

Prior to this workshop Professor Quigley was asked to comment on some of the grand challenges he saw for User Modelling and Ubiquitous Computing. The following are the challenges he posed:

  1. Are user models and context data so fundamental that future UbiComp operating systems need to have them built in as first order features of the OS? Or in your opinion is this the wrong approach? Discuss.
  2. There are many facets of a ubiquitous computing system from low-level sensor technologies in the environment, through the collection, management, and processing of context data through to the middleware required to enable the dynamic composition of devices and services envisaged. Where do User Models reside within this? Are they something only needed occasionally (or not at all) for some services or experiences or needed for all?
  3. Ubicomp is a model of computing in which computation is everywhere and computer functions are integrated into everything. It will be built into the basic objects, environments, and the activities of our everyday lives in such a way that no one will notice its presence. If so, how do we know what the system knows, assumes or infers about us in its decision-making?
  4. Ubicomp represents an evolution from the notion of a computer as a single device, to the notion of a computing space comprising personal and peripheral computing elements and services all connected and communicating as required; in effect, “processing power so distributed throughout the environment that computers per se effectively disappear” or the so-called Calm Computing. The advent of ubicomp does not mean the demise of the desktop computer in the near future. Is Ubiquitous User Modelling the key problem to solve in moving people from desktop/mobile computing into UbiComp use scenarios? If not, what is?
  5. Context data can be provided, sensed or inferred. Context includes information from the person (physiological state), the sensed environment (environmental state) and the computational environment (computational state) that can be provided to alter an application’s behaviour. How much or how little of this should be incorporated into individual UbiComp User Models?