Today’s logo at slashdot.org was created by Jason Jacques, a new PhD student in the SACHI group!
From slashdot.org: Artist Jason Jacques says: “While the main text itself is ‘obvious’ in its fully animated form, this logo provides an additional challenge in that the remainder of the message must be decoded. Can you figure it out? If so, mail your answer to logo15@slashdot.org.” How did he do it? “After calculating the necessary sizes and bit patterns on paper, the static image of the entire message was generated using Pixelmator on Mac OS X (Lion). This image was then processed using ImageMagick (and a short shell script) on Ubuntu. Additional editing was done to the logo portion in Pixelmator (OS X). These frames were then assembled into an animated GIF using Jasc Animation Shop on Windows XP. Finally, the images were optimised to minimise their size using ImageOptim, back on OS X.”
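For readers curious how the frame-generation step works in general, here is a small illustrative sketch. It uses Python and Pillow rather than the ImageMagick shell script and Jasc Animation Shop pipeline described above, and the file names, frame height and timing are assumed values, not the ones used for the actual logo.

```python
# Illustrative sketch only: slice a tall static image into fixed-height frames
# and assemble them into an animated GIF. Frame height, file names and timing
# are assumptions, not the values used for the Slashdot logo.
from PIL import Image

FRAME_HEIGHT = 32  # assumed height of a single animation frame, in pixels

static = Image.open("message.png")   # the pre-rendered static image
width, height = static.size          # assumes height is a multiple of FRAME_HEIGHT

# Cut the static image into frames, top to bottom.
frames = [
    static.crop((0, y, width, y + FRAME_HEIGHT))
    for y in range(0, height, FRAME_HEIGHT)
]

# Save the first frame and append the rest to produce an animated GIF.
frames[0].save(
    "logo.gif",
    save_all=True,
    append_images=frames[1:],
    duration=100,  # milliseconds per frame (assumed)
    loop=0,        # loop forever
)
```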
News
When: Wednesday 12th of September, 9:30am – 5pm (with a 1-hour break for lunch)
Where: Sub-honours lab in Jack Cole building (0.35)
As part of this competition, you may be offered an opportunity to participate in a Human-Computer Interaction study on subtle interaction. Participation in this study is completely voluntary.
There will be two competitive categories:
HCI study participants:
1st prize: 7” Samsung Galaxy Tab 2
2nd prize: £50 Amazon voucher
3rd prize: £20 Amazon voucher
Everyone:
1st prize: £50 Amazon voucher
2nd prize: £20 Amazon voucher
3rd prize: £10 Amazon voucher
We will try to include as many programming languages as is reasonable, so if you have any special requests, let us know.
If you have one, bring a laptop in case we run out of lab computers!
If you have any questions, please email Jakub at jd67@st-andrews.ac.uk
Speaker: Laurel Riek, Computer Science and Engineering, University of Notre Dame
Date/Time: 1-2pm September 4, 2012
Location: 1.33a Jack Cole, University of St Andrews (directions)
Abstract:
In the United States, an estimated 98,000 people are killed each year and $17.1 billion is lost due to medical errors. One way to prevent these errors is to have clinical students engage in simulation-based medical education, to help move the learning curve away from the patient. This training often takes place on human-sized android robots, called high-fidelity patient simulators (HFPS), which are capable of conveying human-like physiological cues (e.g., respiration, heart rate). Training with them can include anything from diagnostic skills (e.g., recognizing sepsis, a failure that recently killed 12-year-old Rory Staunton) to procedural skills (e.g., IV insertion) to communication skills (e.g., breaking bad news). HFPS systems allow students a chance to safely make mistakes within a simulation context without harming real patients, with the goal that these skills will ultimately be transferable to real patients.
While simulator use is a step in the right direction toward safer healthcare, one major challenge and critical technology gap is that none of the commercially available HFPS systems exhibit facial expressions, gaze, or realistic mouth movements, despite the vital importance of these cues in helping providers assess and treat patients. This is a critical omission, because almost all areas of health care involve face-to-face interaction, and there is overwhelming evidence that providers who are skilled at decoding communication cues are better healthcare providers – they have improved outcomes, higher compliance, greater safety, higher satisfaction, and they experience fewer malpractice lawsuits. In fact, communication errors are the leading cause of avoidable patient harm in the US: they are the root cause of 70% of sentinel events, 75% of which lead to a patient dying.
In the Robotics, Health, and Communication (RHC) Lab at the University of Notre Dame, we are addressing this problem by leveraging our expertise in android robotics and social signal processing to design and build a new, facially expressive, interactive HFPS system. In this talk, I will discuss our efforts to date, including: in situ observational studies exploring how individuals, teams, and operators interact with existing HFPS technology; design-focused interviews with simulation center directors and educators in which future HFPS systems are envisioned; and initial software prototyping efforts incorporating novel facial expression synthesis techniques.
About Laurel:
Dr. Laurel Riek is the Clare Boothe Luce Assistant Professor of Computer Science and Engineering at the University of Notre Dame. She directs the RHC Lab, and leads research on human-robot interaction, social signal processing, facial expression synthesis, and clinical communication. She received her PhD at the University of Cambridge Computer Laboratory, and prior to that worked for eight years as a Senior Artificial Intelligence Engineer and Roboticist at MITRE.
Speaker: Luke Hutton, SACHI
Date/Time: 1-2pm July 10, 2012
Location: 1.33a Jack Cole, University of St Andrews (directions)
Abstract:
The virtual wall is a simple privacy metaphor for ubiquitous computing environments. By expressing the transparency of a wall and the people to whom the wall applies, a user can easily manage privacy policies for sharing their sensed data in a ubiquitous computing system.
While previous research shows that users understand the wall metaphor in a lab setting, the metaphor has not been studied for its practicality in the real world. This talk will describe a smartphone-based experience sampling method study (N=20) to demonstrate that the metaphor is sufficiently expressive to be usable in real-world scenarios. Furthermore, while people’s preferences for location sharing are well understood, our study provides insight into sharing preferences for a multitude of contexts. We find that with whom data are shared is the most important factor for users, reinforcing the virtual wall approach of supporting apply-sets and abstracting away further granularity to provide improved usability.
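As a rough, hypothetical illustration of the metaphor (not the system evaluated in the study), a wall can be modelled as a transparency level plus an apply-set of people; the names and the three assumed transparency levels below are illustrative only.

```python
# Hypothetical illustration of the virtual wall metaphor; the transparency
# levels, names and semantics here are assumptions, not the studied system.
from dataclasses import dataclass, field

TRANSPARENT, TRANSLUCENT, OPAQUE = "transparent", "translucent", "opaque"

@dataclass
class Wall:
    transparency: str                            # how much sensed detail passes through
    apply_set: set = field(default_factory=set)  # the people this wall applies to

    def shares_with(self, person: str) -> bool:
        """Sensed data is visible to `person` unless the wall facing them is opaque.
        (A translucent wall would share only coarse-grained data; not modelled here.)"""
        return person in self.apply_set and self.transparency != OPAQUE

# Example: be transparent to family, opaque to colleagues.
family_wall = Wall(TRANSPARENT, {"alice", "bob"})
work_wall = Wall(OPAQUE, {"carol"})
print(family_wall.shares_with("alice"))  # True
print(work_wall.shares_with("carol"))    # False
```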
About Luke:
Luke’s bio on the SACHI website.
Speaker: Lindsay MacDonald, University of Calgary, Canada
Date/Time: 1-2pm July 3, 2012
Location: 1.33a Jack Cole, University of St Andrews (directions)
Abstract:
In contrast to the romantic image of an artist working alone in a studio, large-scale media art pieces are often developed and built by interdisciplinary teams. Lindsay MacDonald will describe the process of creating and developing one of these pieces, A Delicate Agreement, within such a team, and offer personal insight on the impact that this has had on her artistic practice.
A Delicate Agreement is a gaze-triggered interactive installation that explores the potentially awkward act of riding in an elevator with another person. It is a set of elevator doors with a peephole in each door that entices viewers to peer inside and observe an animation of the passengers. Each elevator passenger, or character, has a programmed personality that enables them to act and react to the other characters’ behaviour and the viewers’ gaze. The result is the emergence of a rich interactive narrative made up of encounters in the liminal time and space of an elevator ride.
A Delicate Agreement is currently part of the New Alberta Contemporaries exhibition at the Esker Foundation in Calgary, Canada. For more information about the piece, please visit http://www.lindsaymacdonald.net/portfolio/a-delicate-agreement/.
About Lindsay:
Lindsay MacDonald is a PhD student, artist, designer and interdisciplinary researcher from the Interactions Lab (iLab) at the University of Calgary in Canada. Lindsay’s approach to research and creative production combines methodologies from both computer science and art, and she divides her time between the iLab and her studio in the Department of Art. Her research interests include interaction design, coded behaviour and performance, and building interactive art installations.
Speaker: Carman Neustaedter, Simon Fraser University, Canada
Date/Time: 1-2pm June 18 (Monday), 2012
Location: 1.33a Jack Cole, University of St Andrews (directions)
Abstract:
Families often have a real need and desire to stay connected with their remote family members and close friends. For example, grandparents want to see their grandchildren grow up, empty-nest parents want to know about the well-being of their adult children, and parents want to be involved in their children’s daily routines and happenings while away from them. Video conferencing is one technology that is increasingly being used by families to support this type of need. In this talk, I will give an overview of the research that my students and I have done in this space. This includes studies of the unique ways in which families with children, long-distance couples, and teenagers make use of existing video chat systems to support ‘presence’ and ‘connection’ over distance. I will also show several systems we have designed to support always-on video connections that move beyond ‘talking heads’ to ‘shared experiences’.
About Carman:
Dr. Carman Neustaedter is an Assistant Professor in the School of Interactive Arts and Technology at Simon Fraser University, Canada. Dr. Neustaedter specializes in the areas of human-computer interaction, domestic computing, and computer-supported collaboration. He is the director of the Connections Lab, an interdisciplinary research group focused on the design and use of technologies for connecting people through space and time. This includes design for families and friends, support for workplace collaboration, and bringing people together through pervasive games. For more information, see:
Connections Lab
Carman Neustaedter
Speaker: Jim Young, University of Manitoba, Canada
Date/Time: 1-2pm June 12, 2012
Location: 1.33a Jack Cole, University of St Andrews (directions)
Abstract:
Human-Robot Interaction (HRI), broadly, is the study of how people and robots can work together. This includes core interaction design problems of creating interfaces for effective robot control and communication with people, and sociological and psychological studies of how people and robots can share spaces or work together. In this talk I will introduce several of my past HRI projects, ranging from novel control schemes for collocated or remote control, to programming robotic style by demonstration, to developing foundations for evaluating human-robot interaction, and will briefly discuss my current work in robotic authority and gender studies of human-robot interaction. In addition, I will introduce the JST ERATO Igarashi Design Interface Project, a large research project directed by Dr. Takeo Igarashi, with which I have been closely involved over the last several years.
About Jim:
James (Jim) Young is an Assistant Professor at the University of Manitoba, Canada, where he founded the Human-Robot Interaction lab, and is involved with the Human-Computer Interaction lab with Dr. Pourang Irani and Dr. Andrea Bunt. He received his BSc from Vancouver Island University in 2005, and completed his PhD in Social Human-Robot Interaction at the University of Calgary in 2010 with Dr. Ehud Sharlin, co-supervised by Takeo Igarashi at the University of Tokyo. His background is rooted strongly in the intersection of sociology and human-robot interaction, and in developing robotic interfaces which leverage people’s existing skills rather than making them learn new ones.
Speaker: Tristan Henderson, SACHI
Date/Time: 1-2pm May 29, 2012
Location: 1.33a Jack Cole, University of St Andrews (directions)
Abstract:
The prevalence of social network sites and smartphones has led to many people sharing their locations with others. Privacy concerns are seldom addressed by these services; the default privacy settings may be either too restrictive or too lax, resulting in under-exposure or over-exposure of location information.
One mechanism for alleviating over-sharing is through personalised privacy settings that automatically change according to users’ predicted preferences. This talk will describe how we use data collected from a location-sharing user study (N=80) to investigate whether users’ willingness to share their locations can be predicted. We find that while default settings match users’ actual preferences only 68% of the time, machine-learning classifiers can predict up to 85% of users’ preferences. Using these predictions instead of default settings would reduce the over-exposed location information by 40%.
This work has mainly been performed by my PhD student Greg Bigwood, but Tristan will be presenting the paper (at the AwareCast Pervasive workshop) because Greg will be busy in St Andrews graduating!
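As a purely illustrative sketch of the prediction approach described in the abstract, the snippet below trains a toy per-user sharing classifier and compares its cross-validated accuracy with a fixed default policy; the features, data and choice of classifier are assumptions and do not reproduce the paper’s actual method.

```python
# Hypothetical sketch: predict whether a user would share their location in a
# given context, and compare against a fixed default setting. Features, toy
# data and the classifier are assumptions, not the paper's method.
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Each row: (hour of day, at_home, requester_is_friend); label 1 = share.
X = [
    [9, 1, 1], [13, 0, 1], [22, 1, 0], [8, 0, 0],
    [19, 1, 1], [23, 1, 0], [11, 0, 1], [15, 0, 0],
]
y = [1, 1, 0, 0, 1, 0, 1, 0]  # toy labels, purely illustrative

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
classifier_accuracy = cross_val_score(clf, X, y, cv=4).mean()

# A "share with everyone by default" policy is right only as often as the
# user actually wanted to share.
default_accuracy = sum(y) / len(y)
print(f"classifier: {classifier_accuracy:.2f}, default: {default_accuracy:.2f}")
```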
About Tristan:
Speaker: Helen Purchase, University of Glasgow
Date/Time: 1-2pm May 15, 2012
Location: 1.33a Jack Cole, University of St Andrews (directions)
Abstract:
The visual design of an interface is not merely an ‘add-on’ to the functionality provided by a system: it is well-known that it can affect user preference, engagement and motivation, but does it have any effect on user performance? Can the efficiency or effectiveness of a system be improved by its visual design? This seminar will report on experiments that investigate whether any such effect can be quantified and tested. Key to this question is the definition of an unambiguous, quantifiable characterisation of an interface’s ‘visual aesthetic’: ways in which this could be determined will be discussed.
About Helen:
Dr Helen Purchase is Senior Lecturer in the School of Computing Science at the University of Glasgow. She has worked in the area of empirical studies of graph layout for several years, and also has research interests in visual aesthetics, task-based empirical design, collaborative learning in higher education, and sketch tools for design. She is currently writing a book on empirical methods for HCI research.
Speaker: Umer Rashid, University of St Andrews, UK
Date/Time: 1-2pm May 1, 2012
Location: 1.33a Jack Cole, University of St Andrews (directions)
Abstract:
A very apparent drawback of mobile devices is that their screens do not allow for the display of large amounts of information at once without requiring interaction, which limits the possibilities for information access and manipulation on the go. Attaching a large external display can help a mobile device user view more content at once. We report on a study investigating how different configurations of input and output across displays affect task performance, subjective workload and preferences in map, text and photo search tasks. After conducting a detailed analysis of the performance differences across different UI configurations, we provide recommendations for the design of distributed user interfaces.
About Umer:
Umer Rashid conducted his PhD research under the supervision of Prof. Aaron Quigley in the School of Computer Science at the University of St Andrews. The goal of his research is to look into the ways mobile interaction with external large displays can complement the inherent capabilities of each device, thus resulting in an enhanced user experience.