St Andrews HCI Research Group

News

Two CHI 2013 workshops


Two SACHI members, Per Ola Kristensson and Aaron Quigley, are organizing workshops with other colleagues at CHI 2013, the ACM SIGCHI Conference on Human Factors in Computing Systems, in Paris in April 2013. The workshops are Blended Interaction: Envisioning Future Collaborative Interactive Spaces and Grand Challenges in Text Entry.
Once the workshop websites are online, we will link to them from here. (Click on the CHI 2013 logo above to visit the main conference website.)

UMUAI special issue on Ubiquitous and Pervasive User Modelling


Aaron Quigley, Judy Kay and Tsvi Kuflik are guest editors for a UMUAI special issue on Ubiquitous and Pervasive User Modelling. You can see the full call for papers for this special issue here.

Miguel Nacenta and Aaron Quigley, Impressions from ITS 2012 with Interesting Research Papers, Videos and Demos from UIST 2012


Speakers: Miguel Nacenta and Aaron Quigley, School of Computer Science, University of St Andrews
Date/Time: 1-2pm, November 20, 2012
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
Miguel Nacenta recently attended ITS 2012. The ACM International Conference on Interactive Tabletops and Surfaces brings together researchers and innovators from a variety of backgrounds including engineering, computer science, design, and the social sciences. Miguel is going to share with us his impressions of the Research Papers, Demos, Tutorials and Workshops he participated in. The ideas and perspectives shared at this year’s ITS include multi-touch and gesture-based interfaces, 3D interaction, interactive surfaces in education and for children, multi-display environments, non-flat surfaces, multi-touch development, sketching user interfaces and high-performance ITS technologies.
Aaron Quigley attended the recent UIST 2012 conference, which allows him to offer insight into the interesting Research Papers, Videos and Demos he enjoyed there. UIST (the ACM Symposium on User Interface Software and Technology) is the premier forum for innovations in the software and technology of human-computer interfaces. UIST brings together researchers and practitioners from diverse areas. Some of the topics we can expect to hear about are traditional graphical & web user interfaces, tangible & ubiquitous computing, virtual & augmented reality, multimedia, new input & output devices, and CSCW.
As both UIST 2013 and ITS 2013 are taking place here in St. Andrews next October, it would be worthwhile attending to get a flavour of what to expect next year.
About Miguel:
Dr. Miguel Nacenta has been a lecturer at the University of St Andrews since May 2011, where he co-founded the SACHI group. Prior to this he was a post-doctoral fellow at the Interactions Lab, University of Calgary, Canada. He holds an electrical engineering degree from the Technical University of Madrid (Ingeniero Superior, UPM), and a doctorate from the University of Saskatchewan, Canada, completed under the supervision of Prof. Carl Gutwin.
About Aaron:
Professor Aaron Quigley is the Chair of Human Computer Interaction in the School of Computer Science at the University of St Andrews and the director of SACHI, the St Andrews Computer Human Interaction research group. His appointment is part of SICSA, the Scottish Informatics and Computer Science Alliance, and since August 2012 he has been the SICSA deputy director for knowledge exchange. He is the general co-chair for UIST 2013 and ITS 2013 (both in St Andrews in October 2013).

Aaron Quigley, Inaugural Lecture on HCI


Today Professor Aaron Quigley will be giving his Inaugural Lecture in School III.
The abstract for his talk is as follows: Billions of people use interconnected computers and have come to rely on the computational power they afford to support their lives and advance our global economy and society. However, how we interact with this computation is often limited to little “windows of interaction” with mobile and desktop devices which aren’t fully suited to their contexts of use. Consider the surgeon operating, the child learning to write or the pedestrian navigating a city, and ask: are the current devices and forms of human-computer interaction as fluent as they might be? I contend there is a division between the physical world in which we live our lives and the digital space where the power of computation currently resides. Many day-to-day tasks, and even forms of work, are poorly supported by access to appropriate digital information. In this talk I will provide an overview of research I’ve been pursuing to bridge this digital-physical divide, along with my future research plans.
This talk will be framed around three interrelated topics: Ubiquitous Computing, Novel Interfaces and Visualisation. Ubiquitous Computing is a model of computing in which computation is everywhere and computer functions are integrated into everything; everyday objects are sites for sensing, input, processing and user output. Novel Interfaces draw the user interface closer to the physical world, in terms of both input to the system and output from the system. Visualisation is the use of computer-supported interactive visual representations of data to amplify cognition. In this talk I will demonstrate that advances in human-computer interaction require insights and research from across the sciences and humanities if we are to bridge this digital-physical divide.

Aaron delivers an invited seminar at the University of Zurich


Professor Quigley is presenting a seminar at the University of Zurich as an invited speaker, hosted by Dr Elaine Huang.
Seminar abstract: Our bodies shape our experience of the world, and our bodies influence what we design. How important are the physical differences between people? Can we model the physiological differences and use the models to adapt and personalize designs, user interfaces and artefacts? Can we model, measure and predict the cost of users altering their gaze in single or multi-display environments? If so, can we personalize interfaces using this knowledge? What about when the user is moving and the distance between user and screen is varying? Can this be considered a new modality and used to personalize interfaces, along with physiological differences and the user’s current gaze? In this talk we seek to answer some of these questions. We introduce an Individual Observer Model of human eyesight, which we use to simulate 3600 biologically valid human eyes. We also report on controlled lab and outdoor experiments with real users, measuring both gaze and distance from the screen in an attempt to quantify the cost of attention switching along with the use of distance as a modality. In each case, for distance, gaze or expected eyesight, we would like to develop models which allow us to make predictions about how easy or hard it is to see visual information and visual designs, and to alter those designs to suit individual users based on their current context.
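To make the idea of distance as a modality concrete, here is a minimal sketch (not from the talk itself) of one common approach: scale on-screen text so that it subtends a roughly constant visual angle as the measured user-to-screen distance changes. The function name, the target angle and the sensor supplying the distance reading are all assumptions for illustration.

import math

def text_height_px(distance_mm, visual_angle_deg=0.4, screen_dpi=96.0):
    """On-screen text height (in pixels) needed for glyphs to subtend a
    fixed visual angle at the given viewing distance.

    distance_mm      -- measured user-to-screen distance (e.g. from a depth camera)
    visual_angle_deg -- target visual angle; 0.4 degrees is an illustrative value
    screen_dpi       -- display density used to convert millimetres to pixels
    """
    # Physical height that subtends the angle: h = 2 * d * tan(theta / 2)
    height_mm = 2.0 * distance_mm * math.tan(math.radians(visual_angle_deg) / 2.0)
    return height_mm * screen_dpi / 25.4  # 25.4 mm per inch

if __name__ == "__main__":
    # As the user steps back, text grows to remain roughly equally legible.
    for d in (400, 700, 1500, 3000):  # millimetres
        print(f"{d:5d} mm -> {text_height_px(d):5.1f} px")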

Interactive Technologies for Libraries and Museums


Three presentations on the state of the art
Open to the public
Tue Oct 16 – 1:30pm to 2:30pm
School VI Lecture Theatre, St Salvator’s Quad
Visit http://sachi.cs.st-andrews.ac.uk/activities/workshops/interactive-technologies-for-libraries-and-museums/ for more details.

Today's slashdot.org logo created by SACHI PhD student


Today’s logo at slashdot.org was created by Jason Jacques, a new PhD student in the SACHI group!
From slashdot.org: Artist Jason Jacques says: “While the main text itself is ‘obvious’ in its fully animated form, this logo provides an additional challenge in that the remainder of the message must be decoded. Can you figure it out? If so, mail your answer to logo15@slashdot.org.” How did he do it? After calculating the necessary sizes and bit patterns on paper, the static image of the entire message was generated using Pixelmator on Mac OS X (Lion). This image was then processed into frames using ImageMagick (and a short shell script) on Ubuntu. Additional editing was done to the logo portion in Pixelmator (OS X). These frames were then assembled into an animated gif using Jasc Animation Shop on Windows XP. Finally, the images were optimised to minimise their size using ImageOptim, back on OS X.
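For readers curious about the frame-assembly step, here is a minimal sketch of how a stack of pre-rendered frames can be combined into an animated GIF. This is not the pipeline Jason used (he used ImageMagick and Jasc Animation Shop); it is an illustrative Python alternative using the Pillow library, and the directory name, file pattern and frame duration are assumptions.

from pathlib import Path
from PIL import Image  # Pillow

# Hypothetical pre-rendered frames: frames/frame_00.png, frames/frame_01.png, ...
frames = [Image.open(p) for p in sorted(Path("frames").glob("frame_*.png"))]

# The first frame is saved; the remaining frames are appended to it.
frames[0].save(
    "logo.gif",
    save_all=True,            # write a multi-frame file
    append_images=frames[1:],
    duration=120,             # milliseconds per frame (assumed)
    loop=0,                   # 0 = loop forever
)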

St Andrews Algorithmic Programming Competition


When: Wednesday 12th of September, 9:30am – 5pm (with a 1-hour break for lunch)
Where: Sub-honours lab in the Jack Cole building (0.35)

As part of this competition, you may be offered an opportunity to participate in a Human-Computer Interaction study on subtle interaction. Participation in this study is completely voluntary.
There will be two competitive categories:
HCI study participants:
1st prize: 7” Samsung Galaxy Tab 2
2nd prize: £50 Amazon voucher
3rd prize: £20 Amazon voucher
Everyone:
1st prize: £50 Amazon voucher
2nd prize: £20 Amazon voucher
3rd prize: £10 Amazon voucher
We will try to include as many programming languages as is reasonable, so if you have any special requests, let us know.
If you have one, bring a laptop in case we run out of lab computers!
If you have any questions, please email Jakub at jd67@st-andrews.ac.uk.

Welcome to Uta Hinrichs


Welcome to Uta Hinrichs, who has joined the SACHI group from the University of Calgary, Canada, as a Research Fellow. Uta holds a Diplom (equivalent to an MSc) in Computational Visualistics from the University of Magdeburg in Germany and is in the process of finishing her PhD in Computer Science with a specialization in Computational Media Design. Uta’s PhD research, which she conducted at the InnoVis Group at the University of Calgary, focuses on how to support open-ended information exploration on large displays in public exhibition spaces, combining information visualization with direct-touch interaction techniques. As part of this research, she has designed and studied large display installations in a museum, an art gallery, a library and an aquarium.
To learn more about Uta’s work, see her SACHI biography page or visit her own website for an overview of her previous research projects. Everyone in SACHI welcomes Uta!

Laurel Riek, Facing Healthcare's Future: Designing Facial Expressivity for Robotic Patient Mannequins


Speaker: Laurel Riek, Computer Science and Engineering, University of Notre Dame
Date/Time: 1-2pm, September 4, 2012
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
In the United States, an estimated 98,000 people are killed and $17.1 billion is lost each year due to medical errors. One way to prevent these errors is to have clinical students engage in simulation-based medical education, to help move the learning curve away from the patient. This training often takes place on human-sized android robots, called high-fidelity patient simulators (HFPS), which are capable of conveying human-like physiological cues (e.g., respiration, heart rate). Training with them can include anything from diagnostic skills (e.g., recognizing sepsis, a failure that recently killed 12-year-old Rory Staunton) to procedural skills (e.g., IV insertion) to communication skills (e.g., breaking bad news). HFPS systems allow students a chance to safely make mistakes within a simulation context without harming real patients, with the goal that these skills will ultimately be transferable to real patients.
While simulator use is a step in the right direction toward safer healthcare, one major challenge and critical technology gap is that none of the commercially available HFPS systems exhibit facial expressions, gaze, or realistic mouth movements, despite the vital importance of these cues in helping providers assess and treat patients. This is a critical omission, because almost all areas of health care involve face-to-face interaction, and there is overwhelming evidence that providers who are skilled at decoding communication cues are better healthcare providers – they have improved outcomes, higher compliance, greater safety, higher satisfaction, and they experience fewer malpractice lawsuits. In fact, communication errors are the leading cause of avoidable patient harm in the US: they are the root cause of 70% of sentinel events, 75% of which lead to a patient dying.
In the Robotics, Health, and Communication (RHC) Lab at the University of Notre Dame, we are addressing this problem by leveraging our expertise in android robotics and social signal processing to design and build a new, facially expressive, interactive HFPS system. In this talk, I will discuss our efforts to date, including: in situ observational studies exploring how individuals, teams, and operators interact with existing HFPS technology; design-focused interviews with simulation center directors and educators in which future HFPS systems are envisioned; and initial software prototyping efforts incorporating novel facial expression synthesis techniques.
About Laurel:
Dr. Laurel Riek is the Clare Boothe Luce Assistant Professor of Computer Science and Engineering at the University of Notre Dame. She directs the RHC Lab, and leads research on human-robot interaction, social signal processing, facial expression synthesis, and clinical communication. She received her PhD at the University of Cambridge Computer Laboratory, and prior to that worked for eight years as a Senior Artificial Intelligence Engineer and Roboticist at MITRE.