St Andrews HCI Research Group

News

Jacob Eisenstein, Interactive Topic Visualization for Exploratory Text Analysis


Speaker: Jacob Eisenstein, Georgia Institute of Technology
Date/Time: 1-2pm July 23, 2013
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
Large text document collections are increasingly important in a variety of domains; examples of such collections include news articles, streaming social media, scientific research papers, and digitized literary documents. Existing methods for searching and exploring these collections focus on surface-level matches to user queries, ignoring higher-level thematic structure. Probabilistic topic models are a machine learning technique for finding themes that recur across a corpus, but there has been little work on how they can support end users in exploratory analysis. In this talk I will survey the topic modeling literature and describe our ongoing work on using topic models to support digital humanities research. In the second half of the talk, I will describe TopicViz, an interactive environment that combines traditional search and citation-graph exploration with a dust-and-magnet layout that links documents to the latent themes discovered by the topic model.
This work is in collaboration with:
Polo Chau, Jaegul Choo, Niki Kittur, Chang-Hyun Lee, Lauren Klein, Jarek Rossignac, Haesun Park, Eric P. Xing, and Tina Zhou
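As a rough illustration of the pipeline the abstract describes, the sketch below fits a topic model to a toy corpus and then positions each document by its topic mixture, a barycentric simplification of the dust-and-magnet idea. It uses scikit-learn's LatentDirichletAllocation; the corpus, the magnet placement, and all parameter choices are illustrative assumptions, not details of TopicViz itself.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus standing in for a real document collection (assumption).
docs = [
    "topic models find latent themes in text corpora",
    "social media streams contain noisy informal text",
    "citation graphs link research papers to each other",
    "latent themes summarise large text collections",
    "graph layouts help users explore paper citations",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topic = lda.fit_transform(X)   # each row: one document's topic mixture

# Dust-and-magnet, simplified: place one "magnet" per topic on a circle
# and pull each document toward the magnets in proportion to its weights.
angles = np.linspace(0.0, 2 * np.pi, 3, endpoint=False)
magnets = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (3, 2)
positions = doc_topic @ magnets    # 2D layout coordinate per document

terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")
print(positions.round(2))
```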

Bio:
Jacob Eisenstein is an Assistant Professor in the School of Interactive Computing at Georgia Tech. He works on statistical natural language processing, focusing on social media analysis, discourse, and latent variable models. Jacob was a postdoctoral researcher at Carnegie Mellon and the University of Illinois. He completed his Ph.D. at MIT in 2008, winning the George M. Sprowls dissertation award.

Olivier Penacchio, A neurodynamical model of luminance perception


Speaker: Olivier Penacchio, University of St Andrews
Date/Time: 1-2pm June 11, 2013
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
The perception of such basic visual features as black, grey, and white may sound simple. However, the luminance we perceive, also called brightness, does not match the luminance as physically measured. Instead, the perceived intensity of an area is modulated by the luminance of surrounding areas. This phenomenon, known as brightness induction, is a striking demonstration that visual perception cannot be considered a simple pixel-wise sampling of the environment.
The talk will start with an overview of the classical examples of brightness induction and a quick look at the different theories underlying this phenomenon. We will next introduce a neurodynamical model of brightness induction, recently published*. This model is based on the architecture of the primary visual cortex and successfully accounts for well-known psychophysical effects both in static and dynamic contexts. It suggests that a common simple mechanism may underlie different fundamental processes of visual perception such as saliency and brightness perception. Finally, we will briefly outline potential applications in the arena of computer vision and medical imaging.
* Penacchio O, Otazu X, Dempere-Marco L (2013) A Neurodynamical Model of Brightness Induction in V1. PLoS ONE 8(5): e64086. doi:10.1371/journal.pone.0064086
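The published model is a full dynamical simulation of V1 and well beyond a snippet, but the phenomenon it explains is easy to demonstrate with the classic centre-surround (difference-of-Gaussians) account. The sketch below is that textbook account only, not Penacchio et al.'s model; the stimulus layout and filter scales are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Simultaneous brightness contrast stimulus: two physically identical
# grey patches, one on a dark surround and one on a light surround.
img = np.full((100, 200), 0.2)   # dark surround on the left
img[:, 100:] = 0.8               # light surround on the right
img[40:60, 40:60] = 0.5          # grey patch, dark surround
img[40:60, 140:160] = 0.5        # identical grey patch, light surround

# Centre-surround response: narrow excitatory centre minus a broader
# inhibitory surround (difference of Gaussians).
dog = gaussian_filter(img, sigma=2) - gaussian_filter(img, sigma=6)

# Same physical luminance, different response: the patch on the dark
# surround responds positively (appears brighter), the one on the light
# surround negatively -- the signature of brightness induction.
print(dog[50, 50], dog[50, 150])
```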
Bio:
Olivier Penacchio is a postdoctoral researcher in the School of Psychology and Neuroscience at the University of St Andrews. He is currently working on the perception and evolution of counter-shading camouflage. His background is in pure mathematics (algebraic geometry), and he is a recent convert to vision research.

Kristian Wasen, The Visual Touch Regime: Real-Time 3D Image-Guided Robotic Surgery and 4D and “5D” Scientific Illustration at Work


Speaker: Dr Kristian Wasen, University of Gothenburg
Date/Time: 12-1pm Wed May 22, 2013
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
Dr Kristian Wasen will present his recently published paper, co-authored with Meaghan Brierly. Emerging multidimensional imaging technologies (3D/4D/“5D”) open new ground for exploring visual worlds and rendering new image-based knowledge, especially in areas related to medicine and science. 3D imaging is defined as three visual dimensions. 4D imaging is three visual dimensions plus time. 4D imaging can also be combined with functional transitions (e.g., following a radioactive tracer isotope through the body in positron emission tomography). 4D imaging plus functionality is defined as “5D” imaging. We propose the idea of “visual touch”, a conceptual middle ground between touch and vision, as a basis for future explorations of contemporary institutional standards of image-based work. “Visual touch” is both the process of reconciling the senses (human and artificial) and the end result of this union of the senses. We conclude that while new multi-dimensional imaging technology emphasises vision, new forms of image-based work using visual materials cannot solely be classified as “visual”.
Bio:
Dr Kristian Wasen is a postdoctoral fellow at the University of Gothenburg in Sweden. He holds two bachelor's degrees, in Social Psychology and Industrial Management, from the University of Skövde, Sweden, and earned a master's degree and a PhD in Business Administration from the School of Business, Economics and Law in Gothenburg. His current research interests include health care robotics, technology's social and professional impact, organisational design, usability evaluation, user acceptance, and efficient human-robot interaction.

Vinodh Rajan, Modeling & Analyzing the Development of Scripts


Speaker: Vinodh Rajan, University of St Andrews
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
Human handwriting is a process that often generates variable output. Scripts generally begin with characters of consistent shape, but the effects of variation tend to accumulate and modify a script's appearance over time. The talk will start with a brief overview of scripts and related concepts. It will then turn to the Brahmic family of scripts and, in particular, the variations that led to their development. This will be followed by a general introduction to handwriting modeling, along with techniques such as trajectory reconstruction, stroke segmentation and stroke modeling. There will then be a discussion of methods and techniques to model and analyze the development of scripts, with prospective applications, and lastly a demo of what has been achieved so far.
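Stroke segmentation, one of the techniques mentioned, is commonly performed by cutting the pen trajectory at local minima of its speed profile. The sketch below implements that standard heuristic on a synthetic trajectory; the threshold, the trajectory, and the function name are illustrative assumptions, not the talk's actual method.

```python
import numpy as np

def segment_strokes(x, y, t, frac=0.15):
    """Split a pen trajectory into strokes at low-speed points.

    Speed minima below `frac` of the maximum speed are treated as
    stroke boundaries -- a common heuristic in handwriting modeling.
    """
    vx = np.gradient(x, t)               # velocity from sampled positions
    vy = np.gradient(y, t)
    speed = np.hypot(vx, vy)
    thresh = frac * speed.max()
    # interior local minima of the speed profile, below the threshold
    minima = [i for i in range(1, len(speed) - 1)
              if speed[i] < speed[i - 1] and speed[i] <= speed[i + 1]
              and speed[i] < thresh]
    bounds = [0] + minima + [len(speed)]
    return list(zip(bounds[:-1], bounds[1:]))   # (start, end) index pairs

# Synthetic pen input: oscillating motion whose speed dips at each turn.
t = np.linspace(0.0, 2.0, 200)
x = np.sin(3 * np.pi * t)
y = t
print(segment_strokes(x, y, t))
```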
Bio:
Vinodh Rajan is a PhD student in the School of Computing here at the University of St Andrews.

Kyle Montague, The SUM Framework: An Exploration of Shared User Models and Adaptive Interfaces to Improve Accessibility of Touchscreen Interactions


Speaker: Kyle Montague, University of Dundee
Date/Time: 1-2pm May 14, 2013
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
Touchscreens are ever-present in today's technologies. These large featureless sensors are rapidly replacing physical keys and buttons on a wide array of digital devices, most commonly mobile devices. Popular across all demographics and endorsed for their superior soft-interface design flexibility and rich gestural interactions, touchscreens currently play a pivotal role in digital technologies. However, just as touchscreens have enabled many to engage with digital technologies, their barriers to access exclude many others with visual and motor impairments. Contemporary techniques for addressing these accessibility issues fail to consider the variable nature of abilities between people, and the ever-changing characteristics of an individual's impairment. User models for personalisation are often constructed from stereotypical generalisations about people with disabilities, neglecting the unique characteristics of the individuals themselves. Existing strategies for measuring abilities and performance require users to complete exhaustive training exercises that disrupt the intended interactions, and result in descriptions of a user's performance for that particular instance only.
This research aimed to develop novel techniques to support the continuous measurement of individual users' needs and abilities through natural touchscreen device interactions. The goal was to create detailed interaction models for individual users, in order to understand the short- and long-term variance of their abilities and characteristics, resulting in interface adaptations that better support the interaction needs of people with visual and motor impairments.
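The SUM framework itself is not described here in reproducible detail, but its underlying idea, continuously updating a per-user model from natural interactions rather than one-off calibration, can be sketched. Below, a hypothetical running model tracks a user's touch offsets with an exponentially weighted moving average and feeds the spread back into adaptive target sizing; all names and parameters are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class TouchModel:
    """Running model of a user's touch accuracy, updated on every tap."""
    alpha: float = 0.1   # learning rate: how strongly recent taps dominate
    dx: float = 0.0      # mean horizontal offset from the target (px)
    dy: float = 0.0      # mean vertical offset from the target (px)
    spread: float = 0.0  # mean offset magnitude (px)

    def observe(self, target_xy, touch_xy):
        # Update the exponentially weighted averages from one natural tap.
        ex = touch_xy[0] - target_xy[0]
        ey = touch_xy[1] - target_xy[1]
        self.dx += self.alpha * (ex - self.dx)
        self.dy += self.alpha * (ey - self.dy)
        err = (ex ** 2 + ey ** 2) ** 0.5
        self.spread += self.alpha * (err - self.spread)

    def min_target_size(self, base=40.0):
        # Adaptive interface hook: enlarge targets for users whose
        # touches scatter widely, shrink back as accuracy improves.
        return base + 2.0 * self.spread

model = TouchModel()
model.observe((100, 100), (108, 95))   # tap lands right of and above target
print(model.dx, model.dy, model.min_target_size())
```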
Bio:
Kyle Montague is a PhD student based within the School of Computing at the University of Dundee. Kyle works as part of the Social Inclusion through the Digital Economy (SiDE) research hub. He is investigating the application of shared user models and adaptive interfaces to improve the accessibility of digital touchscreen technologies for people with vision and motor impairments.
His doctoral studies explore novel methods of collecting and aggregating user interaction data from multiple applications and devices, creating domain-independent user models to better inform adaptive systems of individuals' needs and abilities.
Alongside his research, Kyle works for iGiveADamn, a small digital design company he set up with fellow graduate Jamie Shek. Past projects have included the iGiveADamn Connect platform for charities, Scottish Universities Sports, and the Blood Donation mobile apps. Prior to this he completed an undergraduate degree in Applied Computing at the University of Dundee.

Patrick Olivier, Digital tabletops: in the lab and in the wild


Speaker: Patrick Olivier, Culture Lab, Newcastle University
Date/Time: 1-2pm May 7, 2013
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
The purpose of this talk will be to introduce Culture Lab’s past and current interaction design research into digital tabletops. The talk will span our interaction techniques and technologies research (including pen-based interaction, authentication and actuated tangibles) but also application domains (education, play therapy and creative practice) by reference to four Culture Lab tabletop studies: (1) Digital Mysteries (Ahmed Kharrufa’s classroom-based higher order thinking skills application); (2) Waves (Jon Hook’s expressive performance environment for VJs); (3) Magic Land (Olga Pykhtina’s tabletop play therapy tool); and (4) StoryCrate (Tom Bartindale’s collaborative TV production tool). I’ll focus on a number of specific challenges for digital tabletop research, including selection of appropriate design approaches, the role and character of evaluation, the importance of appropriate “in the wild” settings, and avoiding the trap of simple remediation when working in multidisciplinary teams.
Bio:
Patrick Olivier is a Professor of Human-Computer Interaction in the School of Computing Science at Newcastle University. He leads the Digital Interaction Group in Culture Lab, Newcastle's centre for interdisciplinary practice-based research in digital technologies. The group's main interest is interaction design for everyday life settings, and Patrick is particularly interested in the application of pervasive computing to education, creative practice, and health and wellbeing, as well as the development of new technologies for interaction (such as novel sensing platforms and interaction techniques).

Pre-CHI day in St Andrews sponsored by SICSA


This year from across SICSA we have at least 16 notes, papers, and TOCHI papers being presented at CHI in Paris, along with numerous WIPs, workshop papers, SIGs, etc. On April 23rd we hosted 35+ people from across SICSA for a Pre-CHI day, which gave all presenters a final dry run of their talks with feedback. It was also an opportunity to inform others across SICSA about this work, while giving everyone a snapshot of HCI research in Scotland.
Pre-CHI Day – April 23, 2013: 10am – 4:30pm – Location: Medical and Biological Sciences Building, Seminar Room 1
9:30 – 10:00 Coffee/Tea
10:00 – 10:25 Memorability of Pre-designed and User-defined Gesture Sets M. Nacenta, Y. Kamber, Y. Qiang, P.O. Kristensson (Univ. of St Andrews, UK) Paper
10:25 – 10:50 Supporting Personal Narrative for Children with Complex Communication Needs R. Black (Univ. of Dundee, UK), A. Waller (Univ. of Dundee, UK) R. Turner (Data2Text, UK), E. Reiter (Univ. of Aberdeen, UK) TOCHI paper
10:50 – 11:05 Coffee Break
11:05 – 11:30 Use of an Agile Bridge in the Development of Assistive Technology S. Prior (Univ. of Abertay Dundee, UK), A. Waller (Univ. of Dundee, UK), T. Kroll (Univ. of Dundee, UK), R. Black (Univ. of Dundee, UK) Paper
11:30 – 11:45 Multiple Notification Modalities and Older Users D. Warnock, S. Brewster, M. McGee-Lennon (Univ. of Glasgow, UK) Note
11:45 – 11:55 Visual Focus-Aware Applications and Services in Multi-Display Environments J. Dostal, P.O. Kristensson, A. Quigley (Univ. of St Andrews) Workshop paper (Workshop on Gaze Interaction in the Post-WIMP World)
12:00 – 1:00 Lunch Break & Poster Presentations
1:00 – 1:25 ‘Digital Motherhood’: How does technology help new mothers? L. Gibson and V. Hanson (Univ. of Dundee, UK) Paper
1:25 – 1:40 Combining Touch and Gaze for Distant Selection in a Tabletop Setting M. Mauderer (Univ. of St Andrews), F. Daiber (German Research Centre for Artificial Intelligence – DFKI), A. Krüger (DFKI) Workshop paper (Workshop on Gaze Interaction in the Post-WIMP World)
1:40 – 2:05 Focused and Casual Interactions: Allowing Users to Vary Their Level of Engagement H. Pohl (Univ. of Hanover, DE) and R. Murray-Smith (Univ. of Glasgow, UK) Paper
2:05 – 2:20 Seizure Frequency Analysis Mobile Application: The Participatory Design of an Interface with and for Caregivers Heather R. Ellis (Univ. of Dundee) Student Research Competition
2:20 – 2:40 Coffee Break
2:40 – 3:05 Exploring & Designing Tools to Enhance Falls Rehabilitation in the Home S. Uzor and L. Baillie (Glasgow Caledonian Univ., UK) Paper
3:05 – 3:30 Understanding Exergame Users’ Physical Activity, Motivation and Behavior Over Time A. Macvean and J. Robertson (Heriot-Watt Univ., UK) Paper
3:30 – 3:45 Developing Efficient Text Entry Methods for the Sinhalese Language S. Reyal (Univ. of St Andrews), K. Vertanen (Montana Tech), P.O. Kristensson (Univ. of St Andrews) Workshop paper (Grand Challenges in Text Entry)
3:45 – 4:10 The Emotional Wellbeing of Researchers: Considerations for Practice W. Moncur (Univ. of Dundee, UK) Paper
4:10 – 4:30 Closing remarks and (optional) pub outing.

Maria Wolters, Reminding and Remembering – What do we do with Little Miss Scatterbrain?


Speaker: Maria Wolters, University of Edinburgh
Date/Time: 1-2pm April 2, 2013
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
In this talk, I will give an overview of recent work on reminding and remembering that I have been involved in. I will argue two main points.
– Reminding in telehealthcare is not about putting an intervention in place that enforces 100% adherence to the protocol set for the patient by their wise clinicians. Instead, we need to work with users to select cues that will help them remember and that are solidly anchored in their conceptualisation of their own health and abilities, their life, and their home.
– When tracking a person’s mental health, the stigma of being monitored can outweigh the benefits of monitoring. We don’t remember everything perfectly – if we did, that would be pathological. But this is a problem when we’re asked to report our own feelings, activity levels, sleeping patterns, etc. over a period of several days or weeks, which is important for identifying mental health problems. Is intensive monitoring the solution? Only if it is unobtrusive and non-stigmatising.
I will conclude with a short discussion of the EU project Forget-IT that started in February 2013 and looks at contextualised remembering and intelligent preservation of individual data (such as a record of trips a person made or photos they’ve taken) and organisational data (such as web sites).
Bio:
Maria Wolters is a Research Fellow at the University of Edinburgh who works on the cognitive and perceptual foundations of human-computer interaction. She specialises in dialogue and auditory interfaces; her main application areas are eHealth, telehealthcare, and personal digital archiving. Maria is the scientific coordinator of the EU FP7 STREP Help4Mood, which supports the treatment of people with depression in the community, and a researcher on the EU FP7 IP Forget-IT, which looks at sustainable digital archiving. She previously worked on the EPSRC-funded MultiMemoHome project, which finished in February 2013.

"Cognitive Computing: Watson's path from Jeopardy to real-world Big (and dirty) Data."


On July 8th, Rónan McAteer from the Watson Solutions Development Software Group at IBM Ireland will give a talk as part of the Big Data Information Visualisation Summer School here in St Andrews. This talk is entitled “Cognitive Computing: Watson’s path from Jeopardy to real-world Big (and dirty) Data.”
While this talk is part of the summer school, we are trying to host it in a venue in central St Andrews during the evening of July 8th so that people from across St Andrews and SICSA can attend if they wish.
The Abstract for Rónan’s talk is below:
Building on the success of the Jeopardy Challenge in 2011, IBM is now preparing Watson for use in commercial applications. At first glance, the original challenge appears to present an open-domain question answering problem. However, moving from the regular, grammatical, well-formed nature of gameshow questions to the malformed and error-strewn data that exists in the real world is very much a new and complex challenge for IBM. Unstructured and noisy data in the form of natural language text, coming from sources such as instant messaging, recorded conversations, automatically digitised text (OCR), and human shorthand notes (replete with their writers' individual prose styles and typos), must all be processed in a matter of seconds to find the proverbial ‘needle in the haystack’.
In this talk we’ll take a look at how Watson is rising to meet these new challenges. We’ll go under the hood to take a look at how the system has changed since the days of Jeopardy, using state-of-the-art IBM hardware and software, to significantly reduce the cost while increasing the capability.

Jakub Dostal, Subtle Gaze-Dependent Techniques for Visualising Display Changes in Multi-Display Environments


Speaker: Jakub Dostal, SACHI, School of Computer Science, University of St Andrews
Date/Time: 1-2pm March 5, 2013
Location: 1.33a Jack Cole, University of St Andrews
Abstract:
Modern computer workstation setups regularly include multiple displays in various configurations. With such multi-monitor or multi-display setups we have reached a stage where we have more display real estate available than we can comfortably attend to. This talk will present the results of an exploration of techniques for visualising display changes in multi-display environments. Apart from four subtle gaze-dependent techniques for visualising change on unattended displays, it will cover the technology used to enable quick and cost-effective deployment to workstations. An evaluation of both the technology and the techniques themselves will also be presented. The talk will conclude with a brief discussion of the challenges in evaluating subtle interaction techniques.
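The four techniques themselves are not detailed in the abstract, but the plumbing they share can be sketched: knowing which display the user is attending to, and deferring changes on the other displays so they can later be revealed subtly. The class below is a hypothetical minimal version of that bookkeeping, not any of the techniques from the talk; all names are assumptions.

```python
class UnattendedChangeBuffer:
    """Defer UI changes on displays the user is not looking at.

    Changes to unattended displays are queued; when gaze returns to a
    display, its pending changes are released so the UI can reveal them
    with a subtle transition instead of letting them appear unseen.
    """

    def __init__(self, displays):
        self.pending = {d: [] for d in displays}
        self.attended = None

    def on_gaze(self, display):
        # Gaze moved: release any changes that accumulated on this display.
        self.attended = display
        released, self.pending[display] = self.pending[display], []
        return released            # changes to animate in subtly

    def on_change(self, display, change):
        if display == self.attended:
            return [change]        # attended display: apply immediately
        self.pending[display].append(change)
        return []                  # held until gaze returns

buf = UnattendedChangeBuffer(["left", "centre", "right"])
buf.on_gaze("centre")
buf.on_change("left", "new email badge")   # held: display is unattended
print(buf.on_gaze("left"))                 # released for a subtle reveal
```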
About Jakub:
Jakub’s bio on the SACHI website.