St Andrews HCI Research Group

Welcome to the website for SACHI, which aims to act as a focal point for human-computer interaction research across the University of St Andrews and beyond.

SACHI is the St Andrews Computer Human Interaction research group (an HCI group) based in the School of Computer Science. Members of SACHI co-supervise research students, collaborate on various projects and activities, and share access to research equipment and our HCI laboratory. Established in 2011, we now run a regular seminar series, social activities and summer schools, and organise workshops and conferences together. Along with the above links, you can find more news about us here and our new YouTube video channel here.

News & Events

Welcome Chris Norval


This week sees a new member of the group join SACHI.  Chris Norval is a postdoctoral researcher working with Tristan Henderson on a project to predict when social media users consent to having their data used for health research.

Chris completed his PhD in Human Computer Interaction at the University of Dundee in 2014. His thesis explored reasons why older adults were less likely to use social networking sites than younger adults. Issues such as privacy and security concerns, a lack of skill, and negative preconceptions were identified from user studies, and recommendations were derived for the designers of such sites to mitigate or avoid these issues. These recommendations were then verified in a user study that compared a prototype replica of a mainstream social networking site with a version that applied the recommendations.

After finishing his PhD, Chris worked as a data analyst in the games industry for two years before joining SACHI. Learn more about Chris here.

Professor Roderick Murray-Smith: University of Glasgow

Event details

When: 18th November 2016 13:00 - 14:00
Where: Cole 1.33b
Speaker: Roderick Murray-Smith

Seminar placeholder. Details to be confirmed.

Dr Rebecca Fiebrink: Goldsmiths, University of London

Event details

When: 1st November 2016 14:00 - 15:00
Where: Cole 1.33b
Speaker: Rebecca Fiebrink


Title: Designing Real-time Interactions Using Machine Learning

Abstract: Supervised learning algorithms can be understood not only as a set of techniques for building accurate models of data, but also as design tools that can enable rapid prototyping, iterative refinement, and embodied engagement: all activities that are crucial in the design of new musical instruments and other embodied interactions. Realising the creative potential of these algorithms requires a rethinking of the interfaces through which people provide data and build models, providing for tight interaction-feedback loops and efficient mechanisms for people to steer and explore algorithm behaviours.

In this talk, I will discuss my research on better enabling composers, musicians, and developers to employ supervised learning in the design of new real-time systems. I will show a live demo of tools that I have created for this purpose, centering around the Wekinator software toolkit for interactive machine learning. I'll discuss some of the outcomes from seven years of creating machine learning-based tools and observing people using these tools in creative contexts. These outcomes include a better understanding of how machine learning can be used as a tool for design by end users and developers, and of how using machine learning as a design tool differs from more conventional application contexts.

Biography: Dr Rebecca Fiebrink is a Lecturer at Goldsmiths, University of London. Her research focuses on designing new ways for humans to interact with computers in creative practice, including the use of machine learning as a creative tool. Fiebrink is the developer of the Wekinator system for real-time interactive machine learning (with a new version just released in 2015!), a co-creator of the Digital Fauvel platform for interactive musicology, and a Co-I on the £1.6M Horizon 2020-funded RAPID-MIX project on Real-time Adaptive Prototyping for Industrial Design of Multimodal Expressive Technology. She is the creator of a MOOC titled "Machine Learning for Artists and Musicians," which launched in 2016 on the Kadenze platform. She was previously an Assistant Professor at Princeton University, where she co-directed the Princeton Laptop Orchestra. She has worked with companies including Microsoft Research, Sun Microsystems Research Labs, Imagine Research, and Smule, where she helped to build the #1 iTunes app "I am T-Pain." She holds a PhD in Computer Science from Princeton University.