Title: The Collaborative Design of Tangible Interactions in Museums
Abstract: Interactive technology for cultural heritage has long been a subject of study in Human-Computer Interaction. However, findings from a number of studies suggest that technology can sometimes distance visitors from heritage holdings rather than enabling people to establish deeper connections to what they see. Furthermore, the introduction of innovative interactive installations in museums is often seen as an interesting novelty but seldom leads to substantive change in how a museum approaches visitor engagement. This talk will discuss work on the EU project “meSch” (Material EncounterS with Digital Cultural Heritage), aimed at creating a do-it-yourself platform for cultural heritage professionals to design interactive tangible computing installations that bridge the gap between digital content and the materiality of museum objects and exhibits. The project has adopted a collaborative design approach throughout, involving cultural heritage professionals, designers, developers and social scientists. The talk will feature key examples of how collaboration unfolded and relevant lessons learned, particularly regarding the shared envisioning of tangible interaction concepts at a variety of heritage sites, including archaeology and art museums, hands-on exploration centres and outdoor historical sites.
Biography: Dr. Luigina Ciolfi is Reader in Communication at Sheffield Hallam University. She holds a Laurea (University of Siena, Italy) and a PhD (University of Limerick, Ireland) in Human-Computer Interaction. Her research focuses on understanding and designing for human situated practices mediated by technology in both work and leisure settings, with a particular focus on participation and collaboration in design. She has worked on numerous international research projects on heritage technologies, nomadic work and interaction in public spaces. She is the author of over 80 peer-reviewed publications, has been an invited speaker in ten countries, and has advised several European countries on research policy around digital technologies and cultural heritage. Dr. Ciolfi serves on a number of scientific committees for international conferences and journals, including ACM CHI, ACM CSCW, ACM GROUP, ECSCW, COOP and the CSCW Journal. She is a member of the steering groups of EUSSET (the European Society for Socially Embedded Technologies) and of ACM CSCW, and is a senior member of the ACM. Full information on her work can be found at http://luiginaciolfi.com
News
A display of life-saving medical technology by University of St Andrews researchers stole the show at the annual Universities Scotland reception for MSPs, held in the Garden Lobby of the Scottish Parliament at Holyrood last week.
Dr David Harris-Birtill, founder of Beyond Medics, and David Morrison had a steady stream of politicians eager to try out a working prototype of their ground-breaking Automated Remote Pulse Oximetry system, which automatically displays an individual’s vital signs (heart rate and blood oxygenation level) through a remote camera, without the need for clips and wires.
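The underlying principle, remote photoplethysmography, can be sketched in a few lines: skin pixels in a video brighten and darken very slightly with each heartbeat, so the dominant frequency of the mean pixel intensity over time gives the pulse. The toy example below uses a synthetic signal; it is only an illustration of the idea, not the St Andrews system, which is far more sophisticated and also estimates blood oxygenation.

```python
# Minimal sketch of camera-based heart-rate estimation (remote
# photoplethysmography). All values here are synthetic placeholders.
import numpy as np

fps = 30.0                        # assumed camera frame rate
t = np.arange(0, 10, 1 / fps)     # 10 seconds of frames
true_bpm = 72.0

# Mean skin-pixel intensity per frame: a faint pulse wave plus noise.
signal = 0.02 * np.sin(2 * np.pi * (true_bpm / 60) * t)
signal += np.random.default_rng(0).normal(0, 0.01, t.size)

# Find the dominant frequency within a plausible heart-rate band.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fps)
band = (freqs > 0.7) & (freqs < 4.0)       # roughly 42-240 bpm
est_bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(f"estimated heart rate: {est_bpm:.1f} bpm")
```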

Dr David Harris-Birtill (left) and David Morrison (centre) with Edinburgh South MSP Daniel Johnson (right), a St Andrews alumnus.
Congratulations to Hui-Shyong Yeo, Aaron Quigley and colleagues, who won a Best Paper Honorable Mention award for the paper WatchMI at MobileHCI 2016.
Yeo also attended the Doctoral Consortium and demonstrated WatchMI during the demo session.
The “WatchMI: pressure touch, twist and pan gesture input on unmodified smartwatches” paper, which appears in the Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’16), can be accessed via:
- Directly to the ACM Digital Library page for WatchMI
- The ACM SIGCHI OpenTOC page for MobileHCI 2016 (search for WatchMI), free until Sep 2017
The Doctoral Consortium and Demo papers can be accessed via:
- Directly to the ACM Digital Library page for Single-handed interaction for mobile and wearable computing
- Directly to the ACM Digital Library page for WatchMI’s applications
- The ACM SIGCHI OpenTOC page for MobileHCI 2016 (search for WatchMI), free until Sep 2017
Press
- Gizmodo: Students Hacked a Chip to Give Your Smartphone a Sense of Touch
- Engadget: Google’s mini radar can identify virtually any object
- TheVerge: Google’s miniature radars can now identify objects
- Fast Co Design: Google’s Project Soli Can Now Identify Any Object
Curated video by Futurism, with more than 1.2 million views!
Congratulations to Hui-Shyong Yeo, Aaron Quigley and colleagues, who won the Best Poster award at UIST 2016.
The Sidetap and Slingshot paper, which appears in the Adjunct Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST ’16), can be accessed via:
- The ACM SIGCHI OpenTOC page for the UIST 2016 Adjunct Proceedings (search for SideTap), free until Oct 2017
- Directly to the ACM Digital Library page for Sidetap and Slingshot
- Or via the University of St Andrews Research portal.
RadarCat (Radar Categorization for Input & Interaction) was presented at UIST 2016 this week in Tokyo, Japan. RadarCat is a small, versatile radar-based system for material and object classification which enables new forms of everyday proximate interaction with digital devices.
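As a rough illustration of the approach, the sketch below shows the general recipe for this kind of system: represent each radar frame as a feature vector and train a standard classifier over labelled materials. The features and data here are synthetic placeholders, not the signals or feature set of the actual system, which works on Google’s Soli radar sensor.

```python
# Illustrative sketch of a radar-based material classifier.
# Synthetic stand-in features; not RadarCat's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
MATERIALS = ["wood", "glass", "steel", "plastic"]

# Placeholder: 200 synthetic 16-dimensional "radar" feature
# vectors per material, one cluster per class.
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(200, 16))
               for i in range(len(MATERIALS))])
y = np.repeat(np.arange(len(MATERIALS)), 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```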
The RadarCat paper, which appears in the Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST ’16), can be accessed via:
- The ACM SIGCHI OpenTOC page for UIST 2016 (search for RadarCat), free until Oct 2017
- Directly to the ACM Digital Library page for RadarCat
- Or via the University of St Andrews Research portal.
Some Media Coverage
- Android Headlines
- Wareable: WatchMI wants to bring new gesture controls to existing smartwatches
- Android Police: Researchers in UK develop amazing new way to interact with Android Wear devices
- Engadget (German): WatchMI: touchscreen interactions on smartwatches that are fun
- Silicon India
- India Today
- Computing
- SlashGear
- ACM TechNews – SIGCHI Edition
This week sees a new member join SACHI. Chris Norval is a postdoctoral researcher working with Tristan Henderson on a project to predict when social media users consent to having their data used for health research.
Title: Control Theoretical Models of Pointing
Speaker: Rod Murray-Smith, University of Glasgow
http://www.dcs.gla.ac.uk/~rod/
Abstract: I will talk about two topics:
1. (Joint work with Jörg Müller & Antti Oulasvirta) I will present an empirical comparison of four models from manual control theory on their ability to model targeting behaviour by human users using a mouse: McRuer’s Crossover, Costello’s Surge, second-order lag (2OL), and the Bang-bang model. Such dynamic models are generative, estimating not only movement time but also pointer position, velocity, and acceleration on a moment-to-moment basis. We describe an experimental framework for acquiring pointing actions and automatically fitting the parameters of mathematical models to the empirical data. We present the use of time-series, phase-space and Hooke plot visualisations of the experimental data to gain insight into human pointing dynamics. We find that the identified control models can generate a range of dynamic behaviours that capture aspects of human pointing behaviour to varying degrees. Conditions with a low index of difficulty (ID) showed poorer fit because their unconstrained nature naturally leads to more dynamic variability. We report on characteristics of human surge behaviour in pointing and describe trade-offs among the models. We conclude that control theory offers a promising complement to Fitts’-law-based approaches in HCI, with models providing representations and predictions of human pointing dynamics that can improve our understanding of pointing and inform design. (A minimal simulation sketch of the 2OL model appears after this list.)
2. Casual control: how and why we can design systems to work at a range of levels of engagement.
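To make the first topic concrete, here is a minimal simulation sketch of the second-order lag (2OL) model, which treats the pointer as a damped spring pulled toward the target. The gain and damping values are invented for illustration, not fitted parameters from the study.

```python
# Minimal sketch of the second-order lag (2OL) pointing model:
#     x'' = k * (target - x) - d * x'
# k and d are illustrative values, not fitted parameters.
import numpy as np

def simulate_2ol(target, k=30.0, d=8.0, dt=0.002, t_max=2.0):
    n = int(t_max / dt)
    x, v = 0.0, 0.0
    out = np.empty((n, 4))           # time, position, velocity, acceleration
    for i in range(n):
        a = k * (target - x) - d * v # spring-damper acceleration
        v += a * dt                  # simple Euler integration
        x += v * dt
        out[i] = (i * dt, x, v, a)
    return out

traj = simulate_2ol(target=1.0)
print(f"position after 2 s: {traj[-1, 1]:.3f}")
# A phase-space plot is velocity vs. position; a Hooke plot is
# acceleration vs. position -- both can be drawn from this output.
```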
Biography: Roderick Murray-Smith is a Professor of Computing Science at Glasgow University, in the “Inference, Dynamics and Interaction” research group, and the Head of the Information, Data and Analysis Section. He works in the overlap between machine learning, interaction design and control theory. In recent years his research has included multimodal sensor-based interaction with mobile devices, mobile spatial interaction, brain-computer interaction and nonparametric machine learning. Prior to this he held positions at the Hamilton Institute (NUIM), the Technical University of Denmark, M.I.T., and Daimler-Benz Research in Berlin, and was Director of SICSA, the Scottish Informatics and Computer Science Alliance. He works closely with the mobile phone industry, having worked with Nokia, Samsung, FT/Orange, Microsoft and Bang & Olufsen. He was a member of Nokia’s Scientific Advisory Board and is a member of the Scientific Advisory Board for the Finnish Centre of Excellence in Computational Inference Research. He has co-authored three edited volumes, 22 journal papers, 16 book chapters, and 88 conference papers.
Title: Designing Real-time Interactions Using Machine Learning
Abstract: Supervised learning algorithms can be understood not only as a set of techniques for building accurate models of data, but also as design tools that can enable rapid prototyping, iterative refinement, and embodied engagement, all activities that are crucial in the design of new musical instruments and other embodied interactions. Realising the creative potential of these algorithms requires a rethinking of the interfaces through which people provide data and build models, providing for tight interaction-feedback loops and efficient mechanisms for people to steer and explore algorithm behaviours.
In this talk, I will discuss my research on better enabling composers, musicians, and developers to employ supervised learning in the design of new real-time systems. I will show a live demo of tools that I have created for this purpose, centred on the Wekinator software toolkit for interactive machine learning. I’ll discuss some of the outcomes from seven years of creating machine-learning-based tools and observing people using them in creative contexts. These outcomes include a better understanding of how machine learning can be used as a tool for design by end users and developers, and how using machine learning as a design tool differs from more conventional application contexts.
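To illustrate the workflow that such tools support, here is a minimal sketch of the interactive-machine-learning loop: the designer records paired examples of an input gesture and a desired output parameter, fits a model, and then maps live inputs through it. This is an illustrative stand-in, not Wekinator’s actual interface (Wekinator itself exchanges inputs and outputs over OSC), and all the data below is invented.

```python
# Minimal sketch of an interactive-machine-learning mapping loop.
# Placeholder data and model; not Wekinator's implementation.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Demonstrated examples: 2-D "gesture" inputs paired with a 1-D
# synthesis parameter the designer wants at each pose.
X_demo = np.array([[0.1, 0.2], [0.8, 0.9], [0.5, 0.4], [0.9, 0.1]])
y_demo = np.array([0.0, 1.0, 0.5, 0.7])

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                     random_state=0).fit(X_demo, y_demo)

# Real-time use: each incoming sensor frame becomes an output value.
new_frame = np.array([[0.6, 0.5]])
print(f"mapped parameter: {model.predict(new_frame)[0]:.2f}")
# If the mapping feels wrong, the designer adds more examples and
# retrains -- the tight feedback loop the abstract describes.
```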
Biography: Dr. Rebecca Fiebrink is a Lecturer at Goldsmiths, University of London. Her research focuses on designing new ways for humans to interact with computers in creative practice, including on the use of machine learning as a creative tool. Fiebrink is the developer of the Wekinator system for real-time interactive machine learning (with a new version just released in 2015!), a co-creator of the Digital Fauvel platform for interactive musicology, and a Co-I on the £1.6M Horizon 2020-funded RAPID-MIX project on Real-time Adaptive Prototyping for Industrial Design of Multimodal Expressive Technology. She is the creator of a MOOC titled “Machine Learning for Artists and Musicians,” which launched in 2016 on the Kadenze platform. She was previously an Assistant Professor at Princeton University, where she co-directed the Princeton Laptop Orchestra. She has worked with companies including Microsoft Research, Sun Microsystems Research Labs, Imagine Research, and Smule, where she helped to build the #1 iTunes app “I am T-Pain.” She holds a PhD in Computer Science from Princeton University.