St Andrews HCI Research Group

News

Seminar: Learning Vocabulary in Augmented Reality Supported by Keyword Method 10th April 2024


VocabulARy: Learning Vocabulary in Augmented Reality Supported by Keyword Method

Abstract:

The “keyword method” is an effective mnemonic technique for learning vocabulary in a foreign language. It involves creating a mental association between the object the foreign word represents and a word in one’s native language that sounds similar (called the keyword). Learning foreign language vocabulary is enhanced when we encounter words in context. This context can be provided by the place or activity we are engaged with. This talk will present our work “VocabulARy” which enhances the language learning process by providing users with keywords and their visualisations in context using augmented reality (AR).

Bio:

Maheshya Weerasinghe is a Research Associate in Human-Computer Interaction (HCI) at the University of Glasgow, School of Computing Science, UK. Her research centres on extended reality and guided learning environments. She obtained her joint PhD in Computer Science at the University of St Andrews, UK, and the University of Primorska, Slovenia (2023).
Before joining the University of Glasgow, Maheshya engaged in collaborative research with the HICUP Lab, University of Primorska, Slovenia; the SACHI Lab, University of St Andrews, UK; the Mixed Reality Lab, University of Coburg, Germany; the IDM Lab, Nara Institute of Science & Technology, Japan; and Monash University, Malaysia.

More about Maheshya Weerasinghe Arachchillage

Event details:

  • When: 10th April 2024 12:30 – 13:30
  • Where: Jack Cole 1.19

 

If you’re interested in attending any of the seminars in room 1.19, please email the SACHI seminar coordinator: aaa8@st-andrews.ac.uk so they can make appropriate arrangements for the seminar based on the number of attendees.

Seminar: User Language and Perspective in Speech-Based Human-Machine Dialogue 3rd April 2024


Perspective taking, partner models and user language use in speech-based human-machine dialogue

Abstract:

Speech-based conversational user interfaces (CUIs), such as speech agents, are now commonplace. Design is critical in supporting and informing our perceptions of speech agents as dialogue partners (i.e. our partner models), which are commonly used to inform perspective taking in dialogue. My talk will explore how CUI design shapes our beliefs about a machine partner’s abilities, the dimensions relevant to partner models, why partner models are crucial to consider in speech agent interaction, and how this concept can help us begin to explain our language interactions with conversational AI more broadly.

Bio:

Benjamin R Cowan is Professor of Human-Computer Interaction at University College Dublin’s School of Information & Communication Studies in Ireland. He completed his undergraduate studies in Psychology & Business Studies (2006) as well as his PhD in Usability Engineering (2011) at the University of Edinburgh. His research lies at the juncture between psychology, human-computer interaction and communication systems, investigating how design impacts user behaviour in social, collaborative and communicative AI interactions.
Prof. Cowan is co-founder and co-director of the HCI@UCD group, one of the largest HCI research groups in Ireland. He is also Co-Principal Investigator in the SFI-funded ADAPT Centre, a world-leading €90+ million research centre on AI-driven content technologies, where he leads the Interaction and Control research strand. Prof. Cowan is also a co-founder of the ACM International Conference Series on Conversational User Interfaces (ACM CUI) and has been heavily involved in the ACM CHI conference, having acted as Associate Chair (AC: 2017–2018, 2021) and Subcommittee Chair (SC: 2022 & 2023) of the Understanding People Quantitative Methods Subcommittee.

Event details:

  • When: 3rd April 2024 12:30 – 13:30
  • Where: Jack Cole 1.19

 

If you’re interested in attending any of the seminars in room 1.19, please email the SACHI seminar coordinator: aaa8@st-andrews.ac.uk so they can make appropriate arrangements for the seminar based on the number of attendees.

Seminar: Tangible User Interfaces 13th March 2024


We have two presentations on 13th March focusing on tangible interfaces, by Laura Pruszko and Anna Carter.

Talk 1: Designing for Modularity – a modular approach to physical user interfaces

Abstract:

Physical user interfaces: future or history? While some of our old physical UIs are progressively being replaced by their graphical counterparts, humans still rely on physicality for eyes-free interaction. Shape-changing user interfaces – i.e. physical devices able to change their shape to accommodate the user, the task, or the environment – are often presented as a way to bridge the gap between the physicality of physical user interfaces and the flexibility of graphical user interfaces, but they come with their fair share of challenges. In this presentation, we will discuss these challenges under the specific scope of modular shape-changing interfaces: how do we design for modularity? What is the impact on the user? As these kinds of interfaces are not commonplace in our everyday lives, they introduce novel usability considerations for the HCI community to explore.

Bio:

Laura Pruszko is a lecturer in the Applied Computer Games department of Glasgow Caledonian University. Her research focuses on interaction with physical user interfaces and modular systems. She obtained her PhD from Grenoble Alpes University in 2023, as part of the multidisciplinary Programmable Matter consortium, which brings together people from diverse backgrounds – artists, entrepreneurs, HCI and robotics researchers – to collaborate towards enabling the long-term vision of Claytronics.

Talk 2: Sense of Place, Cultural Heritage and Civic Engagement

Abstract:

In this presentation, I will provide an overview of my recent work, where I implemented a range of interactive probes, exploring sense of place and cultural heritage within a regenerating city centre. Through these digital multimodal interactions, citizens actively participated in the sharing of cultural heritage, fostering a sense of belonging and nostalgia. Looking ahead, I’ll discuss how these insights inform my ongoing work at the intersection of the Digital Civics project and the Centre for Digital Citizens project. This presentation will not only offer my personal insights but also open the floor for collaborative discussions on integrating these crucial aspects into future embedded research.

Bio:

Anna Carter is a Research Fellow at Northumbria University with extensive experience in designing technologies for local council regeneration programs. Her work focuses on creating accessible digital experiences in a variety of contexts using human-centred methods and participatory design. She works on building the Digital Civics research capacities of early career researchers as part of the EU-funded DCitizens Programme, and on digital civics, outdoor spaces and sense of place as part of the EPSRC-funded Centre for Digital Citizens.

Event details:

  • When: 13th March 2024 12:00 – 14:00. There’ll be cakes and soft drinks from 12:00 onwards; the talks will be from 12:30 to 13:30.
  • Where: Jack Cole 1.33 (Soft drinks and cake provided by F&D)

Seminar: Rights-driven Development 28th February 2024


Abstract:

Alex will discuss a critique of modern software engineering and outline how it systematically produces systems with negative social consequences. To help counter this trend, he offers the notion of rights-driven development, which puts the concept of a right at the heart of software engineering practices. His first step towards rights-driven practices is to introduce a language for rights in software engineering. He provides an overview of the elements such a language must contain and outlines some ideas for developing a domain-specific language that can be integrated with modern software engineering approaches.

Bio:

Alex Voss is an Honorary Lecturer here at the School and an external member of our group. He was also a Technology Fellow at the Carr Center for Human Rights Policy at Harvard’s John F. Kennedy School of Government and an Associate in the Department of Philosophy at Harvard.

Alex holds a PhD in Informatics and works at the intersection of the social sciences and computer science. His current research aims to develop new representations, practices and tools for rights-respecting software engineering. He is also working on the role that theories of causation have in making sense of complex socio-technical systems.

His research interests include: causality in computing, specifically in big data and machine learning applications; human-centric co-realization of technologies; responsible innovation; computing and society; computer-based and computer-aided research methods.

More about Alex: https://research-portal.st-andrews.ac.uk/en/persons/alexander-voss

Event details:

  • When: 28th February 2024 12:30 – 13:30
  • Where: Jack Cole 1.19

 

If you’re interested in attending any of the seminars in room 1.19, please email the SACHI seminar coordinator: aaa8@st-andrews.ac.uk so they can make appropriate arrangements for the seminar based on the number of attendees.

Seminar: Deep Digitality and Digital Thinking


Abstract:

In an ACM Interactions column and an Irish HCI keynote I have explored Deep Digitality, an approach to the radical re-imagination of large-scale systems of society: manufacturing, healthcare and government. Deep Digitality starts from a counterfactual premise, asking what these systems would be like if digital technology had preceded the industrial revolution, the Medicis or even Hippocrates. Paradoxically, in some of these digital-first scenarios, digital technology is sparse, and yet there is clearly a digital mindset at play. It is the kind of thinking that underlies some of the more radical digital apps and products, and builds on the assumptions of a world where computation and sensing are cheap, communication and information are pervasive, and digital fabrication is mainstream. This digital thinking connects with other ‘thinkings’ (computational, design, management, systems) but appears distinct: less focused on decomposition and engineering than computational thinking, and more principle-driven than the process-driven approach of design thinking. I have been trying to distill some of the defining features and heuristic principles of Digital Thinking, and this talk captures some of this nascent work in progress.

Bio:

Alan Dix is Director of the Computational Foundry at Swansea University. Previously he spent 10 years in a mix of academic and commercial roles. He has worked in human–computer interaction research since the mid-1980s, and is the author of one of the major international textbooks on HCI as well as over 450 research publications, ranging from formal methods to design creativity, including some of the earliest papers in the HCI literature on topics such as privacy, mobile interaction, and gender and ethnic bias in intelligent algorithms. For ten years Alan lived on Tiree, a small Scottish island, where he engaged in a number of community research projects relating to heritage, communications, energy use and open data, and organised Tiree Tech Wave, a twice-yearly event that has now become peripatetic. In 2013, Alan walked the complete periphery of Wales, over a thousand miles. This was a personal journey, but also a research expedition, exploring the technology needs of the walker and the people along the way.
Alan’s role at the Computational Foundry has brought him back to his homeland. The Computational Foundry is a £30 million initiative to boost computational research in Wales with a strong focus on creating social and economic benefit. Digital technology is at a bifurcation point where it could simply reinforce existing structures of industry, government and health, or could allow us to radically reimagine and transform society. The Foundry is built on the belief that addressing human needs and human values requires and inspires the deepest forms of fundamental science.

Event details

  • When: 18th February 2020 14:00 – 15:00
  • Where: Cole 1.33b

Seminar: Blocks-based programming for fun and profit


Event Details

  • When: Friday 06 March 2020, 2-3pm
  • Where: JCB:1.33b – Teaching Laboratory

Abstract:

Visual programming environments have long been applied in an educational context for encouraging uptake of computer science, with a more recent focus on blocks-based programming as a means to teach computational thinking concepts.  Today, students in primary, secondary and even tertiary education are learning to code through blocks-based environments like Scratch and App Inventor, and studies in these settings have shown that they ease the transition to ‘real’ programming in high-level languages such as Java and Python.  My question is, do we need to bother with that transition?  Can we accomplish more with blocks than just programming for its own sake?  More ‘serious’ visual programming environments like LabVIEW for engineers, and Blueprints embedded in the Unreal Engine for game developers are testament to visual programming producing more than just toy programs, so how far could blocks go?  In this talk, I’ll give an overview of blocks-based programming and its applications outside education, including its role in my PhD project and current postdoctoral research in allowing end-users with no programming experience to tailor spoken dialog systems.

Bio:

Daniel is a postdoctoral research fellow working in the HCI group in UCD on the B-SPOKE project with Dr Ben Cowan.  The goal of this project is to open up the development of Spoken Dialog Systems to the end-user without programming experience, through techniques from the field of end-user development.  Prior to this, Daniel completed his PhD at the University of St Andrews, focusing on the adoption of an end-user development tool for psychology researchers to create their own data collection apps.  Daniel is especially interested in applying blocks-based programming (the visual approach to learning code used in well-known tools like Scratch) to domain-specific applications, allowing end-users to customise their software experiences without writing a single line of code.

 

Seminar: Harnessing Usability, UX and Dependability for Interactions in Safety Critical Contexts


Event Details

  • When: Monday 03 February 2020, 11:00 – 12:00
  • Where: JCB:1.33A – Teaching Laboratory

Abstract: Innovation and creativity are the research drivers of the Human-Computer Interaction (HCI) community, which is currently investing a vast amount of resources in the design and evaluation of “new” user interfaces and interaction techniques, leaving the correct functioning of these interfaces at the discretion of the helpless developers.  In the area of formal methods and dependable systems the emphasis is usually put on the correct functioning of the system, leaving its usability to secondary-level concerns (if at all addressed).  However, designing interactive systems requires blending knowledge from these domains in order to provide operators with enjoyable, usable and dependable systems.  The talk will present possible research directions, and their benefits, for combining several complementary approaches to engineer interactive critical systems.  Due to their specificities, addressing this problem requires the definition of methods, notations, processes and tools to go from early informal requirements to deployed and maintained operational interactive systems.  The presentation will highlight the benefits of (and the need for) an integrated framework for the iterative design of operators’ procedures and tasks, training material and the interactive system itself.  The emphasis will be on interaction techniques specification and validation, as their design is usually the main concern of HCI conferences.  A specific focus will be on automation, which is widely integrated in interactive systems both at the interaction-technique level and at the application level.  Examples will be taken from interactive cockpits on large civil commercial aircraft (such as the A380), satellite ground segment applications and Air Traffic Control workstations.

Bio: Dr. Philippe Palanque is Professor in Computer Science at the University Toulouse 3 “Paul Sabatier” and head of the Interactive Critical Systems group at the Institut de Recherche en Informatique de Toulouse (IRIT) in France. Since the late 80s he has been working on the development and application of formal description techniques for interactive systems. He has worked for more than 10 years on research projects to improve interactive Ground Segment Systems at the Centre National d’Etudes Spatiales (CNES) and is also involved in the development of software architectures and user interface modeling for interactive cockpits in large civil aircraft (funded by Airbus). He was involved in the research network HALA! (Higher Automation Levels in Aviation), funded by the SESAR programme, which targets building the future European air traffic management system. The main driver of Philippe’s research over the last 20 years has been to address Usability, Safety and Dependability in an even-handed way in order to build trustable safety-critical interactive systems. He is the secretary of the IFIP Working Group 13.5 on Resilience, Reliability, Safety and Human Error in System Development, was steering committee chair of the CHI conference series at ACM SIGCHI and chair of the IFIP Technical Committee 13 on Human-Computer Interaction.

 


Seminar: Toward magnetic force based haptic rendering and friction based tactile rendering


Event Details

  • When: Thursday 14 November 2019, 2-3pm
  • Where: JCB:1.33B – Teaching Laboratory

Abstract: Among all senses, the haptic system provides a unique and bidirectional communication channel between humans and the real world around them.  Extending the frontier of traditional visual rendering and auditory rendering, haptic rendering enables human operators to actively feel, touch and manipulate virtual (or remote) objects through force and tactile feedback, which further increases the quality of Human-Computer Interaction.  It has been used effectively in a number of applications including surgical simulation and training, virtual prototyping, data visualization, nano-manipulation, education and other interactive applications.  My work will explore the design and construction of our magnetic haptic interface for force feedback, and our surface-friction-based tactile rendering system, which combines the electrovibration effect and the squeeze film effect.

Bio: Dr Xiong Lu is an Associate Professor in the College of Control Engineering at Nanjing University of Aeronautics and Astronautics, and an academic visitor with the St Andrews HCI research group in the School of Computer Science at the University of St Andrews.  He received his Ph.D. in Measuring and Testing Technologies and Instruments from Southeast University, China.  His main research interests are Human-Computer Interaction, Haptic Rendering and Tactile Rendering.

 

DLS: Multimodal human-computer interaction: past, present and future


Event details

  • When: 8th October 2019 09:30 – 15:15
  • Where: Byre Theatre
  • Series: Distinguished Lectures Series
  • Format: Distinguished lecture

Speaker: Stephen Brewster (University of Glasgow)
Venue: The Byre Theatre

Timetable:

9:30 Lecture 1: The past: what is multimodal interaction?
10:30 Coffee break
11:15 Lecture 2: The present: does it work in practice?
12:15 Lunch (not provided)
14:15 Lecture 3: The future: where next for multimodal interaction?

Speaker Bio:

Professor Brewster is Professor of Human-Computer Interaction in the Department of Computing Science at the University of Glasgow, UK. His main research interest is in multimodal human-computer interaction, covering sound, haptics and gestures. He has done a great deal of research into Earcons, a particular form of non-speech sound.

He did his degree in Computer Science at the University of Hertfordshire in the UK. After a period in industry he did his PhD in the Human-Computer Interaction Group at the University of York in the UK with Dr Alistair Edwards. The title of his thesis was “Providing a structured method for integrating non-speech audio into human-computer interfaces”. That is where he developed his interests in Earcons and non-speech sound.

After finishing his PhD he worked as a research fellow for the European Union as part of the European Research Consortium for Informatics and Mathematics (ERCIM). From September 1994 to March 1995 he worked at VTT Information Technology in Helsinki, Finland. He then worked at SINTEF DELAB in Trondheim, Norway.

 

Seminar: Brain-based HCI – What brain data can tell us about HCI


Event details

  • When: Friday 25 October 2019, 2-3pm
  • Where: JCB:1.33B – Teaching Laboratory

Abstract: This talk will describe a range of our projects utilising functional Near Infrared Spectroscopy (fNIRS) in HCI.  As a portable alternative that is more tolerant of motion artefacts than EEG, fNIRS measures the amount of oxygen in the brain as, for example, mental workload creates demand.  As opposed to BCI (trying to control systems with our brain), we focus on brain-based HCI, asking what brain data can tell us about our software, our work, our habits, and ourselves.  In particular, we are driven by the idea that brain data can become personal data in the future.

Bio: Dr Max L. Wilson is an Associate Professor in the Mixed Reality Lab in Computer Science at the University of Nottingham.  His research focuses on evaluating mental workload in HCI contexts – as real-world as possible – primarily using functional Near Infrared Spectroscopy (fNIRS).  As a highly tolerant form of brain sensor, fNIRS is suitable for use in HCI research into user interface design, work tasks, and everyday experiences.  This work emerged from his prior research into the design and evaluation of complex user interfaces for information interfaces. Across these two research areas, Max has over 120 publications, including an Honourable Mention CHI 2019 paper on a Brain-Controlled Movie – The MOMENT.