St Andrews HCI Research Group

News

User Troubles during “Shoot St Andrews to Green”!


A map shows missing images from the OpenStreetMap for St Andrews


Many photos of St Andrews are missing from open-access maps. WikiShootMe allows anyone to add an image to places on Wikimedia and Wikipedia that don’t already have one. So we took the initiative to photograph St Andrews’ historic buildings and upload the pictures to Wikimedia Commons using WikiShootMe. However, WikiShootMe is currently a desktop-only website and is difficult to use when out and about. Many usability challenges emerged, leading us to turn this into a User-Centred Interaction Design project.

Background

WikiShootMe is a tool that shows Wikidata items, Wikipedia articles, and Wikimedia Commons images with coordinates, all on the same map. One of the IDEA network activities involves inviting participants to use this tool, available only through the website, to add an image to places on the map that don’t already have one. At a recent event, the IDEA network collaborated with Abd Alsattar Ardati, SACHI researcher and Postgraduate Development Officer at the Postgraduate Society, on hosting a pilot event to celebrate the town of St Andrews. St Andrews is recognised globally for its history and its more than 600-year-old University, but most photos of its places are missing from open-access maps! Abd invited students and locals to a one-hour “Shoot St Andrews to Green” event on Saturday, the 20th of August 2022. The event aimed to spark a discussion and encourage attendees to become “open knowledge activists” and contribute as part of a network of others doing the same at the University. The session showcased opportunities to develop skills in photography and team collaboration while filling information gaps about our town.

Problem

At a recent Shoot St Andrews to Green event, six participants used their phones’ browsers to take pictures and upload them to the WikiShootMe website.

At the start, the organiser demonstrated how to create an account, and then the participants went “hunting” for photos. With roughly six students walking around the town with their phones, it was challenging to track participants’ progress; some participants might upload the same photos. There were few error-prevention measures in place, and recovering from an error could be daunting. For example, some participants uploaded photos by mistake, and deleting a photo required a lengthy process with low discoverability. Other participants had issues uploading photos, which were subsequently lost when the page was refreshed.

This led Abd to collaborate with Xu Zhu and Kenneth Boyd on translating these challenges into a project for CS students taking the CS5042: User-Centred Interaction Design module, who will develop interfaces for a mobile app that could support similar future activities in a more user-friendly way.

Goals

In this project, we are looking for innovative and creative ways to present the relevant information about the process, along with the help and documentation that would allow participants to navigate the space. For example, if someone organised an event, who would decide which images are asked for? If someone wanted to see who had uploaded the most photos at the event, how would they find this information without losing focus on their main task? Can we gamify the process of uploading pictures? Can we design interfaces that prevent losing pictures if they fail to upload for any reason (e.g. an offline working mode)?

The current focus will be on developing an innovative and user-friendly visual interface to navigate the list of photos to be covered, what has been covered, and potential ways to get help from the organiser. It should also be easy to use for potential organisers who would like to add, amend or remove an event from the system. In addition, the organiser should be able to moderate, approve, and bulk upload images to Wikimedia Commons.

An awareness of technologies that could be leveraged for future implementation (for example, suggesting adding the image to a Wikipedia article, if it has one) would make a design more connected to the wider Wikimedia community. Involvement and building connections with the tool’s developers, Wikimedia community and design team are highly desirable and recommended for ensuring that the design fits the community norms and expectations.

We will post another blog about the project’s results soon.

Measuring heart rate and blood oxygen remotely in the home


Pireh Pirzada has developed and validated a first remote photoplethysmography (rPPG) system, the Automated Remote Pulse Oximetry System (ARPOS), that measures both heart rate and blood oxygenation levels remotely within participants’ home environments (real-life scenarios).

The research shares the first data set collected from real-life scenarios, covering factors such as skin pigmentation, illumination, beards, makeup, and glasses. It also shares the experiment protocol and the source code used to collect and analyse the data.

 

Abstract:

Current methods of measuring heart rate (HR) and oxygen levels (SPO2) require physical contact, are individualised, and for accurate oxygen levels may also require a blood test. No-touch or non-invasive technologies are not currently commercially available for use in healthcare settings. To date, there has been no assessment of a system that measures HR and SPO2 using commercial off-the-shelf camera technology that utilises R, G, B, and IR data. Moreover, no formal remote photoplethysmography studies have been performed in real-life scenarios with participants at home with different demographic characteristics. This novel study addresses all these objectives by developing, optimising, and evaluating a system that measures the HR and SPO2 of 40 participants. HR and SPO2 are determined by measuring the frequencies from different wavelength band regions using FFT and radiometric measurements after pre-processing face regions of interest (forehead, lips, and cheeks) from colour, IR, and depth data. Detrending, interpolating, Hamming windowing, and normalising the signal with FastICA produced the lowest RMSE of 7.8 for HR, with an r-correlation value of 0.85, and an RMSE of 2.3 for SPO2. This novel system could be used in several critical care settings, including care homes and hospitals, and could prompt clinical intervention as required.

Keywords: remote health monitoring; heart rate measurement; blood oxygenation level measurement; rPPG system
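The core HR step described in the abstract – windowing a pre-processed region-of-interest signal and reading the dominant frequency from its spectrum – can be sketched as follows. This is an illustrative simplification, not the authors’ ARPOS code: the function name and parameters are our own, and it omits the detrending, interpolation, FastICA and SPO2 stages.

```python
import numpy as np

def estimate_hr_bpm(signal, fs, lo=0.75, hi=4.0):
    """Estimate heart rate (BPM) from a pulse trace by FFT peak picking.

    signal: 1-D intensity trace from a facial region of interest
    fs: sampling rate in Hz
    lo, hi: plausible heart-rate band in Hz (45-240 BPM)
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                      # remove the DC offset
    x = x * np.hamming(len(x))            # Hamming window to reduce spectral leakage
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= lo) & (freqs <= hi)  # keep only physiologically plausible rates
    peak = freqs[band][np.argmax(power[band])]
    return peak * 60.0                    # Hz -> beats per minute

# Synthetic check: a 1.2 Hz pulse (72 BPM) sampled at 30 fps for 10 s
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / 30)
trace = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(len(t))
print(estimate_hr_bpm(trace, fs=30))  # ~72
```

Restricting the search to a physiologically plausible frequency band is what keeps slow illumination drift or camera noise from being mistaken for the pulse.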

 

The research outputs also include:

Dataset: https://doi.org/10.5281/zenodo.6522389

Experiment protocol: https://dx.doi.org/10.17504/protocols.io.n2bvj6zkxlk5/v1

Code: https://github.com/PirehP/ARPOSpublic

 

Researchers:

Pireh Pirzada

Collaborate and Celebrate the First Female Alumni


Collaborate and celebrate event poster

Students were invited to collaborate on researching digitised archival information about St Andrews’ first female alumni and thus expand the limited amount of information we have about these pioneering women.

Description:

The Postgraduate Development Officer collaborated with Tomas Vancisin, a SACHI Group researcher focusing on visualising historical university records, and the Inclusion Diversity Equity Accessibility (IDEA) network to host a pilot event to raise awareness about the University’s first female students.

The University of St Andrews is over 600 years old, but women have only been allowed to study here for the past 145 years. In 1877, 15 years before women were officially allowed to study at universities around Scotland, St Andrews started offering women the Lady Literate in Arts (LLA) qualification, which was equivalent to an MA degree. Despite the significance of this qualification for gender equality and beyond, the information we have about these pioneering women is sparse. The event aimed to spark a discussion and encourage attendees to become “knowledge activists” by looking for additional information about these women. In addition to filling information gaps about underrepresented women, the session also showcased opportunities to develop skills in digital media, research, public engagement, and team collaboration.

The long-term goal is to run Wikipedia training as a means to encourage writing Wikipedia biographical articles about LLAs we identify as notable.

Here is what one of the attendees said about their experience: 

“I really enjoyed searching for information about the LLA graduates. It was exciting to try and uncover what information is out there, and it was good fun. I also enjoyed hearing more about the IDEA network, and I am keen to get involved as a ‘knowledge activist’.”

More information:

 

HCI Staff Position at SACHI


Come and join our group! We are currently advertising for a new staff member to join our HCI group at the School of Computer Science.


Supporting the expansion and development of the SACHI group, topics of interest include but are not limited to: tangible computing, digital fabrication, ubiquitous computing, information visualisation, human-centred artificial intelligence, augmented reality, novel software and hardware interactions, and critical HCI. Expertise in the field of HCI, and technical expertise in the creation of hardware and/or software interactions, is of particular interest.


For more details: https://www.jobs.ac.uk/job/CRS296/lecturer-senior-lecturer-reader-in-human-computer-interaction-ac7180gb


Closing Date: 17th August 2022


Please share far and wide

HCI meets Constraint Programming


Understanding How People Approach Constraint Modelling and Solving – University of St Andrews and University of Victoria

Ruth Hoffmann will present the paper “Understanding How People Approach Constraint Modelling and Solving” at the 28th International Conference on Principles and Practice of Constraint Programming (CP 2022), taking place from July 31 to August 5, 2022 in Haifa, Israel.

This paper is a joint collaboration between the SACHI (Human-Computer Interaction) and Constraint Programming groups at the University of St Andrews, Scotland, and the University of Victoria, BC.

Abstract

Research in constraint programming typically focuses on problem solving efficiency. However, the way users conceptualise problems and communicate with constraint programming tools is often sidelined. How humans think about constraint problems can be important for the development of efficient tools that are useful to a broader audience. For example, a system incorporating knowledge on how people think about constraint problems can provide explanations to users and improve the communication between the human and the solver.
We present an initial step towards a better understanding of the human side of the constraint solving process. To our knowledge, this is the first human-centred study addressing how people approach constraint modelling and solving. We observed three sets of ten users each (constraint programmers, computer scientists and non-computer scientists) and analysed how they find solutions for well-known constraint problems. We found regularities offering clues about how to design systems that are more intelligible to humans.
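For readers outside constraint programming, here is a flavour of what “modelling and solving” means in practice (this toy example is ours, not from the paper): the classic N-Queens problem is modelled as no-attack constraints between queens, and a backtracking solver prunes any partial placement that violates them.

```python
def n_queens(n):
    """Backtracking search for one solution to the classic N-Queens
    constraint problem: place n queens on an n x n board so that no
    two share a row, column, or diagonal.
    Returns a list where index = row and value = column, or None."""

    def consistent(cols, col):
        # Constraint check: the new queen must not share a column or
        # a diagonal with any queen already placed.
        row = len(cols)
        return all(c != col and abs(c - col) != abs(r - row)
                   for r, c in enumerate(cols))

    def extend(cols):
        if len(cols) == n:
            return cols
        for col in range(n):
            if consistent(cols, col):        # prune inconsistent branches early
                result = extend(cols + [col])
                if result is not None:
                    return result
        return None

    return extend([])

print(n_queens(8))  # first solution found, e.g. [0, 4, 7, 5, 2, 6, 1, 3]
```

How a person decomposes such a problem into variables and constraints – versus, say, reasoning about the board visually – is exactly the kind of behaviour the study observes.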

Researchers

The paper can be found at: https://doi.org/10.4230/LIPIcs.CP.2022.28

Conference

Ruth will present the paper at the main conference and give an invited talk at ModRef 2022 to raise awareness of the benefits of understanding how people represent, model and solve constraint problems.

CP 2022 Conference link: https://easychair.org/smart-program/FLoC2022/CP-2022-08-03.html#talk:197219

ModRef 2022 link: https://easychair.org/smart-program/FLoC2022/ModRef-2022-07-31.html#talk:197355

More ModRef info: https://modref.github.io/ModRef2022.html#invtalks

Congratulations to Adam Binks, Alice Toniolo and Miguel Nacenta on publishing their paper ‘Representational transformations: Using maps to write essays’


The paper is open access: Representational transformations: Using maps to write essays.

Summary of the paper and its findings


We built a tool to study how writers move between map and text to write essays. The main takeaway is that important cognitive work happens in the transformation process between map and text representations.

There are lots of existing tools for building representations to support complex cognitive tasks – e.g. argument maps, text, notes, slides, sketches, and so on. But tool support for the transformations *between* representations is much more neglected – and we think it’s crucial!

We built Write Reason, a tool which combines a text editor and a mapping interface. You can drag parts of the map into the text, and parts of the text into the map, and it helps you keep them in sync.


We then studied how 20 students used Write Reason to write essays. You can interactively explore the maps and essays built by participants. We identified key properties of transformations: change in representation type, cardinality, and explicitness. And we found that most used an all-at-once batch translation, while a few used bit-by-bit interleaving. 


We think understanding transformations is crucial for building the next generation of multi-representational tools. How can we better support multi-transformation pipelines like these? Can automation unlock more complex + powerful workflows, which would be tedious to do manually?


Our findings revealed and falsified some of the key implicit assumptions that we baked into the design of Write Reason. We hope that these reflections will help other designers and researchers start one step ahead of us and avoid these mistakes!

Project page. Paper (open access).

Congratulations Dr. Carneiro & Dr. Carson


Thrilled to see Iain and Guilherme graduating this week. Congratulations on your well-deserved success Dr. Carneiro & Dr. Carson!

Seminar: Deep Digitality, and Digital Thinking


Abstract:

In an ACM Interactions column and an Irish HCI keynote I have explored Deep Digitality, an approach to the radical re-imagination of large-scale systems of society: manufacturing, healthcare and government. Deep Digitality takes the counter-factual premise of asking what these systems would be like if digital technology had preceded the industrial revolution, the Medicis or even Hippocrates. Paradoxically, in some of these digital-first scenarios, digital technology is sparse and yet there is clearly a digital mindset at play. It is the kind of thinking that underlies some of the more radical digital apps and products, and builds on the assumptions of a world where computation and sensing are cheap, communication and information are pervasive, and digital fabrication is mainstream. This digital thinking connects with other ‘thinkings’ (computational, design, management, systems) but appears distinct – less focused on decomposition and engineering than computational thinking, and driven more by principle than by process compared with design thinking. I have been trying to distill some of the defining features and heuristic principles of Digital Thinking, and this talk captures some of this nascent work in progress.

Bio:

Alan Dix is Director of the Computational Foundry at Swansea University. Previously he spent 10 years in a mix of academic and commercial roles. He has worked in human–computer interaction research since the mid 1980s, and is the author of one of the major international textbooks on HCI as well as over 450 research publications, from formal methods to design creativity, including some of the earliest papers in the HCI literature on topics such as privacy, mobile interaction, and gender and ethnic bias in intelligent algorithms. For ten years Alan lived on Tiree, a small Scottish island, where he engaged in a number of community research projects relating to heritage, communications, energy use and open data, and organised a twice-yearly event, Tiree Tech Wave, that has now become peripatetic. In 2013, Alan walked the complete periphery of Wales, over a thousand miles. This was a personal journey, but also a research expedition, exploring the technology needs of the walker and the people along the way.
Alan’s role at the Computational Foundry has brought him back to his homeland. The Computational Foundry is a 30-million-pound initiative to boost computational research in Wales with a strong focus on creating social and economic benefit. Digital technology is at a bifurcation point where it could simply reinforce existing structures of industry, government and health, or could allow us to radically reimagine and transform society. The Foundry is built on the belief that addressing human needs and human values requires and inspires the deepest forms of fundamental science.

Event details

  • When: 18th February 2020 14:00 - 15:00
  • Where: Cole 1.33b

Seminar: Blocks-based programming for fun and profit


Event Details

  • When: Friday 06 March 2020, 2-3pm

  • Where: JCB:1.33b – Teaching Laboratory

Abstract:

Visual programming environments have long been applied in an educational context for encouraging uptake of computer science, with a more recent focus on blocks-based programming as a means to teach computational thinking concepts.  Today, students in primary, secondary and even tertiary education are learning to code through blocks-based environments like Scratch and App Inventor, and studies in these settings have shown that they ease the transition to ‘real’ programming in high-level languages such as Java and Python.  My question is, do we need to bother with that transition?  Can we accomplish more with blocks than just programming for its own sake?  More ‘serious’ visual programming environments like LabVIEW for engineers, and Blueprints embedded in the Unreal Engine for game developers are testament to visual programming producing more than just toy programs, so how far could blocks go?  In this talk, I’ll give an overview of blocks-based programming and its applications outside education, including its role in my PhD project and current postdoctoral research in allowing end-users with no programming experience to tailor spoken dialog systems.

Bio:

Daniel is a postdoctoral research fellow working in the HCI group in UCD on the B-SPOKE project with Dr Ben Cowan.  The goal of this project is to open up the development of Spoken Dialog Systems to the end-user without programming experience, through techniques from the field of end-user development.  Prior to this, Daniel completed his PhD at the University of St Andrews, focusing on the adoption of an end-user development tool for psychology researchers to create their own data collection apps.  Daniel is especially interested in applying blocks-based programming (the visual approach to learning code used in well-known tools like Scratch) to domain-specific applications, allowing end-users to customise their software experiences without writing a single line of code.

 

Seminar: Harnessing Usability, UX and Dependability for Interactions in Safety Critical Contexts


Event Details

  • When: Monday 03 February 2020, 11:00 – 12:00
  • Where: JCB:1.33A – Teaching Laboratory

Abstract: Innovation and creativity are the research drivers of the Human-Computer Interaction (HCI) community which is currently investing a vast amount of resources in the design and evaluation of “new” user interfaces and interaction techniques, leaving the correct functioning of these interfaces at the discretion of the helpless developers.  In the area of formal methods and dependable systems the emphasis is usually put on the correct functioning of the system leaving its usability to secondary-level concerns (if at all addressed).  However, designing interactive systems requires blending knowledge from these domains in order to provide operators with enjoyable, usable and dependable systems.  The talk will present possible research directions and their benefits for combining several complementary approaches to engineer interactive critical systems.  Due to their specificities, addressing this problem requires the definition of methods, notations, processes and tools to go from early informal requirements to deployed and maintained operational interactive systems.  The presentation will highlight the benefits of (and the need for) an integrated framework for the iterative design of operators’ procedures and tasks, training material and the interactive system itself.  The emphasis will be on interaction techniques specification and validation as their design is usually the main concern of HCI conferences.  A specific focus will be on automation that is widely integrated in interactive systems both at interaction techniques level and at application level.  Examples will be taken from interactive cockpits on large civil commercial aircrafts (such as the A380), satellite ground segment application and Air Traffic Control workstations.

Bio: Dr. Philippe Palanque is Professor in Computer Science at the University Toulouse 3 “Paul Sabatier” and is head of the Interactive Critical Systems group at the Institut de Recherche en Informatique de Toulouse (IRIT) in France. Since the late 80s he has been working on the development and application of formal description techniques for interactive systems. He has worked for more than 10 years on research projects to improve interactive Ground Segment Systems at the Centre National d’Etudes Spatiales (CNES) and is also involved in the development of software architectures and user interface modelling for interactive cockpits in large civil aircraft (funded by Airbus). He was involved in the research network HALA! (Higher Automation Levels in Aviation), funded by the SESAR programme, which targets building the future European air traffic management system. The main driver of Philippe’s research over the last 20 years has been to address Usability, Safety and Dependability in an even-handed way in order to build trustable safety-critical interactive systems. He is the secretary of the IFIP Working Group 13.5 on Resilience, Reliability, Safety and Human Error in System Development, was steering committee chair of the CHI conference series at ACM SIGCHI, and chair of the IFIP Technical Committee 13 on Human-Computer Interaction.

 
