St Andrews HCI Research Group

News

Seminar: Brain-based HCI – What brain data can tell us about HCI


Event details

  • When: Friday 25 October, 2-3pm
  • Where: JCB:1.33B – Teaching Laboratory

Abstract: This talk will describe a range of our projects utilising functional Near Infrared Spectroscopy (fNIRS) in HCI. As a portable alternative that is more tolerant of motion artefacts than EEG, fNIRS measures the amount of oxygen in the brain, which changes as, for example, mental workload creates demand. As opposed to BCI (trying to control systems with our brain), we focus on brain-based HCI, asking what brain data can tell us about our software, our work, our habits, and ourselves. In particular, we are driven by the idea that brain data can become personal data in the future.

Bio: Dr Max L. Wilson is an Associate Professor in the Mixed Reality Lab in Computer Science at the University of Nottingham. His research focus is on evaluating Mental Workload in HCI contexts – as real-world as possible – primarily using functional Near Infrared Spectroscopy (fNIRS). As a highly tolerant form of brain sensor, fNIRS is suitable for use in HCI research into user interface design, work tasks, and everyday experiences. This work emerged from his prior research into the design and evaluation of complex user interfaces for information interfaces. Across these two research areas, Max has over 120 publications, including an Honourable Mention CHI 2019 paper on a Brain-Controlled Movie – The MOMENT.

Seminar: Maps, Space and the 2D Plane from the Data and User Interface Perspective


Event details

  • When: Tuesday 15 October, 3-4pm
  • Where: School VI, United College
Title: “Maps, Space and the 2D Plane from the Data and User Interface Perspective”

Abstract: The 2D plane underpins most displays of information and therefore most of the ways in which interface designers and data analysts can dynamically represent information. As a user interface and information visualization designer/researcher I encounter the 2D plane often as a necessity and sometimes as an opportunity to enhance human cognitive processes.

Maps, which are the original example of using the 2D plane to represent information, often serve as inspiration. In this talk, I will discuss some of my most exciting encounters with the 2D plane and maps, and reflect on their deeper affordances to support thinking and understanding. I also hope to engage in conversation with you in the audience about what maps and the 2D plane mean for you and how you use them.


Rachel Menzies: Unlocking Accessible Escape Rooms: Is Technology the Key?


Event details

  • When: 2nd April 2019 14:00 – 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar

Abstract:

Escape rooms are popular recreational activities whereby players are locked in a room and must solve a series of puzzles in order to ‘escape’. Recent years have seen a large expansion in the technology used in these rooms in order to provide ever-changing and increasingly immersive experiences. This technology could be used to minimise accessibility issues for users, e.g. those with hearing or visual impairments, so that they can engage in the same way as their peers without disabilities. Escape room designers and players completed an online questionnaire exploring the use of technology and the accessibility of escape rooms. Results show that accessibility remains a key challenge in the design and implementation of escape rooms, despite the inclusion of technology that could be used to improve the experience of users with disabilities. This presentation will explore the lack of accessibility within escape rooms and the potential for technology to bridge this gap.

Speaker Bio:

Dr Rachel Menzies is the Head of Undergraduate Studies for Computing at the University of Dundee and is the current SICSA Director of Education (https://www.sicsa.ac.uk/education/). She co-directs the UX’d research group (https://www.ux-d.co.uk/) and her research interests include user-centred design with marginalised user groups, such as users with disabilities, as well as exploring novel interfaces, data visualisation and CS education. Her most recent work focusses on accessibility in escape rooms, in particular how users with varied disabilities can access and enjoy the experience alongside typical users.

Keynote at KAIST, Korea


Keynote at KAIST, Daejeon

Aaron with two of his co-authors, Juyoung Lee and Hyung-il Kim, at KAIST.


On February 1st, Professor Quigley delivered an invited talk as part of the ACM Distinguished Speaker Program during the HCI@KAIST International Workshop at KAIST, Daejeon, South Korea.

CUTE Centre Seminar, Singapore


Professor Aaron Quigley is currently in Singapore on sabbatical with the CUTE Centre. His welcome seminar was on the topic of Discreet Computing and showcased a number of SACHI projects.

SACHI Seminar: Jason Alexander (Lancaster University) – What would you do if you could touch your data?



Title:  What would you do if you could touch your data?
Abstract: Data Physicalizations are physical artefacts whose geometry or material properties encode data. They bring digital datasets previously locked behind 2D computer screens out into the physical world, enabling exploration, manipulation, and understanding using our rich tactile senses. My work explores the design and construction of dynamic data physicalizations, where users can interact with physical datasets that dynamically update. I will describe our data physicalization vision and show our progress on designing, building, and evaluating physicalizations and discuss the many exciting challenges faced by this emerging field.
Speaker biography:  Jason is a Senior Lecturer in the School of Computing and Communications at Lancaster University. He has a BSc(Hons) and PhD in Computer Science from the University of Canterbury in New Zealand and was previously a post-doctoral researcher at the University of Bristol. His research is broadly in Human-Computer Interaction, with a particular interest in developing novel interactive systems to bridge the physical-digital divide. His recent work focuses on the development of shape-changing interfaces—surfaces that can dynamically change their geometry based on digital content—and their application to data physicalization. He also has interests in digital fabrication and novel haptic interaction techniques.

Event details

  • When: 29th November 2018 14:00 - 15:00
  • Where: Cole 1.33a

SACHI Seminar – Professor Anirudha Joshi: The story of Swarachakra – Cracking the puzzle of text input in Indian languages



Title: The story of Swarachakra – Cracking the puzzle of text input in Indian languages
Abstract: There was a time when text input in Indian languages was called a ‘puzzle’. People found it so difficult that it became a barrier preventing them from using most other technology products and from doing common tasks such as searching the web or saving a contact. As a result, Indians typed very little in their own languages. The Roman script (in which we write English) is an Alphabet. In contrast, a large majority of Indian scripts are Abugidas – a different type of script. In our lab, we were convinced that we needed different solutions – what works for Alphabets may not work for Abugidas. Over the years we explored several designs. Our early solutions were for desktop computers. Later we developed concepts for feature phones. We tried several creative ideas and made prototypes. We got interesting results in the lab. We published papers and case studies. But beyond that, we could not reach out and make a difference to the end users. Then smartphones arrived and quickly became popular. It became relatively easier to develop and deploy keyboards. Again, we tried several ideas. One solution stood out in comparison with others. We called it “Swarachakra”. Today, Swarachakra is available for 12 Indian languages and has been downloaded by about 4 million users. What was the problem, and how was it solved? And what challenges remain? Come to the talk to find out.
Speaker biography: Anirudha Joshi is a professor in the interaction design stream in the IDC School of Design, IIT Bombay, India, though he is currently on sabbatical, visiting universities in the UK. He specialises in the design of interactive products for emergent users in developing economies. He has worked in diverse domains including healthcare, literacy, Indian language text input, banking, education, industrial equipment, and FMCG packaging. Anirudha also works in the area of integrating HCI activities with software engineering processes, and has developed process models, tools, and metrics to help HCI practitioners deliver a better user experience. Anirudha is active with HCI communities in India and outside, and has served in various chairing roles at several conferences including India HCI, INTERACT and CHI. Since 2007, he has represented India on IFIP TC13. He has been the founding director of the HCI Professionals Association of India since 2013, and since 2015 he has been the Liaison for India for the ACM SIGCHI Asian Development Committee. Since 2016, he has been the VP Finance of the ACM SIGCHI Executive Committee. Anirudha has a diverse background: a BTech (1989) in Electrical Engineering, an MDes (1992) in Visual Communication Design, and a PhD (2011) in Computer Science and Engineering, all from IIT Bombay.

Event details

  • When: 29th October 2018 15:00 - 16:00
  • Where: Cole 1.33a

SACHI Seminar – Professor Patrick Olivier – Digital Civics: Infrastructuring Participatory Citizenship



Title:  Digital Civics: Infrastructuring Participatory Citizenship
Abstract: Firstly, this is not a technical talk; it is a talk about a research initiative in “Digital Civics” that Open Lab is undertaking, primarily with partners in the North East of England, but also nationally and internationally. Digital Civics proposes the use of digital technologies in the provision of relational models of public services, that is, models that take as a starting point the potential of digital technologies to support citizen-focused sharing of knowledge, experience and resources. By framing government as more than simply the provider of uniform and mechanistic services, digital civics aims to leverage technology to foster environments in which local agents (e.g. charities, local businesses, citizens) are able to solve problems together. Digital Civics research is inherently cross-disciplinary, action-oriented and place-based, and this requires us (as academic researchers) to configure ourselves differently to the communities with whom we conduct our research. In this talk I will describe examples of our digital civics research, from applications in community engagement and education to public health and social justice, as well as the trajectory and pragmatics of the overall endeavour.
Speaker biography:  Patrick Olivier is Professor of Human-Computer Interaction in the School of Computing, Newcastle University, UK. He founded and leads Open Lab, Newcastle University’s centre for cross-disciplinary research in digital technologies. His research interests span interaction design, social computing and ubiquitous computing, particularly in public service and civic application contexts (education, public health and social justice). He is director of the EPSRC Centre for Doctoral Training in Digital Civics (55 cross-disciplinary PhD students) and the EPSRC Digital Economy Research Centre (a multidisciplinary five-year project involving 25 postdocs).
Google Scholar:
https://scholar.google.co.uk/citations?hl=en&user=CUu9heMAAAAJ
ORCID:
https://orcid.org/0000-0003-2841-7580
Open Lab:
https://openlab.ncl.ac.uk/
Digital Civics:
https://digitalcivics.io/

Event details

  • When: 18th October 2018 14:00 - 15:00
  • Where: Cole 1.33a

SACHI Seminar – Professor Alan Dix: Sufficient Reason



Title:  Sufficient Reason
Abstract: A job candidate has been pre-selected for shortlist by a neural net; an autonomous car has suddenly changed lanes, almost causing an accident; the intelligent fridge has ordered an extra pint of milk. From the life-changing or life-threatening to day-to-day living, decisions are made by computer systems on our behalf. If something goes wrong, or even when the decision appears correct, we may need to ask the question, “why?” In the case of failures we need to know whether it is the result of a bug in the software; a need for more data, sensors or training; or simply one of those things: a decision correct in the context that happened to turn out badly. Even if the decision appears acceptable, we may wish to understand it for our own curiosity, peace of mind, or for legal compliance. In this talk I will pick up threads of research dating back to early work in the 1990s on gender and ethnic bias in black-box machine-learning systems, as well as more recent developments such as deep learning and concerns such as those that gave rise to the EPSRC human-like computing programme. In particular I will present nascent work on an AIX Toolkit (AI explainability): a structured collection of techniques designed to help developers of intelligent systems create more comprehensible representations of their reasoning. Crucial to the AIX Toolkit is the understanding that human-human explanations are rarely utterly precise or reproducible, but they are sufficient to inspire confidence and trust in a collaborative endeavour.
Speaker biography: Alan Dix is Director of the Computational Foundry at Swansea University. He previously spent 10 years in a mix of academic and commercial roles, most recently as Professor in the HCI Centre at the University of Birmingham and Senior Researcher at Talis. He has worked in human–computer interaction research since the mid 1980s, and is the author of one of the major international textbooks on HCI as well as of over 450 research publications, from formal methods to design creativity, including some of the earliest papers in the HCI literature on topics such as privacy, mobile interaction, and gender and ethnic bias in intelligent algorithms. Issues of space and time in user interaction have been a long-term interest, from his “Myth of the Infinitely Fast Machine” in 1987 to his co-authored book, TouchIT, on physicality in a digital age, due to be published in 2018. Alan organises a twice-yearly workshop, Tiree Tech Wave, on the small Scottish island where he has lived for 10 years, and where he has been engaged in a number of community research projects relating to heritage, communications, energy use and open data. In 2013, he walked the complete periphery of Wales, over a thousand miles. This was a personal journey, but also a research expedition, exploring the technology needs of the walker and the people along the way. The data from this, including 19,000 images, about 150,000 words of geo-tagged text, and many gigabytes of bio-data, are available in the public domain as an ‘open science’ resource. Alan’s new role at the Computational Foundry has brought him back to his homeland. The Computational Foundry is a £30 million initiative to boost computational research in Wales with a strong focus on creating social and economic benefit. Digital technology is at a bifurcation point where it could simply reinforce existing structures of industry, government and health, or could allow us to radically reimagine and transform society. The Foundry is built on the belief that addressing human needs and human values requires and inspires the deepest forms of fundamental science. http://alandix.com/

Event details

  • When: 16th October 2018 14:00 - 15:00
  • Where: Cole 1.33a

SACHI Seminar – Alyssa Goodman: Visualization and the Universe



Title: Visualization and the Universe: How and why astronomers, doctors, and you need to work together to understand the world around us
Abstract: Astronomy has long been a field reliant on visualization. First, it was literal visualization—looking at the Sky. Today, though, astronomers are faced with the daunting task of understanding gigantic digital images from across the electromagnetic spectrum and contextualizing them with hugely complex physics simulations, in order to make more sense of our Universe. In this talk, I will explain how new approaches to simultaneously exploring and explaining vast data sets allow astronomers—and other scientists—to make sense of what the data have to say, and to communicate what they learn to each other and to the public. In particular, I will talk about the evolution of the multi-dimensional linked-view data visualization environment known as glue (glueviz.org) and the Universe Information System called WorldWide Telescope (worldwidetelescope.org). I will explain how glue is being used in medical and geographic information sciences, and I will discuss its future potential to expand into all fields where diverse, but related, multi-dimensional data sets can be profitably analyzed together. Toward the aim of bringing the insights to be discussed to a broader audience, I will also introduce the new “10 Questions to Ask When Creating a Visualization” website, 10QViz.org.
Speaker biography: Alyssa Goodman is the Robert Wheeler Willson Professor of Applied Astronomy at Harvard University, and a Research Associate of the Smithsonian Institution. Goodman’s research and teaching interests span astronomy, data visualization, and online systems for research and education. Goodman received her undergraduate degree in Physics from MIT in 1984 and a Ph.D. in Physics from Harvard in 1989. She was awarded the Newton Lacy Pierce Prize from the American Astronomical Society in 1997, became a full professor at Harvard in 1999, was named a Fellow of the American Association for the Advancement of Science in 2009, and was chosen as Scientist of the Year by the Harvard Foundation in 2015. Goodman has served as Chair of the Astronomy Section of the American Association for the Advancement of Science and on the National Academy’s Board on Research Data and Information, and she currently serves on both the IAU and AAS Working Groups on Astroinformatics and Astrostatistics. Goodman’s personal research presently focuses primarily on new ways to visualize and analyze the tremendous data volumes created by large and/or diverse astronomical surveys, and on improving our understanding of the structure of the Milky Way Galaxy. She is working closely with colleagues at the American Astronomical Society, helping to expand the use of the WorldWide Telescope program in both research and education.

Event details

  • When: 12th October 2018 12:00 - 13:00
  • Where: Cole 1.33b