- When: 18th February 2020 14:00 - 15:00
- Where: Cole 1.33b
Visual programming environments have long been applied in an educational context for encouraging uptake of computer science, with a more recent focus on blocks-based programming as a means to teach computational thinking concepts. Today, students in primary, secondary and even tertiary education are learning to code through blocks-based environments like Scratch and App Inventor, and studies in these settings have shown that they ease the transition to ‘real’ programming in high-level languages such as Java and Python. My question is, do we need to bother with that transition? Can we accomplish more with blocks than just programming for its own sake? More ‘serious’ visual programming environments like LabVIEW for engineers, and Blueprints embedded in the Unreal Engine for game developers are testament to visual programming producing more than just toy programs, so how far could blocks go? In this talk, I’ll give an overview of blocks-based programming and its applications outside education, including its role in my PhD project and current postdoctoral research in allowing end-users with no programming experience to tailor spoken dialog systems.
Daniel is a postdoctoral research fellow working in the HCI group in UCD on the B-SPOKE project with Dr Ben Cowan. The goal of this project is to open up the development of Spoken Dialog Systems to the end-user without programming experience, through techniques from the field of end-user development. Prior to this, Daniel completed his PhD at the University of St Andrews, focusing on the adoption of an end-user development tool for psychology researchers to create their own data collection apps. Daniel is especially interested in applying blocks-based programming (the visual approach to learning code used in well-known tools like Scratch) to domain-specific applications, allowing end-users to customise their software experiences without writing a single line of code.
Abstract: Innovation and creativity are the research drivers of the Human-Computer Interaction (HCI) community, which is currently investing a vast amount of resources in the design and evaluation of “new” user interfaces and interaction techniques, leaving the correct functioning of these interfaces at the discretion of the helpless developers. In the area of formal methods and dependable systems, the emphasis is usually put on the correct functioning of the system, leaving its usability as a secondary concern (if it is addressed at all). However, designing interactive systems requires blending knowledge from these domains in order to provide operators with enjoyable, usable and dependable systems. The talk will present possible research directions, and their benefits, for combining several complementary approaches to engineer interactive critical systems. Due to their specificities, addressing this problem requires the definition of methods, notations, processes and tools to go from early informal requirements to deployed and maintained operational interactive systems. The presentation will highlight the benefits of (and the need for) an integrated framework for the iterative design of operators’ procedures and tasks, training material and the interactive system itself. The emphasis will be on interaction technique specification and validation, as their design is usually the main concern of HCI conferences. A specific focus will be on automation, which is widely integrated in interactive systems both at the interaction technique level and at the application level. Examples will be taken from interactive cockpits on large civil commercial aircraft (such as the A380), satellite ground segment applications and Air Traffic Control workstations.
Bio: Dr. Philippe Palanque is Professor in Computer Science at the University Toulouse 3 “Paul Sabatier” and is head of the Interactive Critical Systems group at the Institut de Recherche en Informatique de Toulouse (IRIT) in France. Since the late 80s he has been working on the development and application of formal description techniques for interactive systems. He has worked for more than 10 years on research projects to improve interactive Ground Segment Systems at the Centre National d’Etudes Spatiales (CNES) and is also involved in the development of software architectures and user interface modeling for interactive cockpits in large civil aircraft (funded by Airbus). He was involved in the research network HALA! (Higher Automation Levels in Aviation), funded by the SESAR programme, which aims at building the future European air traffic management system. The main driver of Philippe’s research over the last 20 years has been to address Usability, Safety and Dependability in an even-handed way, in order to build trustable safety-critical interactive systems. He is the secretary of the IFIP Working Group 13.5 on Resilience, Reliability, Safety and Human Error in System Development, was steering committee chair of the CHI conference series at ACM SIGCHI, and is chair of the IFIP Technical Committee 13 on Human-Computer Interaction.
He presented two papers and a demo at the conferences, and was a student volunteer at ISS.
WRIST: Watch-Ring Interaction and Sensing Technique for Wrist Gestures And Macro-Micro Pointing
Hui-Shyong Yeo, Juyoung Lee, Hyung-il Kim, Aakar Gupta, Andrea Bianchi, Daniel Vogel, Woontack Woo, Aaron Quigley
In Proceedings of the International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI ’19.
Opisthenar: Hand Poses and Finger Tapping Recognition by Observing Back of Hand Using Embedded Wrist Camera
Hui-Shyong Yeo, Erwin Wu, Juyoung Lee, Aaron Quigley and Hideki Koike
In Proceedings of the ACM Symposium on User Interface Software and Technology, UIST ’19.
Student volunteering at ISS 2019
Bill Buxton giving keynote at ISS 2019
Erwin Wu demoing Opisthenar at UIST 2019
Different WRIST pointing techniques
Title: Toward magnetic-force-based haptic rendering and friction-based tactile rendering
Abstract: Among all senses, the haptic system provides a unique and bidirectional communication channel between humans and the real world around them. Extending the frontier of traditional visual and auditory rendering, haptic rendering enables human operators to actively feel, touch and manipulate virtual (or remote) objects through force and tactile feedback, which further increases the quality of Human-Computer Interaction. It has been effectively used for a number of applications including surgical simulation and training, virtual prototyping, data visualization, nano-manipulation, education and other interactive applications. My talk will explore the design and construction of our magnetic haptic interface for force feedback, and our surface-friction-based tactile rendering system, which combines the electrovibration and squeeze film effects.
Bio: Dr Xiong Lu is an Associate Professor in the College of Control Engineering at Nanjing University of Aeronautics and Astronautics, and is an academic visitor in the St Andrews HCI research group in the School of Computer Science at the University of St Andrews. He received his Ph.D. degree in Measuring and Testing Technologies and Instruments from Southeast University, China. His main research interests are Human-Computer Interaction, haptic rendering and tactile rendering.
Fearn will present her research on exploring the free-form visualization processes of children. Xu will present his work on how people visually represent discrete constraint problems. Uta has been involved in research that introduces design by immersion as a novel transdisciplinary approach to problem-driven visualization. She is also co-chairing the VIS Doctoral Colloquium this year, and is co-organizing the 4th workshop on Visualization for the Digital Humanities (VIS4DH’19).
Design by Immersion: A Transdisciplinary Approach to Problem-driven Visualizations [preprint]
Kyle Wm. Hall, Adam Bradley, Uta Hinrichs, Samuel Huron, Jo Wood, Christopher Collins and Sheelagh Carpendale.
Tuesday, Oct. 22 – 2:35-3:50 PM [preview video]
Provocations; Ballroom A
Construct-A-Vis: Exploring the Free-form Visualization Processes of Children [preprint]
Fearn Bishop, Johannes Zagermann, Ulrike Pfeil, Gemma Sanderson, Harald Reiterer and Uta Hinrichs.
Wednesday, Oct. 23 – 2:20-3:50 PM
(De)Construction; Ballroom A
How People Visually Represent Discrete Constraint Problems [TVCG paper; PDF]
Xu Zhu, Miguel Nacenta, Özgür Akgün and Peter W. Nightingale
Thursday, Oct. 24 – 9:00-10:30 AM [preview video]
Vis for Software and Systems; Ballroom B
Speaker: Stephen Brewster (University of Glasgow)
Venue: The Byre Theatre
9:30 Lecture 1: The past: what is multimodal interaction?
10:30 Coffee break
11:15 Lecture 2: The present: does it work in practice?
12:15 Lunch (not provided)
14:15 Lecture 3: The future: where next for multimodal interaction?
Professor Brewster is a Professor of Human-Computer Interaction in the Department of Computing Science at the University of Glasgow, UK. His main research interests are in multimodal Human-Computer Interaction: sound, haptics and gestures. He has done a lot of research into Earcons, a particular form of non-speech sound.
He did his degree in Computer Science at the University of Hertfordshire in the UK. After a period in industry he did his PhD in the Human-Computer Interaction Group at the University of York in the UK with Dr Alistair Edwards. The title of his thesis was “Providing a structured method for integrating non-speech audio into human-computer interfaces”. That is where he developed his interests in Earcons and non-speech sound.
After finishing his PhD he worked as a research fellow for the European Union as part of the European Research Consortium for Informatics and Mathematics (ERCIM). From September 1994 to March 1995 he worked at VTT Information Technology in Helsinki, Finland. He then worked at SINTEF DELAB in Trondheim, Norway.
Abstract: This talk will describe a range of our projects utilising functional Near Infrared Spectroscopy (fNIRS) in HCI. A portable alternative that is more tolerant of motion artefacts than EEG, fNIRS measures blood oxygenation in the brain, which changes as, for example, mental workload creates demand. As opposed to BCI (trying to control systems with our brain), we focus on brain-based HCI, asking what brain data can tell us about our software, our work, our habits, and ourselves. In particular, we are driven by the idea that brain data can become personal data in the future.
Bio: Dr Max L. Wilson is an Associate Professor in the Mixed Reality Lab in Computer Science at the University of Nottingham. His research focus is on evaluating mental workload in HCI contexts – as real-world as possible – primarily using functional Near Infrared Spectroscopy (fNIRS). As a highly tolerant form of brain sensor, fNIRS is suitable for use in HCI research into user interface design, work tasks, and everyday experiences. This work emerged from his prior research into the design and evaluation of complex information interfaces. Across these two research areas, Max has over 120 publications, including an Honourable Mention CHI 2019 paper on a Brain-Controlled Movie – The MOMENT.
Our School of Computer Science is looking to recruit two people to join us in this unique and captivating place. Seven centuries of history link the students with the town, in an ancient yet modern institution where you will be at the forefront of topics in Computer Science such as Human-Computer Interaction. https://www.st-andrews.ac.uk
As noted in the School’s blog post on this, the school is particularly interested in recruiting someone with an interest in HCI into one of these posts.
The closing date is 25 October 2019
The School of Computer Science is looking to recruit a lecturer as part of a large ongoing expansion of our academic staff. We are especially, but not exclusively, interested in those working in Human-Computer Interaction.
We wish to appoint a Lecturer to join our vibrant teaching and research community, which is ranked amongst the best worldwide for Computer Science education and research. The successful candidate will be expected to have a range of interests, to be active in research publication that strengthens or complements those in the School, and to be capable of teaching the subject to undergraduate and taught postgraduate students who come to us with a wide range of backgrounds.
Candidates should hold a PhD in a cognate discipline. Excellent teaching skills and an interest in promoting knowledge exchange are essential. You should also have some familiarity with grant-seeking processes in relation to research councils and other sources. The lectureship comes with an academic promotion track to Senior Lecturer, Reader and Professor.
Professor Aaron Quigley from SACHI and Professor Yoshifumi Kitamura (Tohoku University, Japan) are the general co-chairs for the ACM CHI Conference on Human Factors in Computing Systems in Yokohama in 2021. CHI is hosted by ACM SIGCHI, the Special Interest Group on Computer-Human Interaction.
The ACM CHI Conference on Human Factors in Computing Systems is the premier international conference for the field of Human-Computer Interaction (HCI). This flagship conference is generally considered the most prestigious in the field of HCI and attracts thousands of international attendees annually.
CHI provides a place where researchers and practitioners can gather from across the world to discuss the latest HCI topics. It has been held since 1982 and this is only the second time CHI will be held in Asia.