<!--Speaker: Helen Purchase, University of Glasgow
Date/Time: 1-2pm May 15, 2012
Location: 1.33a Jack Cole, University of St Andrews (directions)-->
Abstract:
The visual design of an interface is not merely an ‘add-on’ to the functionality provided by a system: it is well known that it can affect user preference, engagement and motivation, but does it have any effect on user performance? Can the efficiency or effectiveness of a system be improved by its visual design? This seminar will report on experiments that investigate whether any such effect can be quantified and tested. Key to this question is the definition of an unambiguous, quantifiable characterisation of an interface’s ‘visual aesthetic’: ways in which this could be determined will be discussed.
About Helen:
Dr Helen Purchase is Senior Lecturer in the School of Computing Science at the University of Glasgow. She has worked in the area of empirical studies of graph layout for several years, and also has research interests in visual aesthetics, task-based empirical design, collaborative learning in higher education, and sketch tools for design. She is currently writing a book on empirical methods for HCI research.
News
Next week, in the context of the Psychology-Computer Science collaboration grant (PSY/CS), Manuel Spitschan from Psychology will be giving a workshop on eye movements and eye-tracking.
Please check the details here:
<!--Speaker: Umer Rashid, University of St Andrews, UK
Date/Time: 1-2pm May 1, 2012
Location: 1.33a Jack Cole, University of St Andrews (directions)-->
Abstract:
A very apparent drawback of mobile devices is that their screens do not allow for the display of large amounts of information at once without requiring interaction, which limits the possibilities for information access and manipulation on the go. Attaching a large external display can help a mobile device user view more content at once. We report on a study investigating how different configurations of input and output across displays affect task performance, subjective workload and preferences in map, text and photo search tasks. After conducting a detailed analysis of the performance differences across different UI configurations, we provide recommendations for the design of distributed user interfaces.
About Umer:
Umer Rashid conducted his PhD research under the supervision of Prof. Aaron Quigley in the School of Computer Science at the University of St Andrews. The goal of his research is to investigate the ways mobile interaction with external large displays can complement the inherent capabilities of each device, resulting in an enhanced user experience.
<!--Speaker: Helen Ai He, University of Calgary, Canada
Date/Time: 1-2pm April 10th, 2012
Location: 1.33a Jack Cole, University of St Andrews (directions)-->
Abstract:
Global warming, and the climate change it induces, is an urgent global issue. One remedy to this problem, and the focus of this talk, is motivating people to adopt eco-behaviours. One approach is the development of technologies that provide real-time feedback on energy usage (e.g. in the form of watts, monetary cost, or carbon emissions).
However, there is one problem: most technologies use a “one-size-fits-all” solution, providing the same feedback to differently motivated individuals at different stages of readiness, willingness and ability to change. I synthesize a wide range of motivational psychology literature to develop a motivational framework based on the Stages of Change (also known as the Transtheoretical) Model. If you are at all interested in motivation, behaviour change, or designing technologies to motivate behaviour change, this talk may be useful for you.
About Helen:
Helen Ai He completed her Masters in Computer Science (specializing in HCI) at the University of Calgary, Canada, under the supervision of Dr. Saul Greenberg and Dr. Elaine May Huang.
She worked as a software developer in SMART Technologies for a year and a half, and plans to begin an HCI PhD in September 2012. She is particularly interested in topics such as personal informatics, cross-cultural research, technology design for developing regions, and sustainable interaction design. Aside from research, she enjoys doing karate, climbing, artwork, and eating!
Congratulations to Per Ola and his co-author, who won an Honourable Mention (for Best Paper) at the Eye Tracking Research & Applications Symposium 2012. Only 3 of the 101 submitted papers received an honourable mention.
Kristensson, P.O. and Vertanen, K. 2012. The potential of dwell-free eye-typing for fast assistive gaze communication. In Proceedings of the 7th ACM Symposium on Eye-Tracking Research & Applications (ETRA 2012). ACM Press: 241-244.
“The Seventh ACM Symposium on Eye Tracking Research & Applications (ETRA 2012) was held in Santa Barbara, California on March 28th-30th, 2012. The ETRA conference series focuses on all aspects of eye movement research and applications across a wide range of disciplines. The symposium presents research that advances the state-of-the-art in these areas, leading to new capabilities in gaze tracking systems, gaze aware applications, gaze based interaction, eye movement data analysis, etc. For ETRA 2012, we invite papers in all areas of eye tracking research and applications.” [ETRA 2012 website]
Miguel Nacenta is currently in Switzerland, participating as a keynote speaker in the Swiss Workshop on Multi-display Environments, 2012. During the workshop, participants will have the opportunity to discuss the issues of designing and building multi-display environments. The workshop includes two other keynote speakers (Prof. Streitz and Prof. Reiterer) and a presentation of the Interactive Collaborative Environment (ICE) research project currently being undertaken by the Pervasive and Artificial Intelligence research group at the Department of Informatics of the University of Fribourg.
<!--Speaker: Sriram Subramanian, University of Bristol
Date/Time: 1-2pm March 6th, 2012
Location: 1.33a Jack Cole, University of St Andrews (directions)-->
Abstract:
The talk will present some of the recent research endeavours of the Bristol Interaction and Graphics group. The group has been exploring various technical solutions to create the next generation of touch interfaces that support multi-point haptic feedback as well as dynamic allocation of views to different users. The talk will rely on many videos of ongoing work to illustrate and describe our systems. I expect the talk to be accessible to all computer scientists and even to the lay public, and I particularly welcome discussion, feedback, and critique from the community.
About Sriram:
Dr. Sriram Subramanian is a Reader at the University of Bristol with research interests in Human-Computer Interaction (HCI). He is specifically interested in new forms of physical input. Before joining the University of Bristol, he worked as a senior scientist at Philips Research Netherlands and as an Assistant Professor in the Department of Computer Science at the University of Saskatchewan, Canada. You can find more details of his research interests on his group's page: http://big.cs.bris.ac.uk
Per Ola Kristensson will give two presentations at IUI 2012: 17th ACM International Conference on Intelligent User Interfaces in Lisbon, Portugal on February 14-17, 2012.
The first presentation is on Wednesday and is entitled “Performance comparisons of phrase sets and presentation styles for text entry evaluations”. This paper describes how we used crowdsourcing to empirically compare five publicly available phrase sets in two large-scale text entry experiments. We also investigated the impact of asking participants to memorise phrases before writing them versus allowing participants to see the phrase during text entry. The paper is co-authored with Keith Vertanen, an Assistant Professor of Computer Science at Montana Tech in the USA.
The second presentation is on Thursday and is entitled “Continuous recognition of one-handed and two-handed gestures using 3D full-body motion tracking sensors”. This paper is co-authored with SACHI members Thomas Nicholson and Aaron Quigley. In this paper we present a new bimanual gesture interface for the Kinect. Among other things, our evaluation shows that the system recognises one-handed and two-handed gestures with an accuracy of 92.7%–96.2%.
Per Ola will also introduce the keynote speaker Chris Bishop from Microsoft Research Cambridge on Thursday. Chris will talk about “…the crucial role played by machine learning in the Kinect 3D full-body motion sensor, which has recently become the fastest-selling consumer electronics device in history.”
Per Ola is a Workshop Co-Chair for IUI 2012 together with Andreas Butz, a Professor of Computer Science at the University of Munich in Germany.
<!--Speaker: Annalu Waller, University of Dundee
Date/Time: 1-2pm February 21st, 2012
Location: 1.33a Jack Cole, University of St Andrews (directions)-->
Abstract:
Augmentative and alternative communication (AAC) attempts to augment natural speech, or to provide alternative ways to communicate, for people with limited or no speech. Technology has played an increasing role in AAC. At the simplest level, people with complex communication needs (CCN) can cause a prestored message to be spoken by activating a single switch. At the most sophisticated level, literate users can generate novel text. Although some individuals with CCN become effective communicators, most do not: they tend to be passive communicators, responding mainly to questions or prompts at a one- or two-word level. Conversational skills such as initiation, elaboration and storytelling are seldom observed.
One reason for the reduced levels of communicative ability is that AAC technology provides the user with a purely physical link to speech output. The user is required to have sufficient language abilities and physical stamina to translate what they want to say into the code sequence of operations needed to produce the desired output. Instead of placing all the cognitive load on the user, AAC devices can be designed to support the cognitive and language needs of individuals with CCN, taking into account the need to scaffold communication as children develop into adulthood. A range of research projects, including systems to support personal narrative and language play, will be used to illustrate the application of Human Computer Interaction (HCI) and Natural Language Generation (NLG) in the design and implementation of electronic AAC devices.
About Annalu:
Dr Annalu Waller is a Senior Lecturer in the School of Computing at the University of Dundee. She has worked in the field of Augmentative and Alternative Communication (AAC) since 1985, designing communication systems for and with nonspeaking individuals. She established the first AAC assessment and training centre in South Africa in 1987 before coming to Dundee in 1989. Her PhD developed narrative technology support for adults with acquired dysphasia following stroke. Her primary research areas are human-computer interaction, natural language generation, personal narrative and assistive technology. In particular, she focuses on empowering end users, including disabled adults and children, by involving them in the design and use of technology. She manages a number of interdisciplinary research projects with industry and practitioners from rehabilitation engineering, special education, speech and language therapy, nursing and dentistry. She is on the editorial boards of several academic journals and sits on the boards of a number of national and international organisations representing disabled people.
A new paper from SACHI in collaboration with the Interactions Lab (The HapticTouch Toolkit: Enabling Exploration of Haptic Interactions) will be presented at this year’s TEI conference (#tei2012, www.tei-conf.org/12/). The paper describes work on an API to facilitate the fast programming of haptic tabletop application prototypes.