<!--Speaker: Annalu Waller, University of Dundee
Date/Time: 1-2pm February 21st, 2012
Location: 1.33a Jack Cole, University of St Andrews (directions)-->
Abstract:
Augmentative and alternative communication (AAC) attempts to augment natural speech, or to provide alternative ways to communicate for people with limited or no speech. Technology has played an increasing role in AAC. At the simplest level, people with complex communication needs (CCN) can cause a prestored message to be spoken by activating a single switch. At the most sophisticated level, literate users can generate novel text. Although some individuals with CCN become effective communicators, most do not – they tend to be passive communicators, responding mainly to questions or prompts at a one or two word level. Conversational skills such as initiation, elaboration and storytelling are seldom observed.
One reason for the reduced levels of communicative ability is that AAC technology provides the user with a purely physical link to speech output. The user is required to have sufficient language abilities and physical stamina to translate what they want to say into the code sequence of operations needed to produce the desired output. Instead of placing all the cognitive load on the user, AAC devices can be designed to support the cognitive and language needs of individuals with CCN, taking into account the need to scaffold communication as children develop into adulthood. A range of research projects, including systems to support personal narrative and language play, will be used to illustrate the application of Human Computer Interaction (HCI) and Natural Language Generation (NLG) in the design and implementation of electronic AAC devices.
About Annalu:
Dr Annalu Waller is a Senior Lecturer in the School of Computing at the University of Dundee. She has worked in the field of Augmentative and Alternative Communication (AAC) since 1985, designing communication systems for and with nonspeaking individuals. She established the first AAC assessment and training centre in South Africa in 1987 before coming to Dundee in 1989. Her PhD developed narrative technology support for adults with acquired dysphasia following stroke. Her primary research areas are human computer interaction, natural language generation, personal narrative and assistive technology. In particular, she focuses on empowering end users, including disabled adults and children, by involving them in the design and use of technology. She manages a number of interdisciplinary research projects with industry and practitioners from rehabilitation engineering, special education, speech and language therapy, nursing and dentistry. She is on the editorial boards of several academic journals and sits on the boards of a number of national and international organisations representing disabled people.
<!--Speaker: Ken Scott-Brown, University of Abertay Dundee
Date/Time: 1-2pm February 7th, 2012
Location: 1.33a Jack Cole, University of St Andrews (directions)-->
Abstract:
In this talk I review examples from industry engagement activity that take well-known theories from cognitive science and use them to address common HCI problems, forming new questions that have in turn led to interface development. In the first part of the talk I discuss how a multi-disciplinary team including input from computer arts, computer games programming, engineering and psychology developed a multi-touch application to visualise financial planning targets on a Microsoft Surface. In the second part of the talk I will discuss how assistive agents displaying deictic gaze cuing have been implemented and evaluated using touch screen displays and eye-movement recording equipment. Both examples demonstrate how a practice-based approach to animation and an appreciation of vision science contribute to the understanding and development of intuitive interface design and implementation. The critical feature is the development of authentic animation conforming to the artistic principles of animation and the biological limits of the human visual system.
Bio:
Ken Scott-Brown is a lecturer at the Centre for Psychology at Abertay. After completing his Honours Degree and PhD in Psychology here at St Andrews, he undertook post-doctoral research posts at Glasgow Caledonian University, St Andrews, and Nottingham before taking on his current role. He is currently Principal Investigator on a series of industry- and public-sector-funded grants, and a collaborator on several more cross-discipline research projects. The projects are linked by the theme of data visualisation and interaction, using a blend of approaches informed by Cognitive Science and exploiting technologies and skills from the Computer Games Industry.
<!--Speaker: Adrian Friday, University of Lancaster
Date/Time: 4-5pm January 9th, 2012
Location: 1.33a Jack Cole, University of St Andrews (directions)-->
Abstract:
Previous work in eco-feedback has focused either on new sensing technologies, or on people’s responses to specific feedback devices and other interventions placed in their homes. We attempt a comprehensive approach based on a large-scale deployment of off-the-shelf sensors, coupled with face-to-face interviews, to account both for the amount of energy that specific appliances draw and for which occupant practices rely on the services these appliances provide. We performed a study in four student flats (each with 7–8 occupants) over a twenty-day period, collecting data from over two hundred sensors and conducting interviews with 11 participants. We build an account of life in the flats, and how that connects to the energy consumed. Our goal is to understand the challenges in accounting for both resources and practices at home, and what these challenges mean for the design of future feedback devices and interventions aimed at reducing energy consumption. In this talk we share the results of our recent analysis and our experiences of conducting Ubicomp deployments using off-the-shelf sensors to study energy use.
<!--Speaker: Mark Wright, University of Edinburgh
Date/Time: 1-2pm December 6th, 2011
Location: 1.33a Jack Cole, University of St Andrews (directions)-->
Abstract:
The cultural significance of Informatics is that it provides new forms of digital embodiment which lead to evolution of practices and meaning.
Engagement of Informatics with Design is a key approach to explore this relationship between technology and culture. This talk outlines how such an approach was developed over an extended period of engagement with the Arts and Humanities and Practitioners in the Digital Creative Industries.
Two projects in particular, Tacitus and Spellbinder, are used to illustrate this process.
Tacitus was a major AHRC/EPSRC project which explored tacit knowledge in designers and makers, and how this could be supported by computer design systems.
A novel haptic design system was developed which demonstrated significant improvements in ease of use.
Spellbinder was a new form of mobile application based on image matching using camera phones. Funded by the “Designing for the 21st Century” AHRC/EPSRC initiative, we explored the potential of this new medium through an iterative series of workshops, intense design interventions and reflection, which we termed Research by Design.
<!--Date/Time: 1-2pm October 27th, 2011
Location: 1.33a Jack Cole, University of St Andrews (directions)
Second Talk:
Speaker: David Flatla, Interaction Lab, University of Saskatchewan, Canada-->
Title: Using Situation-Specific Models of Colour Differentiation to Assist Individuals with Colour Vision Deficiency
Abstract:
Approximately 10% of the world’s population experiences congenital, acquired, or situationally-induced colour vision deficiency (CVD – commonly called colour blindness). People with CVD often confuse colours that those without CVD can distinguish. When working in digital environments, CVD can lead to problems ranging from minor nuisances (e.g., being unable to distinguish ‘visited’ from ‘not visited’ links on a webpage) to major safety concerns (e.g., not seeing colour-coded warning messages).
Recently, recolouring tools have been developed that modify the colours presented on a display to eliminate the colour confusion that people with CVD experience. However, these tools are limited to individuals with dichromatic CVD – a particularly severe and somewhat rare form of congenital CVD. As a result, individuals with acquired and situationally-induced CVD as well as those with non-dichromatic forms of congenital CVD continue to have difficulties.
In this talk, I will present my PhD research toward a new recolouring tool based on situation-specific models of colour differentiation. I will first present my work on situation-specific models that capture the colour differentiation abilities of any individual in any environment through a two-minute in-situ calibration procedure. I will then discuss my most recent work on developing a recolouring tool based on situation-specific models of colour differentiation.
About David:
David Flatla is a PhD student at the University of Saskatchewan in Canada under the supervision of Dr. Carl Gutwin. His research focusses on the field of accessibility, particularly on how to help individuals with colour vision deficiency (CVD – commonly called colour blindness). To do this, he invented situation-specific models of colour differentiation that utilize in-situ calibration to accurately capture how people differentiate colours. He publishes at conferences such as CHI and ASSETS. At UIST this year, he presented research exploring how to make boring calibrations fun by turning them into games.
<!--Speaker: Miguel Nacenta, SACHI University of St Andrews
Date/Time: 1-2pm November 22nd, 2011
Location: 1.33a Jack Cole, University of St Andrews (directions)-->
Abstract:
Our daily activities are continuously mediated by the space that we occupy; we face people to talk to them, sit in circular arrangements for group discussions, and write documents facing monitors. However, current interfaces generally make strong assumptions about where we are (e.g., monitors assume we are perpendicular and in front) or outright ignore important aspects of our spatial environments (e.g., several people editing a document). In my research I deal with the perceptual, cognitive and social aspects of space relationships that will shape the design of next generation interfaces. In this talk I will discuss projects that address questions such as: what happens when you look at a display from the “wrong” place? What forms of input are most efficient to interact with displays from different locations? How does having a private display affect our awareness of the work of others?
<!--Speaker: Aaron Quigley, SACHI University of St Andrews
Date/Time: 1-2pm November 15th, 2011
Location: 1.33a Jack Cole, University of St Andrews (directions)-->
Abstract:
Our bodies shape our experience of the world, and our bodies influence what we design. How important are the physical differences between people? Can we model the physiological differences and use the models to adapt and personalize designs, user interfaces and artifacts? Within many disciplines Digital Human Models and Standard Observer Models are widely used and have proven to be very useful for modeling users and simulating humans. In this paper, we create personalized digital human models of perception (Individual Observer Models), particularly focused on how humans see. Individual Observer Models capture how our bodies shape our perceptions. Individual Observer Models are useful for adapting and personalizing user interfaces and artifacts to suit individual users’ bodies and perceptions. We introduce and demonstrate an Individual Observer Model of human eyesight, which we use to simulate 3600 biologically valid human eyes. An evaluation of the simulated eyes finds that they see eye charts as humans do. We also demonstrate the Individual Observer Model successfully predicting how easy or hard it is to see visual information and visual designs. The ability to predict and adapt visual information to maximize its effectiveness is an important problem in visual design and analytics.
About Aaron:
In this talk Professor Aaron Quigley will present a paper he is presenting at the User Modeling, Adaptation and Personalization (UMAP) conference 2011 on July 12th in Barcelona, Spain. This work, Creating Personalized Digital Human Models of Perception for Visual Analytics, is joint work with his former PhD student Dr. Mike Bennett, now a postdoctoral fellow in the Department of Psychology at Stanford University.
Professor Aaron Quigley is the Chair of Human Computer Interaction in the School of Computer Science at the University of St Andrews. He is the director of SACHI and his appointment is part of SICSA, the Scottish Informatics and Computer Science Alliance. Aaron’s research interests include surface and multi-display computing, human computer interaction, pervasive and ubiquitous computing and information visualisation.
<!--Date/Time: 1-2pm October 27th, 2011
Location: 1.33a Jack Cole, University of St Andrews (directions)-->
Speaker: Anke Brock, IRIT research lab, Toulouse, France
Title: Touch the Map: Making Maps Accessible for the Blind
Abstract:
Human navigation is a very complex phenomenon that relies mainly on vision. Indeed, vision provides the pedestrian with landmarks and dynamic cues (e.g. optic flow) that are essential for position and orientation updating, estimation of distance, etc. Hence, for a blind person, navigating in a familiar environment is not straightforward, and it becomes especially complicated in unknown environments. Exploration of geographic maps at home (for travel preparation) or even on mobile phones (for guidance) may provide valuable assistance. As maps are inherently visual, and hence inaccessible to the blind, multimodal interactive maps represent a promising solution. Multimodal interactive maps are based on a combination of multi-touch devices and tactile (e.g. embossed) paper maps. However, the design and realization of interactive maps for the blind pose several challenges, such as making multi-touch surfaces accessible to blind users. This talk will present the concept and design of the maps, the work with blind users, the technical challenges, and the psychological background.
About Anke:
Anke Brock is currently a PhD candidate in Human-Computer Interaction at the IRIT research lab in Toulouse (France). She worked for several years as a research engineer on navigation and driver assistance systems at Bosch in Hildesheim (Germany). Anke obtained her master’s degree in Human-Computer Interaction in September 2010 at the University of Toulouse. Since then her research interests have included accessibility of technology for the blind, interactive maps, tabletops, multimodal interaction, spatial cognition and haptic exploration, as well as accessibility of the participatory design process.
<!--Speaker: Sean Lynch, Innovis group/Interactions lab, University of Calgary, Canada
Date/Time: 1-2pm September 28th, 2011
Location: 1.33a Jack Cole, University of St Andrews (directions)-->
Abstract:
Information visualization and new paradigms of interaction are generally applied to productive processes (i.e., at work) or for personal and entertainment purposes. In my work, I have looked instead at how to apply new technologies and visualization techniques to art. I will present two projects: one focusing on multi-touch music composition and performance, and one on the visual analysis of the history and visual features of fine paintings.
About Sean:
Sean Lynch is a Master’s Student in Computer Science at the Interactions Lab at the University of Calgary. Sean’s research interests span interactive technologies (e.g., multi-touch), interactive art, and information visualization.
<!--Speaker: Mark Shovman, University of Abertay, Dundee
Date/Time: 2-3pm September 13th, 2011
Location: 1.33a Jack Cole, University of St Andrews (directions)-->
Abstract:
In natural and social sciences, novel insights are often derived from visual analysis of data. But what principles underpin the extraction of meaningful content from these visualisations? Abstract data visualisation can be traced at least as far back as 1801; but with the increase in the quantity and complexity of data that require analysis, standard tools and techniques are no longer adequate for the task. The ubiquity of computing power enables novel visualisations that are rich, multimodal and interactive; but what is the most effective way to exploit this power to support analysis of large, complex data sets? Often, the lack of fundamental theory is pointed out as a central ‘missing link’ in the development and assessment of efficient novel visualisation tools and techniques.
In this talk, I will present some first steps towards the theory of visualisation comprehension, drawing heavily on existing research in natural scene perception and reading comprehension. The central inspiration is the Reverse Hierarchy Theory of perceptual organisation, which is a recent (2002) development of the near-centennial Laws of Gestalt. The proposed theory comes complete with a testing methodology (the ‘pop-out’ effect testing) that is based on our understanding of the cognitive processes involved in visualisation comprehension.
About Mark:
Mark Shovman is a SICSA Lecturer in Information Visualisation in the Institute of Arts, Media and Computer Games Technology at the University of Abertay Dundee. He is an interdisciplinary researcher, studying the perception and cognition aspects of information visualisations, computer games, and immersive virtual reality. His recent research projects include the application of dynamic 3D link-charts in Systems Biology; alleviating cyber-sickness in VR helmets; and immersive VR as an art medium. Mark was born in Tbilisi, Georgia, and has lived in Jerusalem, Israel, since 1990. He can be found on LinkedIn.