News

Welcome to David Morrison


This week we welcomed another new face to SACHI. David Morrison is a computer programmer with a strong industry background, originally in games and more recently in mobile app development. David was also a technology fellow on the Code for Europe project in 2014.


Janet Read, Children, Text Input – and the Writing Process


Abstract:

The process of learning to write is both cognitive and motoric.  Forming symbols into words and committing them to a surface is a process laden with complexity; creating the meaning that will be represented by these words is even more complex.

Digital technologies provide opportunities and insights for the study of writing processes.  With keyboard capture and pen stroke capture, important information can be gathered to make writing systems better suited to children and to provide useful assistance to beginner writers.  Data captured during the electronic transcription of writing can also provide insights into how writing emerges as a form.
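
To make the idea of keyboard capture concrete, here is a minimal sketch of the kind of logging such systems perform; the class and field names are invented for illustration and do not come from any particular writing-capture tool:

```python
import time
from dataclasses import dataclass, field

@dataclass
class KeystrokeLog:
    """Collects timestamped keystrokes during a transcription session."""
    events: list = field(default_factory=list)

    def record(self, key: str) -> None:
        self.events.append((time.monotonic(), key))

    def inter_key_intervals(self) -> list:
        """Pauses between consecutive keystrokes, in seconds; long pauses
        are often read as planning or hesitation in writing research."""
        times = [t for t, _ in self.events]
        return [b - a for a, b in zip(times, times[1:])]

# Example: simulate a child typing "cat" with a hesitation before 't'.
log = KeystrokeLog()
for key, delay in [("c", 0.0), ("a", 0.3), ("t", 1.2)]:
    time.sleep(delay)
    log.record(key)
print(log.inter_key_intervals())
```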

This talk will present child computer interaction in the context of children writing using electronic means.  The marriage of the text input space, the digital ink space and the child will be explored using examples from recent research.

Bio:

Prof. Janet C. Read (BSc, PGCE, PhD) is an international expert in Child Computer Interaction. She has supervised 7 PhD students to completion, examined 14 PhD students in six different European countries, and currently supervises 8 PhD students studying a range of topics including the use of colour in teenage bedrooms, the design of interactive systems for dogs, the use of scaffolding in serious games, the use of text input to detect fraudulent password use, collaborative gaming for children, the evaluation of systems for children, and the forensic detection process.  Her own current research falls into three main areas. She has recently published several papers on the ethics of engaging with children in participatory research activities, offering a model for working with children that ensures they are given full information, together with a set of techniques that can be used to ensure that children’s contributions to interaction design are treated with respect. A second strand of interest is the study of fun and of the means to measure it; the Fun Toolkit, a set of tools for measuring children’s experience of interactive technology, is her most cited work and continues to be developed and examined.  The use of digital ink with children, and the whole area of text input for children, both with standard keyboards and with handwriting recognition, completes her current research portfolio. Professor Read has acted as PI on several projects and is the Editor in Chief of the International Journal of Child Computer Interaction.

This seminar is part of our ongoing series from researchers in HCI. See here for our current schedule.

Paper at WIPTTE 2014



This March, Anne-Marie Mann will attend the Workshop on the Impact of Pen and Touch Technology in Education (WIPTTE 2014) in College Station, Texas. The workshop focuses on the potential of pen- and touch-based computing in educational environments. Now in its 8th year, it draws approximately 150 participants from industry, academia and education, who travel to WIPTTE to share their tools, experiences and ideas for using this hands-on technology.  This year the keynote speakers are Barbara Tversky (Columbia University) and Randall Davis (MIT).

Anne-Marie has been awarded a registration scholarship and will present her paper “Digital Pen Technology’s Suitability to Support Handwriting Learning”, co-authored with Uta Hinrichs and Aaron Quigley, during the workshop. Anne-Marie hopes that the conference will provide an opportunity for open discussion of her recent study and research interests, and that this will prove useful for future projects.

Ruth Aylett, Team-buddy: Investigating a long-lived robot companion


Abstract:
In the EU-funded LIREC project, which finished last year, Heriot-Watt University investigated how a long-lived, multi-embodied (robot and graphical) companion might be incorporated into a work environment as a team buddy, running a final continuous three-week study. This talk gives an overview of the technology issues and some of the surprises from various user studies.

Bio:
Ruth Aylett is Professor of Computer Science in the School of Mathematical and Computer Sciences at Heriot-Watt University. She researches intelligent graphical characters, affective agent models, human-robot interaction, and interactive narrative. She was a founder of the International Conference on Intelligent Virtual Agents and was a partner in the large HRI project LIREC – see lirec.eu. She has more than 200 publications – book chapters, journal articles, and refereed conference papers – and coordinates the Autonomous Affective Agents group at Heriot-Watt University – see here.

This seminar is part of our ongoing series from researchers in HCI. See here for our current schedule.

Gregor Miller, OpenVL: Designing a computer vision abstraction for mainstream developers; and MyView: Using a personal video history for intuitive video navigation


Abstract:
I will be discussing two projects from the Human Communication Technologies lab at the University of British Columbia. The first is OpenVL, an abstraction of computer vision that provides developers with a description language which models vision problems and hides the complexity of individual algorithms and their parameters. The abstraction also provides facilities for hardware acceleration (and multiple implementations) and for the quick adoption of improvements to the state of the art. The second project is MyView, a video navigation framework that uses a personal video history for simpler browsing and search, as well as intuitive summary creation, social navigation and video editing.
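
To illustrate the flavour of a declarative vision abstraction, here is a hypothetical sketch: the developer states properties of the problem and a runtime chooses the algorithm. Every name below is invented for illustration; none of it is OpenVL’s actual API:

```python
# Hypothetical sketch of a declarative vision-task description, in the
# spirit of the OpenVL abstraction described above. All names here are
# invented for illustration; they are not the real OpenVL API.

class SegmentationTask:
    """Describes *what* is wanted; a runtime would pick the algorithm."""
    def __init__(self, colour_variation="low", boundary="sharp",
                 acceleration="any"):
        self.constraints = {
            "colour_variation": colour_variation,  # property of the scene
            "boundary": boundary,                  # property of the target
            "acceleration": acceleration,          # "cpu", "gpu" or "any"
        }

def solve(task):
    # A real runtime would map the description onto a concrete algorithm
    # (e.g. thresholding vs. graph cuts) and tune its parameters.
    if task.constraints["boundary"] == "sharp":
        return "edge-based segmentation, default parameters"
    return "region-growing segmentation, default parameters"

print(solve(SegmentationTask(boundary="sharp")))
```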

Bio:
Gregor Miller has been a Research Fellow in the Department of Electrical and Computer Engineering at UBC since early 2008, working in the areas of Computer Vision, Computer Graphics and Human-Computer Interaction, and in particular the strands which connect them. Dr. Miller works in the Human Communication Technologies Laboratory as lead researcher for the MyView and OpenVL projects. Prior to coming to UBC Dr. Miller worked as a Research Fellow in Computer Science at the University of Dundee, designing multi-viewpoint camera systems. He received his Ph.D. in Computer Vision and Graphics from the University of Surrey and a BSc (Honours) in Computer Science and Mathematics from the University of Edinburgh. Dr. Miller has also been a visiting researcher at the Max Planck Institute for Computer Science, and worked for three years as a software developer.

This seminar is part of our ongoing series from researchers in HCI. See here for our current schedule.

Jacob Eisenstein, Interactive Topic Visualization for Exploratory Text Analysis


Abstract:
Large text document collections are increasingly important in a variety of domains; examples of such collections include news articles, streaming social media, scientific research papers, and digitized literary documents. Existing methods for searching and exploring these collections focus on surface-level matches to user queries, ignoring higher-level thematic structure. Probabilistic topic models are a machine learning technique for finding themes that recur across a corpus, but there has been little work on how they can support end users in exploratory analysis. In this talk I will survey the topic modeling literature and describe our ongoing work on using topic models to support digital humanities research. In the second half of the talk, I will describe TopicViz, an interactive environment that combines traditional search and citation-graph exploration with a dust-and-magnet layout that links documents to the latent themes discovered by the topic model.
This work is in collaboration with:
Polo Chau, Jaegul Choo, Niki Kittur, Chang-Hyun Lee, Lauren Klein, Jarek Rossignac, Haesun Park, Eric P. Xing, and Tina Zhou
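
For readers new to topic modelling, the sketch below fits a tiny latent Dirichlet allocation model and prints the top words per theme. It uses scikit-learn rather than the TopicViz tooling, and the four-document corpus is invented:

```python
# Minimal topic-model sketch using scikit-learn (not the TopicViz code);
# the four-document "corpus" is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "stars telescope orbit galaxy telescope",
    "galaxy orbit stars planet",
    "election vote senate policy",
    "policy vote campaign election",
]

counts = CountVectorizer().fit(docs)
X = counts.transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Each row of components_ scores every vocabulary word for one latent theme.
vocab = counts.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [vocab[j] for j in topic.argsort()[-3:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
```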

Bio:
Jacob Eisenstein is an Assistant Professor in the School of Interactive Computing at Georgia Tech. He works on statistical natural language processing, focusing on social media analysis, discourse, and latent variable models. Jacob was a postdoctoral researcher at Carnegie Mellon and the University of Illinois. He completed his Ph.D. at MIT in 2008, winning the George M. Sprowls dissertation award.

Olivier Penacchio, A neurodynamical model of luminance perception


Abstract:
The perception of visual features as basic as black, grey and white may sound simple. However, the luminance we perceive, also called brightness, does not match luminance as physically measured. Instead, the perceived intensity of an area is modulated by the luminance of surrounding areas. This phenomenon is known as brightness induction, and it provides a striking demonstration that visual perception cannot be considered a simple pixel-wise sampling of the environment.

The talk will start with an overview of the classical examples of brightness induction and a quick look at the different theories underlying this phenomenon. We will then introduce a recently published* neurodynamical model of brightness induction. This model is based on the architecture of the primary visual cortex and successfully accounts for well-known psychophysical effects in both static and dynamic contexts. It suggests that a common simple mechanism may underlie different fundamental processes of visual perception, such as saliency and brightness perception. Finally, we will briefly outline potential applications in computer vision and medical imaging.
* Penacchio O, Otazu X, Dempere-Marco L (2013) A Neurodynamical Model of Brightness Induction in V1. PLoS ONE 8(5): e64086. doi:10.1371/journal.pone.0064086
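
The induction effect is easy to reproduce. The sketch below builds the classic simultaneous-contrast stimulus: two physically identical grey patches on dark and light surrounds. It generates the stimulus only; it is not the V1 model from the paper:

```python
# Simultaneous brightness contrast: two physically identical grey patches,
# one on a dark surround and one on a light surround.
import numpy as np

h, w, patch = 100, 100, 30
left = np.full((h, w), 0.2)   # dark surround
right = np.full((h, w), 0.8)  # light surround
for half in (left, right):
    y0, x0 = (h - patch) // 2, (w - patch) // 2
    half[y0:y0 + patch, x0:x0 + patch] = 0.5  # identical grey patch

stimulus = np.hstack([left, right])
# Both patches have luminance 0.5, yet the one on the dark surround is
# typically perceived as brighter -- the induction effect described above.
print(stimulus.shape, left[50, 50], right[50, 50])
```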


Bio:
Olivier Penacchio is a postdoctoral researcher in the School of Psychology and Neuroscience at the University of St Andrews. He is currently working on the perception and evolution of counter-shading camouflage. His background is in pure mathematics (algebraic geometry), and he is a recent convert to the area of vision research.

Kristian Wasen, The Visual Touch Regime: Real-Time 3D Image-Guided Robotic Surgery and 4D and “5D” Scientific Illustration at Work


Abstract:
Dr Kristian Wasen will present his recently published paper, co-authored with Meaghan Brierly. Emerging multidimensional imaging technologies (3D/4D/“5D”) open new ground for exploring visual worlds and rendering new image-based knowledge, especially in areas related to medicine and science. 3D imaging is defined as three visual dimensions; 4D imaging is three visual dimensions plus time. 4D imaging can also be combined with functional transitions (e.g., following a radioactive tracer isotope through the body in positron emission tomography); 4D imaging plus functionality is defined as “5D” imaging. We propose the idea of “visual touch”, a conceptual middle ground between touch and vision, as a basis for future explorations of contemporary institutional standards of image-based work. “Visual touch” is both the process of reconciling the senses (human and artificial) and the end result of this union of the senses. We conclude that while new multi-dimensional imaging technology emphasises vision, new forms of image-based work using visual materials cannot be classified as solely “visual”.

Bio:
Dr. Kristian Wasen is a postdoctoral fellow at the University of Gothenburg in Sweden. He holds two bachelor’s degrees, in Social Psychology and Industrial Management, from the University of Skövde, Sweden, and earned a master’s degree and a Ph.D. in Business Administration from the School of Business, Economics and Law in Gothenburg. His current research interests include health care robotics, technology’s social and professional impact, organisational design, usability evaluation, user acceptance, and efficient human-robot interaction.

Vinodh Rajan, Modeling & Analyzing the Development of Scripts


Abstract:
Human handwriting is a process that often generates variable output. Scripts generally begin with characters possessing consistent shapes, but the effects of variation tend to accumulate and modify a script’s appearance over time. The talk will start with a brief overview of scripts and related concepts. The Brahmic family of scripts will then be discussed as an example, in particular the variations that led to its development. This will be followed by a general introduction to handwriting modeling, covering techniques such as trajectory reconstruction, stroke segmentation and stroke modeling. There will then be a discussion of methods and techniques to model and analyze the development of scripts, along with prospective applications, and lastly a demo of what has been achieved so far.
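
As a flavour of stroke segmentation, the sketch below applies one common heuristic, cutting a pen trajectory at local minima of pen-tip speed; this is an illustrative assumption, not necessarily the method used in the talk:

```python
# One common stroke-segmentation heuristic (not necessarily the speaker's
# method): cut a pen trajectory at local minima of pen-tip speed, which
# tend to mark transitions between ballistic strokes.
import numpy as np

def segment_strokes(points, timestamps):
    """points: (n, 2) array of pen positions; timestamps: (n,) seconds."""
    pts = np.asarray(points, dtype=float)
    t = np.asarray(timestamps, dtype=float)
    v = np.linalg.norm(np.diff(pts, axis=0), axis=1) / np.diff(t)
    # Indices where speed is a local minimum = candidate stroke boundaries.
    cuts = [i + 1 for i in range(1, len(v) - 1)
            if v[i] < v[i - 1] and v[i] < v[i + 1]]
    return np.split(pts, cuts)

# Toy trajectory: fast, briefly slow (a boundary), then fast again.
xs = [0, 1, 2, 2.1, 3, 4]
strokes = segment_strokes([(x, 0) for x in xs], range(6))
print(len(strokes), "strokes")
```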

Bio:
Vinodh Rajan is a PhD student based within the School of Computing, here at the University of St. Andrews. Read more about Vinodh here.

Kyle Montague, The SUM Framework: An Exploration of Shared User Models and Adaptive Interfaces to Improve Accessibility of Touchscreen Interactions


Abstract:
Touchscreens are ever-present in today’s technologies. The large featureless sensors are rapidly replacing physical keys and buttons on a wide array of digital devices, most commonly the mobile device. Gaining popularity across all demographics and endorsed for the design flexibility of their soft interfaces and their rich gestural interactions, touchscreens currently play a pivotal role in digital technologies. However, just as touchscreens have enabled many people to engage with digital technologies, their barriers to access exclude many others with visual and motor impairments. Contemporary techniques for addressing these accessibility issues fail to consider the variable nature of abilities between people and the ever-changing characteristics of an individual’s impairment. User models for personalisation are often constructed from stereotypical generalisations about the similarities of people with disabilities, neglecting the unique characteristics of the individuals themselves. Existing strategies for measuring abilities and performance require users to complete exhaustive training exercises that disrupt the intended interactions, and result in descriptions of a user’s performance for that particular instance only.
This research aimed to develop novel techniques to support the continuous measurement of individual users’ needs and abilities through natural touchscreen interactions. The goal was to create detailed interaction models for individual users in order to understand the short- and long-term variances in their abilities and characteristics, and from these to develop interface adaptations that better support the interaction needs of people with visual and motor impairments.
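
As a toy illustration of building a user model from natural touch interactions, the sketch below accumulates tap offsets and derives a per-user correction; the class name, fields and statistic are invented for illustration, and this is not the SUM framework itself:

```python
# Toy illustration of modelling a user's touch accuracy from natural
# interactions, in the spirit of (but not taken from) the SUM framework.
from statistics import mean

class TouchModel:
    """Accumulates tap offsets (intended target centre vs. actual touch)."""
    def __init__(self):
        self.offsets_x, self.offsets_y = [], []

    def observe(self, target_xy, touch_xy):
        self.offsets_x.append(touch_xy[0] - target_xy[0])
        self.offsets_y.append(touch_xy[1] - target_xy[1])

    def suggested_correction(self):
        """Mean systematic offset; an adaptive UI could shift or enlarge
        touch targets by this amount for this user."""
        return mean(self.offsets_x), mean(self.offsets_y)

model = TouchModel()
for target, touch in [((50, 50), (54, 47)), ((120, 80), (125, 78))]:
    model.observe(target, touch)
print(model.suggested_correction())  # -> (4.5, -2.5)
```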

Bio:
Kyle Montague is a PhD student based within the School of Computing at the University of Dundee. Kyle works as part of the Social Inclusion through the Digital Economy (SiDE) research hub. He is investigating the application of shared user models and adaptive interfaces to improve the accessibility of digital touchscreen technologies for people with vision and motor impairments.

His doctoral studies explore novel methods of collecting and aggregating user interaction data from multiple applications and devices, creating domain-independent user models to better inform adaptive systems of individuals’ needs and abilities.

Alongside his research, Kyle works for iGiveADamn, a small digital design company he set up with fellow graduate Jamie Shek. Past projects have included the iGiveADamn Connect platform for charities, and the Scottish Universities Sports and Blood Donation mobile apps. Prior to this he completed an undergraduate degree in Applied Computing at the University of Dundee.