<!--Speaker: Ingi Helgason, Edinburgh Napier University
Date/Time: 2-3pm Jan 21, 2013
Location: 1.33b Jack Cole, University of St Andrews-->
Abstract:
This talk will present the work of the UrbanIxD project’s interdisciplinary summer school that took place in Croatia in August 2013. The goal of the summer school was the production of fictional concepts exploring the active role of citizens as designers, users and inhabitants in the emerging socio-technical situations that might characterise the Hybrid City of the near future. The built environment is already being enriched with layers of data-gathering computation and, combined with our own personal mobile technologies, this is offering a myriad of new urban informatics experiences and possibilities.
By employing a Critical Design methodology, the UrbanIxD FP7 project is providing an opportunity to rethink what networked and connected communities of the future might look like. The project is questioning the premise of the “smart city” and is developing a community of researchers with a shared commitment to foregrounding the human experience in the emerging field of Urban Interaction Design.
Bio:
Ingi Helgason is a research fellow working on the UrbanIxD project based at Edinburgh Napier University where she is also studying part-time towards a PhD in Interaction Design. She teaches technology design and innovation at the Open University and her research interests focus on technology-mediated interactions in public and urban spaces. She was a member of the executive committee of the BCS Create series of interaction design conferences, and was on the programme committee of the BCS HCI conference for 2012. Ingi is on the editorial board of the SpringerOpen Journal of Interaction Science (JoIS). She is one of the organisers of This Happened Edinburgh, a series of events focusing on the stories behind interaction design.
This seminar is part of our ongoing series from researchers in HCI. See here for our current schedule.
<!--Speaker: Ruth Aylett, Heriot-Watt University, Edinburgh
Date/Time: 1-2pm Sep 10, 2013
Location: 1.33a Jack Cole, University of St Andrews-->
Abstract:
In the EU-funded LIREC project, which finished last year, Heriot-Watt University investigated how a long-lived, multi-embodied (robot, graphical) companion might be incorporated into a work environment as a team buddy, running a final continuous three-week study. This talk gives an overview of the technology issues and some of the surprises from various user studies.
Bio:
Ruth Aylett is Professor of Computer Science in the School of Mathematical and Computer Sciences at Heriot-Watt University. She researches intelligent graphical characters, affective agent models, human-robot interaction, and interactive narrative. She was a founder of the International Conference on Intelligent Virtual Agents and was a partner in the large HRI project LIREC (see lirec.eu). She has more than 200 publications (book chapters, journal articles, and refereed conference papers) and coordinates the Autonomous Affective Agents group at Heriot-Watt University; see here.
Title: London and the 19th Century Global Commodity Trade: Industrialists and Economic Botanists
<!--Speaker: Jim Clifford, University of Saskatchewan, Saskatoon, Canada (@jburnford)
Date/Time: 1-2pm Thursday, August 15
Location: 1.33a Jack Cole, University of St Andrews-->
Abstract:
Greater London’s industry relied on overseas ghost acres for economic expansion. Britain did not have enough land to support the massive growth in industries such as soap making, and it could not grow tropical and sub-tropical plants, such as sugarcane or cinchona, on an economic scale. This project explores the environmental consequences of London’s industrial development during the long nineteenth century. For example, the soap industry’s transnational fat supply shifted from Russian tallow at the start of the century to animal fats from around the world, supplemented by palm oil from West Africa, coconut oil from Ceylon, and cottonseed oil from Egypt. This one industry’s supply chain represents a wider trend in which British industrialists increasingly relied on plantations, farms, forests, mines and oceans all over the world to supply essential raw materials. Along with finding new supplies to expand existing industries, London’s industrialists, and economic botanists at Kew Gardens, also searched the world for new economically viable plants, and both groups played a role in the transfer of seeds and living plants to establish new plantations throughout the British Empire. For example, the British created neo-South American landscapes in Sri Lanka (Ceylon) with cinchona and rubber plantations.
This presentation will discuss how I’m combining archival research on the soap industry and economic botany with a text-mined database created by the Trading Consequences research project. Our research team extracts a database of information about commodity flows throughout the British World during the nineteenth century by using computer algorithms to text-mine millions of pages of digitized historical documents. We then develop a range of visualizations to explore this large database. This new methodology allows us to explore a much wider range of commodity flows throughout the British World in the nineteenth century than traditional archival research.
Bio:
Jim Clifford is an environmental historian of Britain and the British World during the long-19th century. He uses digital methods to explore the global environmental consequences of Britain’s growing industrial economy. Jim is interested in the intersections between environmental, social and political history. In particular, he researches how communities responded to worsening environmental conditions.
<!--Speaker: Gregor Miller, The University of British Columbia, Canada
Date/Time: 1-2pm July 16, 2013
Location: 1.33a Jack Cole, University of St Andrews-->
Abstract:
I will be discussing two projects from the Human Communication Technologies lab at the University of British Columbia. The first is OpenVL, an abstraction of computer vision which provides developers with a description language that models vision problems and hides the complexity of individual algorithms and their parameters. Additionally, this provides facilities for hardware acceleration (and multiple implementations) and quick inclusion of improvements to the state of the art. The second project is MyView, a video navigation framework utilising a personal video history for simpler browsing and search, as well as intuitive summary creation, social navigation and video editing.
Bio:
Gregor Miller has been a Research Fellow in the Department of Electrical and Computer Engineering at UBC since early 2008, working in the areas of Computer Vision, Computer Graphics and Human-Computer Interaction, and in particular the strands which connect them. Dr. Miller works in the Human Communication Technologies Laboratory as lead researcher for the MyView and OpenVL projects. Prior to coming to UBC Dr. Miller worked as a Research Fellow in Computer Science at the University of Dundee, designing multi-viewpoint camera systems. He received his Ph.D. in Computer Vision and Graphics from the University of Surrey and a BSc (Honours) in Computer Science and Mathematics from the University of Edinburgh. Dr. Miller has also been a visiting researcher at the Max Planck Institute for Computer Science, and worked for three years as a software developer.
<!--Speaker: Jacob Eisenstein, Georgia Institute of Technology
Date/Time: 1-2pm July 23, 2013
Location: 1.33a Jack Cole, University of St Andrews-->
Abstract:
Large text document collections are increasingly important in a variety of domains; examples of such collections include news articles, streaming social media, scientific research papers, and digitized literary documents. Existing methods for searching and exploring these collections focus on surface-level matches to user queries, ignoring higher-level thematic structure. Probabilistic topic models are a machine learning technique for finding themes that recur across a corpus, but there has been little work on how they can support end users in exploratory analysis. In this talk I will survey the topic modeling literature and describe our ongoing work on using topic models to support digital humanities research. In the second half of the talk, I will describe TopicViz, an interactive environment that combines traditional search and citation-graph exploration with a dust-and-magnet layout that links documents to the latent themes discovered by the topic model.
This work is in collaboration with:
Polo Chau, Jaegul Choo, Niki Kittur, Chang-Hyun Lee, Lauren Klein, Jarek Rossignac, Haesun Park, Eric P. Xing, and Tina Zhou
Bio:
Jacob Eisenstein is an Assistant Professor in the School of Interactive Computing at Georgia Tech. He works on statistical natural language processing, focusing on social media analysis, discourse, and latent variable models. Jacob was a postdoctoral researcher at Carnegie Mellon and the University of Illinois. He completed his Ph.D. at MIT in 2008, winning the George M. Sprowls dissertation award.
<!--Speaker: Olivier Penacchio, University of St Andrews
Date/Time: 1-2pm June 11, 2013
Location: 1.33a Jack Cole, University of St Andrews-->
Abstract:
Perceiving such basic visual features as black, greys and white may sound simple. However, the luminance we perceive, also called brightness, does not match the luminance as physically measured. Instead, the perceived intensity of an area is modulated by the luminance of surrounding areas. This phenomenon is known as brightness induction and provides a striking demonstration that visual perception cannot be considered a simple pixel-wise sampling of the environment.
The talk will start with an overview of the classical examples of brightness induction and a quick look at the different theories underlying this phenomenon. We will next introduce a neurodynamical model of brightness induction, recently published*. This model is based on the architecture of the primary visual cortex and successfully accounts for well-known psychophysical effects both in static and dynamic contexts. It suggests that a common simple mechanism may underlie different fundamental processes of visual perception such as saliency and brightness perception. Finally, we will briefly outline potential applications in the arena of computer vision and medical imaging.
* Penacchio O, Otazu X, Dempere-Marco L (2013) A Neurodynamical Model of Brightness Induction in V1. PLoS ONE 8(5): e64086. doi:10.1371/journal.pone.0064086
Bio:
Olivier Penacchio is a postdoctoral researcher in the School of Psychology and Neuroscience at the University of St Andrews. He is currently working on the perception and evolution of counter-shading camouflage. His background is in pure mathematics, specifically algebraic geometry, and he is a recent convert to the area of vision research.
<!--Speaker: Dr Kristian Wasen, University of Gothenburg
Date/Time: 12-1pm Wed May 22, 2013
Location: 1.33a Jack Cole, University of St Andrews (directions)-->
Abstract:
Dr Kristian Wasen will be presenting his recently published paper, co-authored with Meaghan Brierly. Emerging multidimensional imaging technologies (3D/4D/“5D”) open new ground for exploring visual worlds and rendering new image-based knowledge, especially in areas related to medicine and science. 3D imaging is defined as three visual dimensions. 4D imaging is three visual dimensions plus time. 4D imaging can also be combined with functional transitions (i.e., following a radioactive tracer isotope through the body in positron emission tomography). 4D imaging plus functionality is defined as “5D” imaging. We propose the idea of “visual touch”, a conceptual middle ground between touch and vision, as a basis for future explorations of contemporary institutional standards of image-based work. “Visual touch” is both the process of reconciling the senses (human and artificial) and the end result of this union of the senses. We conclude that while new multidimensional imaging technology emphasises vision, new forms of image-based work using visual materials cannot solely be classified as “visual”.
Bio:
Dr. Kristian Wasen is working as a postdoctoral fellow at the University of Gothenburg in Sweden. He holds two bachelor degrees in Social Psychology and Industrial Management from the University of Skovde, Sweden, and he earned a master’s degree and a Ph.D. degree in Business Administration from the School of Business, Economics and Law in Gothenburg. His current research interests include health care robotics, technology’s social and professional impact, organisational design, evaluation of usability, user acceptance, and efficient human-robot interaction.
<!--Speaker: Vinodh Rajan, University of St Andrews
Location: 1.33a Jack Cole, University of St Andrews-->
Abstract:
Human handwriting is a process that often generates variable output. Scripts generally begin with characters possessing consistent shapes, but the effects of variations tend to accumulate and modify the scripts’ appearance over time. The talk will start with a brief overview of scripts and related concepts. The Brahmic family of scripts will then be addressed, in particular the variations that led to their development. This will be followed by a general introduction to handwriting modeling methods, along with techniques such as trajectory reconstruction, stroke segmentation and stroke modeling. There will then be a discussion of methods and techniques to model and analyze the development of scripts, with prospective applications, and lastly there will be a demo of what has been achieved so far.
Bio:
Vinodh Rajan is a PhD student based within the School of Computing, here at the University of St. Andrews. Read more about Vinodh here.
<!--Speaker: Kyle Montague, University of Dundee
Date/Time: 1-2pm May 14, 2013
Location: 1.33a Jack Cole, University of St Andrews-->
Abstract:
Touchscreens are ever-present in technologies today. The large featureless sensors are rapidly replacing the physical keys and buttons on a wide array of digital technologies, most commonly on mobile devices. Gaining popularity across all demographics and endorsed for the design flexibility of their soft interfaces and their rich gestural interactions, the touchscreen currently plays a pivotal role in digital technologies. However, just as touchscreens have enabled many to engage with digital technologies, their barriers to access are excluding many others with visual and motor impairments. Contemporary techniques for addressing these accessibility issues fail to consider the variable nature of abilities between people, and the ever-changing characteristics of an individual’s impairment. User models for personalisation are often constructed from stereotypical generalisations of the similarities of people with disabilities, neglecting to recognise the unique characteristics of the individuals themselves. Existing strategies for measuring abilities and performance require users to complete exhaustive training exercises that disrupt the intended interactions, and result only in descriptions of a user’s performance for that particular instance.
This research aimed to develop novel techniques to support the continuous measurement of individual users’ needs and abilities through natural touchscreen device interactions. The goal was to create detailed interaction models for individual users, in order to understand the short- and long-term variances of their abilities and characteristics, resulting in the development of interface adaptations that better support the interaction needs of people with visual and motor impairments.
Bio:
Kyle Montague is a PhD student based within the School of Computing at the University of Dundee. Kyle works as part of the Social Inclusion through the Digital Economy (SiDE) research hub. He is investigating the application of shared user models and adaptive interfaces to improve the accessibility of digital touchscreen technologies for people with vision and motor impairments.
His doctoral studies explore novel methods of collecting and aggregating user interaction data from multiple applications and devices, creating domain-independent user models to better inform adaptive systems of individuals’ needs and abilities.
Alongside his research, Kyle works for iGiveADamn, a small digital design company he set up with a fellow graduate, Jamie Shek. Past projects have included the iGiveADamn Connect platform for charities, Scottish Universities Sports, and the Blood Donation mobile apps. Prior to this he completed an undergraduate degree in Applied Computing at the University of Dundee.
<!--Speaker: Patrick Olivier, Culture Lab, Newcastle University
Date/Time: 1-2pm May 7, 2013
Location: 1.33a Jack Cole, University of St Andrews-->
Abstract:
The purpose of this talk will be to introduce Culture Lab’s past and current interaction design research into digital tabletops. The talk will span our interaction techniques and technologies research (including pen-based interaction, authentication and actuated tangibles) but also application domains (education, play therapy and creative practice) by reference to four Culture Lab tabletop studies: (1) Digital Mysteries (Ahmed Kharrufa’s classroom-based higher order thinking skills application); (2) Waves (Jon Hook’s expressive performance environment for VJs); (3) Magic Land (Olga Pykhtina’s tabletop play therapy tool); and (4) StoryCrate (Tom Bartindale’s collaborative TV production tool). I’ll focus on a number of specific challenges for digital tabletop research, including selection of appropriate design approaches, the role and character of evaluation, the importance of appropriate “in the wild” settings, and avoiding the trap of simple remediation when working in multidisciplinary teams.
Bio:
Patrick Olivier is a Professor of Human-Computer Interaction in the School of Computing Science at Newcastle University. He leads the Digital Interaction Group in Culture Lab, Newcastle’s centre for interdisciplinary practice-based research in digital technologies. The group’s main interest is interaction design for everyday life settings, and Patrick is particularly interested in the application of pervasive computing to education, creative practice, and health and wellbeing, as well as the development of new technologies for interaction (such as novel sensing platforms and interaction techniques).