News

SACHI Seminar: The design of digital technologies to support transitional events in the human lifespan


Event details

When: 14th February 2017 14:00 - 15:00
Where: Cole 1.33a
Speaker: Professor Wendy Moncur, FRSA

    Title:  The design of digital technologies to support transitional events in the human lifespan

    Abstract:  This talk will focus on (i) qualitative research undertaken to understand how digital technologies are being used during transitional periods across the human lifespan, such as becoming an adult, romantic breakup, and end of life, and (ii) the opportunities for technology design that have emerged as a result. Areas of focus include presentation of self online, group social norms, and the problematic nature of ‘ownership’ of digital materials.

    Biography – Professor Wendy Moncur, FRSA:  I hold an Interdisciplinary Chair in Digital Living at the University of Dundee, where I work across Duncan of Jordanstone College of Art & Design and the School of Nursing and Health Sciences. I am also a Visiting Scholar at the University of Technology Sydney, Australia, and an Associate of the Centre for Death and Society (University of Bath).

    The work of my group, Living Digital (www.livingdigital.ac.uk), is grounded in Human-Computer Interaction and focuses on human experiences enacted in a digital age – for example, becoming an adult, becoming a parent, relationship breakdown, and the end of life.

    I have been involved in grants totalling £2.7 million since 2011, through an EPSRC Personal Fellowship and as a Principal Investigator/Co-Investigator. Full details of my publications can be found at http://bit.ly/1kQx2zH. My next large research project, ‘TAPESTRY’, is funded under the EPSRC TIPS programme, and explores normative online behaviour in social groups.

    SACHI Seminar: The Collaborative Design of Tangible Interactions in Museums


    Event details

    When: 31st January 2017 14:00 - 15:00
    Where: Cole 1.33a
    Speaker: Dr Luigina Ciolfi

      Title:  The Collaborative Design of Tangible Interactions in Museums

      Abstract:  Interactive technology for cultural heritage has long been a subject of study in Human-Computer Interaction. Findings from a number of studies suggest, however, that technology can sometimes distance visitors from heritage holdings rather than enabling people to establish deeper connections to what they see. Furthermore, the introduction of innovative interactive installations in museums is often seen as an interesting novelty but seldom leads to substantive change in how a museum approaches visitor engagement. This talk will discuss work on the EU project “meSch” (Material EncounterS with Digital Cultural Heritage), aimed at creating a do-it-yourself platform for cultural heritage professionals to design interactive tangible computing installations that bridge the gap between digital content and the materiality of museum objects and exhibits. The project has adopted a collaborative design approach throughout, involving cultural heritage professionals, designers, developers and social scientists. The talk will feature key examples of how collaboration unfolded and relevant lessons learned, particularly regarding the shared envisioning of tangible interaction concepts at a variety of heritage sites, including archaeology and art museums, hands-on exploration centres and outdoor historical sites.

      Biography:  Dr. Luigina Ciolfi is Reader in Communication at Sheffield Hallam University. She holds a Laurea (Univ. of Siena, Italy) and a PhD (Univ. of Limerick, Ireland) in Human-Computer Interaction. Her research focuses on understanding and designing for human situated practices mediated by technology in both work and leisure settings, particularly focusing on participation and collaboration in design. She has worked on numerous international research projects on heritage technologies, nomadic work and interaction in public spaces. She is the author of over 80 peer-reviewed publications, has been an invited speaker in ten countries, and has advised on research policy around digital technologies and cultural heritage for several European countries. Dr. Ciolfi serves on a number of scientific committees for international conferences and journals, including ACM CHI, ACM CSCW, ACM GROUP, ECSCW, COOP and the CSCW Journal. She is a member of the EUSSET (the European Society for Socially Embedded Technologies) and of the ACM CSCW Steering Groups. Dr. Ciolfi is a senior member of the ACM. Full information on her work can be found at http://luiginaciolfi.com

      Professor Roderick Murray-Smith: University of Glasgow


      Event details

      When: 18th November 2016 13:00 - 14:00
      Where: Cole 1.33b
      Speaker: Roderick Murray-Smith

        Title: Control Theoretical Models of Pointing

        Speaker: Rod Murray-Smith, University of Glasgow
        http://www.dcs.gla.ac.uk/~rod/

        Abstract: I will talk about two topics:

        1. (Joint work with Jörg Müller & Antti Oulasvirta) I will present an empirical comparison of four models from manual control theory on their ability to model targeting behaviour by human users using a mouse: McRuer’s Crossover, Costello’s Surge, second-order lag (2OL), and the Bang-bang model. Such dynamic models are generative, estimating not only movement time, but also pointer position, velocity, and acceleration on a moment-to-moment basis. We describe an experimental framework for acquiring pointing actions and automatically fitting the parameters of mathematical models to the empirical data. We present the use of time-series, phase space and Hooke plot visualisations of the experimental data, to gain insight into human pointing dynamics. We find that the identified control models can generate a range of dynamic behaviours that capture aspects of human pointing behaviour to varying degrees. Conditions with a low index of difficulty (ID) showed poorer fit because their unconstrained nature leads naturally to more dynamic variability. We report on characteristics of human surge behaviour in pointing. We describe trade-offs among the models. We conclude that control theory offers a promising complement to Fitts’-law-based approaches in HCI, with models providing representations and predictions of human pointing dynamics which can improve our understanding of pointing and inform design.
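        The second-order lag (2OL) named above can be illustrated with a minimal simulation: the pointer accelerates in proportion to its remaining distance to the target, opposed by a damping term on velocity, so the model generates position, velocity and acceleration traces on a moment-to-moment basis. The sketch below uses illustrative parameter values, not ones fitted to experimental data:

```python
import numpy as np

def simulate_2ol(target, k=30.0, d=1.0, dt=0.002, duration=1.5):
    """Second-order lag (2OL) pointing model: a damped spring pulling the
    pointer toward the target. k (stiffness) and d (damping ratio) are
    illustrative values, not fitted parameters."""
    n = int(duration / dt)
    x = np.zeros(n)  # pointer position
    v = np.zeros(n)  # velocity
    a = np.zeros(n)  # acceleration
    for i in range(1, n):
        # acceleration from remaining distance, minus velocity damping
        a[i] = k * (target - x[i - 1]) - 2.0 * d * np.sqrt(k) * v[i - 1]
        v[i] = v[i - 1] + dt * a[i]          # semi-implicit Euler step
        x[i] = x[i - 1] + dt * v[i]
    return x, v, a

# One simulated pointing movement toward a target 1.0 units away
x, v, a = simulate_2ol(target=1.0)
```

        With d = 1 the system is critically damped, so the simulated pointer approaches the target smoothly without overshoot; the full position/velocity/acceleration traces are what make such models generative, unlike a pure movement-time prediction.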

        2. Casual control. How and why we can design systems to work at a range of levels of engagement.

        Biography: Roderick Murray-Smith is a Professor of Computing Science at Glasgow University, in the “Inference, Dynamics and Interaction” research group, and the Head of the Information, Data and Analysis Section. He works in the overlap between machine learning, interaction design and control theory. In recent years his research has included multimodal sensor-based interaction with mobile devices, mobile spatial interaction, Brain-Computer Interaction and nonparametric machine learning. Prior to this he held positions at the Hamilton Institute, NUIM, the Technical University of Denmark, M.I.T., and Daimler-Benz Research, Berlin, and was the Director of SICSA, the Scottish Informatics and Computing Science Alliance. He works closely with the mobile phone industry, having worked with Nokia, Samsung, FT/Orange, Microsoft and Bang & Olufsen. He was a member of Nokia’s Scientific Advisory Board and is a member of the Scientific Advisory Board for the Finnish Centre of Excellence in Computational Inference Research. He has co-authored three edited volumes, 22 journal papers, 16 book chapters, and 88 conference papers.

        Dr Rebecca Fiebrink: Goldsmiths University of London


        Event details

        When: 1st November 2016 14:00 - 15:00
        Where: Cole 1.33b
        Speaker: Rebecca Fiebrink

          Title: Designing Real-time Interactions Using Machine Learning

          Abstract: Supervised learning algorithms can be understood not only as a set of techniques for building accurate models of data, but also as design tools that can enable rapid prototyping, iterative refinement, and embodied engagement — all activities that are crucial in the design of new musical instruments and other embodied interactions. Realising the creative potential of these algorithms requires a rethinking of the interfaces through which people provide data and build models, providing for tight interaction-feedback loops and efficient mechanisms for people to steer and explore algorithm behaviours.

          In this talk, I will discuss my research on better enabling composers, musicians, and developers to employ supervised learning in the design of new real-time systems. I will show a live demo of tools that I have created for this purpose, centred on the Wekinator software toolkit for interactive machine learning. I’ll discuss some of the outcomes from 7 years of creating machine-learning-based tools and observing people using these tools in creative contexts. These outcomes include a better understanding of how machine learning can be used as a tool for design by end users and developers, and of how using machine learning as a design tool differs from more conventional application contexts.
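          As a toy illustration of this interactive-machine-learning workflow (a sketch of the idea, not the actual Wekinator implementation), the snippet below lets a user record a few input-output example pairs and then maps new sensor inputs to control outputs in real time; the learner is a simple distance-weighted nearest-neighbour regressor, and all names and values are hypothetical:

```python
import numpy as np

class MappingModel:
    """Toy Wekinator-style regressor: maps sensor inputs to control outputs
    learned from user-supplied examples, via inverse-distance weighting
    over the k nearest recorded examples."""
    def __init__(self, k=2):
        self.k = k
        self.X, self.y = [], []

    def add_example(self, inputs, outputs):
        # "Record" mode: pair the current sensor frame with a desired output
        self.X.append(np.asarray(inputs, float))
        self.y.append(np.asarray(outputs, float))

    def predict(self, inputs):
        # "Run" mode: interpolate outputs from the closest recorded examples
        X, y = np.stack(self.X), np.stack(self.y)
        d = np.linalg.norm(X - np.asarray(inputs, float), axis=1)
        idx = np.argsort(d)[: self.k]
        w = 1.0 / (d[idx] + 1e-9)
        return (w[:, None] * y[idx]).sum(axis=0) / w.sum()

model = MappingModel()
model.add_example([0.0, 0.0], [0.0])   # e.g. hand low  -> synth filter closed
model.add_example([1.0, 1.0], [1.0])   # e.g. hand high -> synth filter open
mid = model.predict([0.5, 0.5])        # lands between the two outputs
```

          Because retraining here is instantaneous, the designer can stay in the tight record-try-revise loop that the talk argues is crucial for creative uses of supervised learning.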

          Biography: Dr. Rebecca Fiebrink is a Lecturer at Goldsmiths, University of London. Her research focuses on designing new ways for humans to interact with computers in creative practice, including the use of machine learning as a creative tool. Fiebrink is the developer of the Wekinator system for real-time interactive machine learning (with a new version just released in 2015!), a co-creator of the Digital Fauvel platform for interactive musicology, and a Co-I on the £1.6M Horizon 2020-funded RAPID-MIX project on Real-time Adaptive Prototyping for Industrial Design of Multimodal Expressive Technology. She is the creator of a MOOC titled “Machine Learning for Artists and Musicians,” which launched in 2016 on the Kadenze platform. She was previously an Assistant Professor at Princeton University, where she co-directed the Princeton Laptop Orchestra. She has worked with companies including Microsoft Research, Sun Microsystems Research Labs, Imagine Research, and Smule, where she helped to build the #1 iTunes app “I am T-Pain.” She holds a PhD in Computer Science from Princeton University.

          Dr Trevor Hogan: Cork Institute of Technology


          Event details

          When: 14th October 2016 13:00 - 14:00
          Where: Cole 1.33b
          Speaker: Trevor Hogan

            Title: Data and Dasein – A Phenomenology of Human-Data Relations

            Abstract: In contemporary society, data representation is an important and essential part of many aspects of our daily lives. In this talk Trevor will present how his doctoral research has contributed to our understanding of how people experience data and what role representational modality plays in the process of perception and interpretation. This research is grounded in phenomenology – he aligns his theoretical exploration to ideas and concepts from philosophical phenomenology, while also respecting the essence of a phenomenological approach in his choice and application of methods. Alongside offering a rich description of people’s experience of data representation, the key contributions of his research span four areas: theory, methods, design, and empirical findings. From a theoretical perspective, besides describing a phenomenology of human-data relations, he has defined, for the first time, multisensory data representation and established a design space for the study of this class of representation. In relation to methods, he will describe how he deployed two elicitation methods to investigate different aspects of data experience. He blends the Repertory Grid technique with a focus group session and shows how this adaptation can be used to elicit rich, design-relevant insight. He will also introduce the Elicitation Interview technique as a method for gathering detailed and precise accounts of human experience, and describe how this technique can be used to elicit accounts of experience with data. Finally, Trevor will present the findings of a series of empirical studies; these show, for instance, that certain representational modalities cause us to have a heightened awareness of our body, that some are more difficult to interpret than others, that some rely heavily on instinct, and that each of them prompts us to reference external events during the process of interpretation.

            Biography: Trevor Hogan is a Lecturer in Interaction Design at the Cork Institute of Technology, Ireland. The aim of his research is to describe and better understand how embodiment influences and augments people’s experience of data representations. His work is strongly interdisciplinary and may be situated in the field of interaction design, at the intersection of tangible computing, human-computer interaction, information visualization and psychology. At CIT Trevor leads the Human-Data Interaction Group, a multidisciplinary research team whose aim is to explore novel ways of representing data – through and beyond the visual modality. The group is also focused on exploring methods and approaches that broaden the evaluation criteria of data representation – beyond traditional measurements, such as efficiency and effectiveness, towards novel aspects such as experience, use qualities, hedonics, affect, empathy, and enchantment.

            Professor Chris Reed: Centre for Argument Technology, University of Dundee


            Event details

            When: 6th October 2016 14:00 - 15:00
            Where: Purdie Theatre D
            Speaker: Chris Reed

              http://arg.tech

              Title: Argument Technology and Argument Mining

              Abstract: Argument Technology is that part of the overlap between theories of argumentation and reasoning and those of AI where an engineering focus leads to applications and tools that are deployed. One significant step in the past decade has been the development of the Argument Web — the idea that many of these tools can interact using common infrastructure, with benefits to academic, commercial and public user groups. More recently, there has been a move towards linguistic aspects of argument, with NLP techniques facilitating the development of the field of Argument Mining. Drawing on the academic success and commercial uptake of techniques such as opinion mining and sentiment analysis, argument mining seeks to build on systems which use data mining to summarise *what* people think by also explaining *why* they hold the opinions they do.
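              As a deliberately naive sketch of the kind of shallow cue-word feature an argument-mining pipeline might start from (real systems use trained statistical NLP models; the word lists here are purely illustrative), clauses can be labelled as claims or as the premises that explain *why* an opinion is held:

```python
# Cue-word lists are illustrative, not taken from any real system.
PREMISE_CUES = {"because", "since", "given"}
CLAIM_CUES = {"therefore", "thus", "hence", "should"}

def label_clause(clause: str) -> str:
    """Label a clause as 'premise', 'claim', or 'none' from cue words alone,
    a toy stand-in for the statistical models used in argument mining."""
    words = {w.strip(".,").lower() for w in clause.split()}
    if words & PREMISE_CUES:
        return "premise"
    if words & CLAIM_CUES:
        return "claim"
    return "none"

print(label_clause("We should adopt the policy"))   # claim
print(label_clause("because it reduces costs"))     # premise
```

              A real argument-mining system would go further, linking each premise to the claim it supports so that the *why* behind an opinion becomes explicit.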

              Biography: Chris Reed is Professor of Computer Science and Philosophy at the University of Dundee in Scotland, where he heads the Centre for Argument Technology. Chris has been working at the overlap between argumentation theory and artificial intelligence for over twenty years, has won over £5.6m of funding from RCUK, government and commercial sources and has over 150 peer-reviewed papers in the area including five books. He has also been instrumental in the development of the Argument Interchange Format, an international standard for computational work in the area; he is spear-heading the major engineering effort behind the Argument Web; and he was a founding editor of the Journal of Argument & Computation.

              Professor John Lee: Recycled Resources and Learning Communities


              Event details

              When: 22nd August 2016 14:00 - 15:00
              Where: Cole 1.33a
              Speaker: John Lee

                Title:  Recycled Resources and Learning Communities 

                Abstract:  The concept of learning communities can be seen as central in education, especially higher education. Learning is fostered by dialogue, which is implicated in processes of conceptual development and alignment. These rich and complex phenomena include learning through witnessing the learning experiences of others — “vicarious learning” (VL). We propose that VL can be exploited by using rich media (such as video) to capture and share learning experiences. But the potential of rich media is broad and seems curiously under-exploited in education. One can envisage learning communities that create, and build around, learning resources of diverse kinds, using new materials but also integrating many strands of existing materials. In a number of encouraging ways, the available technologies already support this but are often not greatly used, which suggests a challenge for design. How can we make these technologies more usable for our learning communities? A couple of exploratory approaches are discussed, including an informal experiment upon which it is hoped to build further.

                Biography:  John Lee is Professor of Digital Media at the University of Edinburgh. He holds a PhD in Philosophy and Cognitive Science, from Edinburgh.  He works jointly in the School of Informatics and the Edinburgh College of Art, where he directs the long-standing MSc programme in Design and Digital Media. His research interests centre around cognition and communication in design and learning. For some time, he has been investigating the paradigm of “vicarious learning” and the question of how rich media resources can be used more effectively in applications of learning technologies.

                Daniel Holden – Deep Learning for Character Animation


                Event details

                When: 9th August 2016 14:00 - 15:00
                Where: Cole 1.33a
                Speaker: Daniel Holden

                  Abstract: In this talk I will discuss how deep learning can be applied to character animation. I will present a framework based on deep convolutional neural networks that allows for motion synthesis and motion editing in the same unified framework. Applications of this framework include fixing corrupted motion data such as that from the Kinect, synthesis of character motion from high-level parameters such as the trajectory, motion editing via arbitrary cost functions, and style transfer between two animation clips.
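                  A much-simplified analogue of the fixing-corrupted-motion idea can be sketched as follows: a linear basis stands in for the convolutional autoencoder's bottleneck, synthetic low-dimensional data stands in for real motion capture, and a corrupted frame is repaired by finding the point on the learned motion manifold that best matches its observed coordinates. Everything here (dimensions, data, names) is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "motion capture": 6 joint coordinates driven by 2 latent signals,
# standing in for real mocap lying near a low-dimensional manifold.
t = np.linspace(0, 2 * np.pi, 200)
latent = np.stack([np.sin(t), np.cos(t)], axis=1)   # frames x 2
W = rng.normal(size=(2, 6))                          # latent -> joints
motion = latent @ W                                  # frames x 6

# "Train": learn a 2-D linear basis from the motion database, a linear
# analogue of a convolutional autoencoder's bottleneck.
mean = motion.mean(axis=0)
_, _, Vt = np.linalg.svd(motion - mean, full_matrices=False)
basis = Vt[:2]                                       # 2 x 6 decoder weights

def fix_frame(frame, observed):
    """Fill in missing joint coordinates (observed == False) by finding
    the point on the learned motion manifold that best fits the rest."""
    A = basis[:, observed].T                         # decoder, observed rows
    b = frame[observed] - mean[observed]
    z, *_ = np.linalg.lstsq(A, b, rcond=None)        # encode partial frame
    return mean + basis.T @ z                        # decode the full frame

frame = motion[50]
observed = np.array([True, True, True, True, False, False])
fixed = fix_frame(frame, observed)                   # recovers missing joints
```

                  The deep convolutional version in the talk plays the same role but learns a far richer, nonlinear motion manifold from large mocap databases.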

                  Biography: Daniel Holden is a PhD student at Edinburgh University studying how deep learning and data-driven artistic tools can be used to save time in the production of high-quality character animation. Outside of research he maintains several open-source C projects and has a wide variety of interests including theory of computation, game development, and writing short fiction.

                  Mark Dunlop: Designing mobile keyboards with older adults


                  Event details

                  When: 3rd June 2016 13:00 - 14:00
                  Where: Cole 1.33a
                  Speaker: Mark Dunlop

                    Abstract

                    As part of an EPSRC project on text entry for older adults, we ran a series of workshops on the design of new keyboards for older adults. These workshops blew away some of the stereotypes of older adults – ours were well connected, adjusted text style for Twitter vs email vs Facebook, and were more open to new keyboard layouts than our undergraduates. Error awareness was highlighted as a concern, and we developed an Android keyboard that highlights errors and autocorrections. In this talk I’ll review some of our experimental keyboards, the main lessons from our highlighting keyboard, the main lessons in study design for older adults, and future directions.
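                    The highlighting idea can be sketched in a few lines (a toy illustration, not the project's Android keyboard): words that autocorrect silently changed are flagged so the interface can mark them for the writer. The autocorrect dictionary here is illustrative:

```python
# Illustrative autocorrect dictionary (not from the actual keyboard).
AUTOCORRECT = {"teh": "the", "adn": "and", "recieve": "receive"}

def type_sentence(words):
    """Return the corrected sentence plus a parallel list of flags saying
    which words were silently changed, so the UI can highlight them."""
    out, flagged = [], []
    for w in words:
        corrected = AUTOCORRECT.get(w.lower(), w)
        out.append(corrected)
        flagged.append(corrected != w)
    return " ".join(out), flagged

text, flags = type_sentence(["I", "recieve", "teh", "mail"])
# text is the corrected sentence; flags mark the two corrected words
```

                    Surfacing the flags addresses the error-awareness concern raised in the workshops: corrections stop being invisible.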

                    Biography

                    Since 2000, Mark Dunlop has been a senior lecturer in computer science at Strathclyde. His research focuses on the usability of mobile systems, including mobile text entry, visualisation, sensor-driven interaction and the evaluation of mobiles. His first work on mobile text entry was published in 1999, and he has been involved in the organisation of the MobileHCI conference series since its inception in 1998. Recent projects involve text entry for older adults and a mobile crowdsourced braking-alert system for drivers. His teaching is mainly in human-computer interaction (HCI) and mobile/internet programming technologies. Prior to joining Strathclyde, Mark was a senior researcher at Risø Danish National Laboratory and a lecturer at Glasgow University. He completed his PhD in Multimedia Information Retrieval at Glasgow in 1991.

                    Karl Smith: Enabling Client Communications


                    Event details

                    When: 29th February 2016 14:00 - 15:00
                    Where: Honey 110 - John Honey Teaching Lab
                    Speaker: Karl Smith

                      Abstract

                      There is a huge and complex social psychology to managing client engagements effectively. Merely presenting actionable solutions backed by valid data is not enough for clients: they can become lost by even the simplest justifications and proofs, which often focus on factors of little importance to the end users. In this talk I will offer some meeting navigation concepts that will enable people to facilitate client meetings, establish and reach defined outcomes, and establish clear dialogue and interaction methods.
