Abstract: In this talk I will discuss how deep learning can be applied to character animation. I will present a unified framework, based on deep convolutional neural networks, that supports both motion synthesis and motion editing. Applications of this framework include fixing corrupted motion data such as that captured by the Kinect, synthesizing character motion from high-level parameters such as a character's trajectory, editing motion via arbitrary cost functions, and transferring style between two animation clips.
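As a rough illustration of the kind of model the abstract describes, the sketch below applies a single 1D temporal convolution to motion data laid out as frames by joint channels. All shapes, sizes, and the layer itself are hypothetical and chosen only to show the idea of convolving over time across a motion clip; they are not taken from the talk.

```python
import numpy as np

def conv1d(x, w):
    """Convolve motion data x (frames, channels) with a filter bank
    w (num_filters, width, channels), using valid padding.

    Illustrative only: real motion models stack such layers with
    nonlinearities and pooling.
    """
    frames, channels = x.shape
    num_filters, width, _ = w.shape
    out = np.zeros((frames - width + 1, num_filters))
    for t in range(frames - width + 1):
        window = x[t:t + width]  # (width, channels) slice of the clip
        # Contract each filter against the temporal window.
        out[t] = np.tensordot(w, window, axes=([1, 2], [0, 1]))
    return out

rng = np.random.default_rng(0)
motion = rng.standard_normal((240, 66))      # 240 frames, 22 joints x 3D
filters = rng.standard_normal((64, 25, 66))  # 64 filters, 25-frame window
hidden = conv1d(motion, filters)
print(hidden.shape)  # (216, 64)
```

Because the convolution slides over the time axis, the same learned filters apply to clips of any length, which is one reason convolutional architectures suit motion data.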
Biography: Daniel Holden is a PhD student at the University of Edinburgh studying how deep learning and data-driven artistic tools can be used to save time in the production of high-quality character animation. Outside of research he maintains several open-source C projects and has a wide variety of interests including the theory of computation, game development, and writing short fiction.