The breast is not just a protruding gland on the front of the female thorax: behind the biology lies an intricate symbolism that has taken on varied and often contradictory meanings. We begin our journey with prehistoric artifacts that revered the breast as the ultimate symbol of life; we then turn to the rich iconographical tradition centered on the so-called Virgo Lactans, in which the breast became a metaphor of nourishment for the entire Christian community. Next, we look at how artists eroticized the breast in portraits of fifteenth-century French courtesans, and how Enlightenment philosophers and revolutionary events transformed it into a symbol of the national community. Lastly, we analyze how contemporary society has medicalized the breast through cosmetic surgery and discourses around breast cancer, and has objectified it by making the breast a constant presence in advertisements and on magazine covers. Across twenty-five centuries of representations, I will discuss how the breast has been coded as both "good" and "bad," sacred and erotic, life-giving and life-destroying.
BIO: Benedetta Gennaro is currently a researcher in the Institut für Soziologie at TU Darmstadt. She was an acting professor of Sociology at Goethe Universität in Frankfurt and since 2011 she has been affiliated with the Cornelia Goethe Centrum for Women’s and Gender Studies. She received an M.A. in Mass Communications from Miami University (Oxford, OH) and an M.A. and Ph.D. in Italian Studies from Brown University (Providence, RI). Her areas of research include gender and sexuality studies, cultural studies and visual methodologies, women and political violence, masculinity studies, and feminist methodologies.
How is it that biological systems can be so imprecise, so ad hoc, and so inefficient, yet accomplish (seemingly) simple tasks that still elude state-of-the-art artificial systems? In this context, I will introduce some of the themes central to CMU's new BrainHub Initiative by discussing: (1) The complexity and challenges of studying the mind and brain; (2) How the study of the mind and brain may benefit from considering contemporary artificial systems; (3) Why studying the mind and brain might be interesting (and possibly useful) to computer scientists.
Michael J. Tarr is the Head of the Department of Psychology in Carnegie Mellon University's Dietrich College of Humanities and Social Sciences and the Chair of Carnegie Mellon's BrainHub Steering Committee. He studies the neural, cognitive, and computational mechanisms underlying visual perception and cognition. He is particularly interested in object and face recognition, how we become visual experts for non-face object domains, and how visual perception interacts with our other senses, with cognition, and with social and affective processing. Much of his work is predicated on the idea that models of artificial and biological vision have something (meaningful) in common and that both disciplines will benefit from greater interaction. From 2009 to 2013, he was the co-director of the Center for the Neural Basis of Cognition (CNBC) at Carnegie Mellon. Before joining the CMU faculty in 2009, he spent 14 years on the faculty of Brown University and 6 years on the faculty of Yale University. He received his PhD from M.I.T. in 1989 and his BA from Cornell University in 1984. The National Academy of Sciences recognized Tarr with the Troland Award in 2003, given annually to honor unusual achievement and to further empirical research in psychology. The American Psychological Association recognized him with the APA Early Career Award in 1997. He is a fellow of the American Psychological Association and the Society of Experimental Psychologists.
In this talk I will give an overview of work I have done over the years exploring physically based simulation of contact, deformation, and articulated structures, where trade-offs can be made between computational speed and physical fidelity. I will also discuss examples that mix data-driven and physically based approaches in animation and control.
Paul Kry is an associate professor in the School of Computer Science at McGill University. He has a BMath from the University of Waterloo, and an MSc and PhD from the University of British Columbia. His research focuses on physically based simulation, motion capture, and control of character animation.
Everyone in visual psychology seems to know what Biological Motion is. Yet, it is not easy to come up with a definition that is specific enough to justify a distinct label, but is also general enough to include the many different experiments to which the term has been applied in the past. I will present a number of tasks, stimuli, and experiments, including some of my own work, to demonstrate the diversity and the appeal of the field of biological motion perception. In trying to come up with a definition of the term, I will particularly focus on a type of motion that has been considered “non-biological” in some contexts, even though it might contain -- as more recent work shows -- one of the most important visual invariants used by the visual system to distinguish animate from inanimate motion.
We present an approach to creating 3D models of objects depicted in Web images, even when each object may only be shown in a single image. Our approach uses a comparatively small collection of existing 3D models to guide the reconstruction process. These existing shapes are used to derive information about shape structure. Our guiding idea is to jointly analyze the images and the available 3D models. Joint analysis of all images along with the available shapes regularizes the formulated optimization problems, stabilizes estimation of camera parameters and construction of dense pixel-level correspondences, and leads to reasonable reproduction of object appearance in the absence of traditional multi-view cues. Joint work with Qixing Huang and Hai Wang.
Vladlen Koltun is the director of the Visual Computing Lab at Intel Labs. He works in computer vision, computer graphics, and machine learning. He received a PhD in 2002 for new results in theoretical computational geometry, then spent three years at UC Berkeley as a postdoc in the theory group. He joined the Stanford Computer Science department in 2005 as a full-time faculty member working in theoretical computer science, switched to applied research in visual computing in 2007, and joined Intel Labs in 2015.
Image-based rendering was introduced in the 1990s as an alternative approach to photorealistic rendering. Its key idea is to synthesize novel renderings by re-projecting pixels from nearby views. The basic approach works well for many scenes but breaks down if the scene contains “non-standard” elements such as reflective surfaces. In this talk, I will first show how we can extend image-based rendering to handle scenes with reflections. I will then discuss a novel gradient-based technique for image-based rendering that can intrinsically handle scenes with reflections.
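The re-projection at the heart of image-based rendering can be sketched in a few lines: a pixel with known depth is back-projected to a 3D point in the source camera's frame, transformed into the novel view's frame, and projected back onto the image plane. This is a minimal, generic sketch (the function name, the shared-intrinsics assumption, and the example values are mine, not from the talk):

```python
import numpy as np

def reproject_pixel(u, v, depth, K, T_src_to_tgt):
    """Re-project one source-view pixel into a target (novel) view.

    u, v         : pixel coordinates in the source image
    depth        : depth of that pixel along the source camera's z-axis
    K            : 3x3 camera intrinsics (assumed shared by both views)
    T_src_to_tgt : 4x4 rigid transform from source to target camera frame
    """
    # Back-project the pixel to a 3D point in the source camera frame.
    p = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Move the point into the target camera frame.
    p_tgt = (T_src_to_tgt @ np.append(p, 1.0))[:3]
    # Project onto the target image plane and dehomogenize.
    q = K @ p_tgt
    return q[:2] / q[2]

# Sanity check: under the identity transform a pixel maps to itself.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
print(reproject_pixel(100.0, 50.0, 2.0, K, np.eye(4)))  # → [100.  50.]
```

Reflective surfaces break exactly this step: the pixel's apparent depth no longer corresponds to a single scene point, which is why the basic approach needs the extensions discussed in the talk.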
Driven by the increasing demand for photorealistic computer-generated images, graphics is currently undergoing a substantial transformation to physics-based approaches which accurately reproduce the interaction of light and matter. Progress on both sides of this transformation -- physical models and simulation techniques -- has been steady but mostly independent of one another. When combined, the resulting methods are in many cases impracticably slow and require unrealistic workarounds to process even simple everyday scenes. My research lies at the interface of these two research fields; my goal is to break down the barriers between simulation techniques and the underlying physical models, and to use the resulting insights to develop realistic methods that remain efficient over a wide range of inputs.
I will cover three areas of recent work: the first involves volumetric modeling approaches to create realistic images of woven and knitted cloth. Next, I will discuss reflectance models for glitter/sparkle effects and arbitrarily layered materials that are specially designed to allow for efficient simulations. In the last part of the talk, I will give an overview of Manifold Exploration, a Markov Chain Monte Carlo technique that is able to reason about the geometric structure of light paths in high dimensional configuration spaces defined by the underlying physical models, and which uses this information to compute images more efficiently.
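To make the Markov Chain Monte Carlo idea concrete, here is the plain random-walk Metropolis sampler that specialized rendering methods such as Manifold Exploration build upon; the specialized methods replace the blind Gaussian proposal below with geometry-aware moves through the space of light paths. This is a generic textbook sketch, not the talk's algorithm:

```python
import numpy as np

def metropolis(logp, x0, steps=20000, scale=0.5, seed=1):
    """Random-walk Metropolis: draw correlated samples from a density
    known only up to a constant, via its log-density `logp`."""
    rng = np.random.default_rng(seed)
    x, lp = x0, logp(x0)
    samples = []
    for _ in range(steps):
        cand = x + scale * rng.standard_normal()  # blind Gaussian proposal
        lc = logp(cand)
        # Accept with probability min(1, p(cand) / p(x)).
        if np.log(rng.random()) < lc - lp:
            x, lp = cand, lc
        samples.append(x)
    return np.array(samples)

# Sample a standard normal; empirical mean/variance approach 0 and 1.
s = metropolis(lambda x: -0.5 * x * x, x0=0.0)
print(s.mean(), s.var())
```

In rendering, the state is an entire light path rather than a scalar, and the efficiency of the method hinges on how well the proposal respects the geometric structure of the path space.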
SHORT BIO: Wenzel Jakob is a Marie Curie Postdoctoral Fellow at ETH Zürich in the Institute for Visual Computing. He obtained his Ph.D. in 2013 under the supervision of Dr. Steve Marschner at Cornell University and conducted his undergraduate studies at the Karlsruhe Institute of Technology. Wenzel's experience includes research and development work at Disney Research Zurich and Weta Digital, and he is the lead developer of Mitsuba, a research-oriented open source rendering system that has become a popular research platform in rendering and appearance modeling.
I will present selected research projects of the Photogrammetry and Remote Sensing Group at ETH, including (i) 3D scene flow estimation for stereo video captured from a car; (ii) extraction of road networks from aerial images; and (iii) 3D reconstruction from large, unstructured (e.g. crowd-sourced) image collections.
Konrad Schindler received the Diplomingenieur (M.Tech.) degree in photogrammetry from Vienna University of Technology, Austria, in 1999, and the Ph.D. degree from Graz University of Technology, Austria, in 2003. He has worked as a photogrammetric engineer in private industry and held researcher positions at the Institute of Computer Graphics and Vision at Graz University of Technology, the Digital Perception Lab at Monash University, and the Computer Vision Lab at ETH Zurich. In 2009, he became Assistant Professor of Image Understanding at TU Darmstadt. Since 2010, he has been a tenured Professor of Photogrammetry and Remote Sensing at ETH Zurich. His research interests lie in the field of computer vision, photogrammetry, and remote sensing, with a focus on image understanding and 3D reconstruction. He has received several awards, including the U. V. Helava Award for the Best Paper in the ISPRS Journal of Photogrammetry and Remote Sensing 2008-2011 (with A. Ess, B. Leibe, and L. Van Gool), and an honorable mention for the Marr Prize at ICCV 2013 (with C. Vogel and S. Roth).
The growing scale of image and video datasets in vision makes labeling and annotating such datasets for the training of recognition models difficult and time-consuming. Further, richer models often require richer labelings of the data, which are typically even more difficult to obtain. In this talk I will focus on two models that make use of different forms of supervision for two different vision tasks.
In the first part of this talk I will focus on object detection. The appearance of an object changes profoundly with pose, camera view, and interactions of the object with other objects in the scene. This makes it challenging to learn detectors based on object-level labels alone (e.g., “car”). We postulate that a richer set of labelings for an object, at different levels of granularity, including finer-grained sub-categories that are consistent in appearance and view, as well as higher-order composites (contextual groupings of objects consistent in their spatial layout and appearance), can significantly alleviate these problems. However, obtaining such a rich set of annotations, including annotation of an exponentially growing set of object groupings, is infeasible. To this end, we propose a weakly-supervised framework for object detection in which we discover the subcategories and composites automatically, with only traditional object-level category labels as input.
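A common building block for discovering subcategories without extra annotation is to cluster the appearance features of all instances that share one object-level label. The sketch below uses plain k-means as a stand-in; the function name, the deterministic initialization, and the toy data are my own illustration, not the talk's actual method:

```python
import numpy as np

def discover_subcategories(features, k, iters=50):
    """Cluster appearance features of instances sharing one category
    label (e.g. "car") into k visual subcategories via plain k-means."""
    # Spread initial centers evenly across the data (simple, deterministic).
    idx = np.linspace(0, len(features) - 1, k).astype(int)
    centers = features[idx].copy()
    for _ in range(iters):
        # Assign each instance to its nearest subcategory center.
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Re-estimate each non-empty center as the mean of its members.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels

# Toy data: two well-separated appearance clusters within one category.
feats = np.vstack([np.zeros((5, 2)), 10.0 * np.ones((5, 2))])
print(discover_subcategories(feats, 2))
```

In practice the features would come from a detector's appearance descriptors, and per-cluster detectors would then be trained on the discovered subcategories.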
In the second part of the talk I will focus on a framework for large-scale image-set and video summarization. Starting from the intuition that the characteristics of the two media types are different but complementary, we develop a fast and easily parallelizable approach for creating not only video summaries but also novel structural summaries of events in the form of storyline graphs. A storyline graph illustrates the various events or activities associated with a topic in the form of a branching directed network. Video summarization is achieved by diversity ranking on the similarity graphs between images and video frames, thereby treating consumer images as a form of weak supervision. The reconstruction of storyline graphs, on the other hand, is formulated as inference of sparse time-varying directed graphs from a set of photo streams with the assistance of consumer videos.
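The diversity-ranking idea can be illustrated with a simple greedy selection over a similarity graph: repeatedly pick the frame that is most representative while penalizing similarity to frames already chosen. This is a generic max-marginal-relevance-style heuristic of my own for illustration, not the speaker's exact formulation:

```python
import numpy as np

def greedy_diverse_summary(sim, k):
    """Pick k frames that are representative yet mutually diverse.

    sim : n x n symmetric similarity matrix between candidate frames
    k   : desired summary length
    Returns the indices of the selected frames.
    """
    # Representativeness: average similarity to all frames.
    score = sim.mean(axis=1)
    chosen = [int(np.argmax(score))]
    while len(chosen) < k:
        best, best_val = None, -np.inf
        for i in range(sim.shape[0]):
            if i in chosen:
                continue
            # Reward representativeness, penalize redundancy with the summary.
            val = score[i] - max(sim[i, j] for j in chosen)
            if val > best_val:
                best, best_val = i, val
        chosen.append(best)
    return chosen

# Toy example: frames 0-1 are near-duplicates, as are frames 2-3.
sim = np.array([[1.0, 0.9, 0.1, 0.1],
                [0.9, 1.0, 0.1, 0.1],
                [0.1, 0.1, 1.0, 0.9],
                [0.1, 0.1, 0.9, 1.0]])
print(greedy_diverse_summary(sim, 2))  # → [0, 2]
```

The selection skips frame 1 (redundant with frame 0) in favor of a frame from the second group, which is the behavior a diversity-ranking summarizer is after.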
Time permitting, I will also talk about a few other recent project highlights.
Abstract: I will present a general framework for modelling and recovering 3D shape and pose using subdivision surfaces. To demonstrate this framework's generality, I will show how to recover both a personalized rigged hand model from a sequence of depth images and a blend-shape model of dolphin pose from a collection of 2D dolphin images. The core requirement is the formulation of a generative model in which the control vertices of a smooth subdivision surface are parameterized (e.g., with joint angles or blend weights) by a differentiable deformation function. The energy function that falls out of measuring the deviation between the surface and the observed data is also differentiable and can be minimized through standard, albeit tricky, gradient-based non-linear optimization from a reasonable initial guess. The latter can often be obtained using machine learning methods when manual intervention is undesirable. Satisfyingly, the "tricks" involved in the former are elegant and widen the applicability of these methods.
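The recipe above, a differentiable deformation function plus gradient-based minimization of a deviation energy from a reasonable initial guess, can be sketched on a toy problem: fitting the joint angles of a two-link planar "limb" to an observed endpoint. Everything here (the model, numerical gradients, step size) is my own illustrative stand-in for the subdivision-surface machinery in the talk:

```python
import numpy as np

def endpoint(theta):
    """Endpoint of a two-link planar limb with unit-length bones,
    parameterized by joint angles theta = (a, b): a toy stand-in for
    a differentiable deformation function."""
    a, b = theta
    return np.array([np.cos(a) + np.cos(a + b),
                     np.sin(a) + np.sin(a + b)])

def fit(target, theta0, lr=0.1, steps=2000, eps=1e-6):
    """Minimize the squared deviation between model and observation by
    plain gradient descent (central-difference gradients for brevity)."""
    theta = np.array(theta0, dtype=float)
    for _ in range(steps):
        grad = np.zeros_like(theta)
        for i in range(len(theta)):
            d = np.zeros_like(theta)
            d[i] = eps
            e_plus = np.sum((endpoint(theta + d) - target) ** 2)
            e_minus = np.sum((endpoint(theta - d) - target) ** 2)
            grad[i] = (e_plus - e_minus) / (2 * eps)
        theta -= lr * grad
    return theta

target = endpoint(np.array([0.3, 0.7]))  # synthetic "observation"
theta = fit(target, theta0=[0.0, 1.0])   # a reasonable initial guess
print(np.allclose(endpoint(theta), target, atol=1e-3))  # → True
```

As in the talk, the quality of the initial guess matters: start too far away and the descent can land in a different (e.g. mirrored) pose, which is where learned initialization comes in.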