Face2Face is an approach for real-time facial reenactment of a monocular target video sequence (e.g., a Youtube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track the facial expressions of both the source and the target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination. We demonstrate our method in a live setup in which Youtube videos are reenacted in real time; this live setup has also been shown at SIGGRAPH Emerging Technologies 2016 by Thies et al.
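The abstract above refers to a dense photometric consistency measure for tracking. The following is a minimal, hypothetical sketch of such an analysis-by-synthesis energy in Python/NumPy; the callable `render_face`, the stacked parameter vector, and the prior terms are illustrative assumptions rather than the actual Face2Face implementation.

```python
import numpy as np

def photometric_energy(params, frame, render_face, face_mask,
                       prior_mean, prior_inv_cov, w_prior=1e-3):
    """Dense analysis-by-synthesis energy (illustrative sketch only).

    params       -- stacked model parameters (identity, expression, pose, lighting)
    frame        -- observed RGB frame, float array of shape (H, W, 3)
    render_face  -- hypothetical callable: params -> synthetic RGB image (H, W, 3)
    face_mask    -- boolean (H, W) mask of pixels covered by the rendered face
    prior_mean, prior_inv_cov -- statistical face prior over the parameter vector
                    (in practice a prior would only regularize the model coefficients)
    """
    synth = render_face(params)

    # Photometric term: per-pixel color difference inside the face region.
    diff = (synth - frame)[face_mask]
    e_photo = np.sum(diff ** 2) / max(diff.size, 1)

    # Statistical prior keeps the estimated coefficients plausible.
    d = params - prior_mean
    e_prior = float(d @ prior_inv_cov @ d)

    return e_photo + w_prior * e_prior
```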
In recent years, real-time markerless facial performance capture based on commodity sensors has been demonstrated. Impressive results have been achieved, based both on Red-Green-Blue (RGB) and on RGB-D data. These techniques have become increasingly popular for the animation of virtual Computer Graphics (CG) avatars in video games and movies. It is now feasible to run these face capture and tracking algorithms from home, which is the foundation for many Virtual Reality (VR) and Augmented Reality (AR) applications, such as teleconferencing.

In this paper, we employ a new dense markerless facial performance capture method based on monocular RGB data, similar to state-of-the-art methods. However, instead of transferring facial expressions to virtual CG characters, our main contribution is monocular facial reenactment in real time. In contrast to previous reenactment approaches that run offline, our goal is the online transfer of the facial expressions of a source actor, captured by an RGB sensor, to a target actor. The target sequence can be any monocular video; for example, legacy video footage downloaded from Youtube that contains a facial performance. We aim to modify the target video in a photo-realistic fashion, such that it is virtually impossible to notice the manipulations. Faithful photo-realistic facial reenactment is the foundation for a variety of applications; for instance, in video conferencing, the video feed can be adapted to match the face motion of a translator, or face videos can be convincingly dubbed to a foreign language.

In our method, we first reconstruct the shape identity of the target actor using a new global non-rigid model-based bundling approach based on a prerecorded training sequence. As this preprocess is performed globally on a set of training frames, we can resolve geometric ambiguities common to monocular reconstruction. At run time, we track the expressions of both the source and the target actor's video by a dense analysis-by-synthesis approach based on a statistical facial prior. We demonstrate that our RGB tracking accuracy is on par with the state of the art, even compared to online tracking methods that rely on depth data. To transfer expressions from the source to the target actor in real time, we propose a novel transfer function that efficiently applies deformation transfer [18] directly in the used low-dimensional expression space. For final image synthesis, we re-render the target's face with the transferred expression coefficients and composite it with the target video's background under consideration of the estimated environment lighting. Finally, we introduce a new image-based mouth synthesis approach that generates a realistic mouth interior by retrieving and warping best-matching mouth shapes from the offline sample sequence. It is important to note that we maintain the appearance of the target mouth; in contrast, existing methods either copy the source mouth region onto the target [23] or render a generic teeth proxy [8, 19], both of which lead to inconsistent results. Figure 2 shows an overview of our method.

Figure 1. Proposed online reenactment setup: a monocular target video sequence (e.g., from Youtube) is reenacted based on the expressions of a source actor who is recorded live with a commodity webcam.

Figure 2. Overview of our reenactment approach: in a preprocessing step we analyze and reconstruct the face of the target actor; during live reenactment, we track the expressions of the source actor and transfer them to the reconstructed target face.
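Identity recovery is described as a global non-rigid model-based bundling over a prerecorded training sequence: one set of identity coefficients is shared by all training frames, while expression and pose vary per frame. Below is a rough, illustrative sketch of that parameterization, assuming a hypothetical `render_residual` callable and SciPy's generic least-squares solver (the paper itself uses a custom, much faster solver).

```python
import numpy as np
from scipy.optimize import least_squares

def bundle_identity(frames, render_residual, n_id, n_expr, n_pose):
    """Global non-rigid model-based bundling (conceptual sketch).

    A single identity vector is shared across all training frames, while
    expression and pose are estimated per frame. `render_residual` is a
    hypothetical callable returning the dense photometric residual of one
    frame given (frame, identity, expression, pose).
    """
    n_frames = len(frames)
    x0 = np.zeros(n_id + n_frames * (n_expr + n_pose))

    def residuals(x):
        alpha = x[:n_id]                                   # shared identity
        per_frame = x[n_id:].reshape(n_frames, n_expr + n_pose)
        res = []
        for f, frame in enumerate(frames):
            delta, pose = per_frame[f, :n_expr], per_frame[f, n_expr:]
            res.append(render_residual(frame, alpha, delta, pose))
        return np.concatenate(res)

    # Jointly optimizing over all frames is what resolves the geometric
    # ambiguities of single-frame monocular reconstruction.
    return least_squares(residuals, x0)
```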
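The expression transfer operates directly in the low-dimensional expression space rather than on the dense mesh. The paper's actual transfer function solves deformation transfer [18] in that subspace; the sketch below is a strongly simplified stand-in that assumes a precomputed linear map `M` between source and target coefficients (or a plain copy if both actors share the same blendshape basis).

```python
import numpy as np

def transfer_expression(delta_src, M=None):
    """Map source expression coefficients into the target's expression space.

    delta_src -- source expression coefficients, shape (K,)
    M         -- optional (K, K) transfer matrix, assumed precomputed offline.
                 If None, source and target are assumed to share the same
                 blendshape basis and the coefficients are copied directly.
    """
    return delta_src.copy() if M is None else M @ delta_src

def synthesize_target_mesh(shape_id_tgt, expr_basis_tgt, delta_transferred):
    """Evaluate the target's linear face model with transferred coefficients.

    shape_id_tgt      -- neutral (identity) geometry of the target, shape (3N,)
    expr_basis_tgt    -- expression basis / blendshapes, shape (3N, K)
    delta_transferred -- coefficients produced by transfer_expression
    """
    return shape_id_tgt + expr_basis_tgt @ delta_transferred
```

In this simplified setting, reenactment amounts to replacing the target's tracked expression coefficients with the transferred source coefficients before re-rendering the target face.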
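For the mouth interior, the best-matching mouth shape is retrieved from the offline target sequence and warped into place. A minimal sketch of the retrieval step, posed as a nearest-neighbor search over expression coefficients, is given below; the real system additionally enforces appearance similarity and temporal coherence, and the `mouth_weights` argument is a hypothetical convenience.

```python
import numpy as np

def retrieve_mouth_frame(delta_query, delta_database, mouth_weights=None):
    """Find the prerecorded target frame whose expression best matches the
    re-targeted expression (illustrative nearest-neighbor stand-in).

    delta_query    -- transferred expression coefficients, shape (K,)
    delta_database -- coefficients of all prerecorded target frames, shape (F, K)
    mouth_weights  -- optional per-coefficient weights emphasizing
                      mouth-related blendshapes (illustrative assumption)
    """
    diff = delta_database - delta_query[None, :]
    if mouth_weights is not None:
        diff = diff * mouth_weights[None, :]
    dists = np.einsum('fk,fk->f', diff, diff)  # squared distance per frame
    best = int(np.argmin(dists))
    return best, float(dists[best])
```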
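Final compositing takes the estimated environment lighting into account. Monocular face trackers in this line of work commonly model illumination with the first three bands of spherical harmonics under a Lambertian reflectance assumption; the sketch below evaluates such a 9-coefficient lighting model per vertex, where the coefficient layout of `gamma` is an assumption for illustration.

```python
import numpy as np

def sh_basis(n):
    """First three bands (9 terms) of real spherical harmonics for unit normals n, shape (M, 3)."""
    x, y, z = n[:, 0], n[:, 1], n[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),
        0.488603 * y,
        0.488603 * z,
        0.488603 * x,
        1.092548 * x * y,
        1.092548 * y * z,
        0.315392 * (3.0 * z ** 2 - 1.0),
        1.092548 * x * z,
        0.546274 * (x ** 2 - y ** 2),
    ], axis=1)  # (M, 9)

def shade_lambertian(albedo, normals, gamma):
    """Per-vertex color under the estimated spherical-harmonics lighting.

    albedo  -- per-vertex RGB albedo, shape (M, 3)
    normals -- per-vertex unit normals, shape (M, 3)
    gamma   -- estimated lighting coefficients, shape (9, 3), one set per color channel
    """
    irradiance = sh_basis(normals) @ gamma   # (M, 3)
    return albedo * irradiance
```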