DragGAN: A New Method for Manipulating Generated Images

The technique lets users "drag" any point of an image so that it precisely reaches a target point, all in an interactive manner.

A group of researchers recently published a paper presenting DragGAN, an innovative technique for manipulating generated images. The method allows users to interactively drag handle points on an image and have them land precisely at user-specified target points.

The method comprises two key components: feature-based motion supervision, which drives each handle point toward its target position, and a novel point-tracking approach that leverages discriminative GAN features to keep localizing the handle points as the image changes. A minimal sketch of both steps is given below.
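The article does not include code, but the two ideas can be illustrated with a short, hedged PyTorch sketch. It assumes `feat` is a `(1, C, H, W)` feature map taken from an intermediate layer of the generator; the function names, patch radii, and the bilinear sampler here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def sample_features(feat, pts):
    """Bilinearly sample a (1, C, H, W) feature map at (N, 2) (x, y) pixel coords."""
    _, c, h, w = feat.shape
    grid = pts.clone()
    grid[:, 0] = 2.0 * grid[:, 0] / (w - 1) - 1.0   # x -> [-1, 1] for grid_sample
    grid[:, 1] = 2.0 * grid[:, 1] / (h - 1) - 1.0   # y -> [-1, 1]
    out = F.grid_sample(feat, grid.view(1, 1, -1, 2), align_corners=True)
    return out.view(c, -1).t()                       # (N, C), one feature per point


def patch_around(point, radius):
    """Integer-offset grid of sample positions centred on a (2,) point."""
    r = torch.arange(-radius, radius + 1, dtype=point.dtype)
    dy, dx = torch.meshgrid(r, r, indexing="ij")
    offsets = torch.stack([dx.reshape(-1), dy.reshape(-1)], dim=1)  # (N, 2), (x, y)
    return point + offsets


def motion_supervision_loss(feat, handle, target, radius=3):
    """L1 loss that pulls the patch around the handle one unit step toward the target."""
    d = (target - handle) / (target - handle).norm().clamp(min=1e-8)  # unit direction
    patch = patch_around(handle, radius)
    src = sample_features(feat, patch).detach()   # F(q), detached: gradients flow
    dst = sample_features(feat, patch + d)        # only through the shifted samples
    return (dst - src).abs().mean()


def track_point(feat, init_feat, handle, radius=6):
    """Re-localize the handle by nearest-neighbour search in feature space."""
    cand = patch_around(handle, radius)           # candidate positions near the handle
    dists = (sample_features(feat, cand) - init_feat).abs().sum(dim=1)
    return cand[dists.argmin()]                   # closest match to the initial feature
```

In the full method, one would record `init_feat` from the first feature map at the original handle position, then alternate steps: take a gradient step on `motion_supervision_loss` with respect to the generator's latent code, regenerate `feat`, and call `track_point` to update the handle before the next iteration, repeating until the handle reaches the target.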

With DragGAN, users can flexibly deform images while retaining full control over where pixels end up. This enables manipulation across diverse categories, such as animals, cars, humans, and landscapes, with adjustments to pose, shape, expression, and layout.

"As these manipulations are performed on the learned generative image manifold of a GAN, they tend to produce realistic outputs even for challenging scenarios such as hallucinating occluded content and deforming shapes that consistently follow the object's rigidity," commented the team. "Both qualitative and quantitative comparisons demonstrate the advantage of DragGAN over prior approaches in the tasks of image manipulation and point tracking."

Learn more here.

