A new Genshin Impact web event has some great loot, but you’re going to have to look at some terrifying AI animation.

  • tal@lemmy.today · edited · 1 year ago

    You can do upscaling with AI upscalers in Stable Diffusion today, yeah, and it’s pretty nifty, but they’re working with a purely 2D model. That works well if you have a lot of footage of Lawrence from exactly the same angle: train a model on the whole video, and you can use it to upscale the individual frames.
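    For concreteness, here’s a minimal sketch of that per-frame, purely 2D approach, using OpenCV’s dnn_superres module (from opencv-contrib-python) with a pretrained EDSR model. The model file and video filename are placeholders:

    ```python
    import cv2

    # Purely 2D, per-frame upscaling: each frame is enlarged on its own,
    # with no knowledge of neighboring frames or of the scene's geometry.
    sr = cv2.dnn_superres.DnnSuperResImpl_create()
    sr.readModel("EDSR_x4.pb")   # hypothetical path to a pretrained EDSR model
    sr.setModel("edsr", 4)       # 4x upscale

    cap = cv2.VideoCapture("lawrence_of_arabia.mkv")  # hypothetical filename
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        upscaled = sr.upsample(frame)  # the model only ever sees this one frame
        # ... hand `upscaled` to a cv2.VideoWriter here ...
    cap.release()
    ```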

    But my point is that if you have software smart enough to use information derived from a 3D model, then you don’t need that identical angle to take advantage of it.

    Let’s say that you’ve got a shot of Peter O’Toole like this:

    https://lemmy.kya.moe/imgproxy?src=prod-images.tcm.com%2fMaster-Profile-Images/lawrenceofarabia1962.4455.jpg?w=824

    And another like this:

    https://lemmy.kya.moe/imgproxy?src=media.vanityfair.com%2fphotos/52d691da6088e6966a000006/master/w_2240,c_limit/1389793754760_lawrencethumb.jpg

    Those aren’t from the same angle.

    But add a 3D model to the thing, and you can use data from the close-up in the first image to scale up the second. The software can rotate the data in three dimensions and understand the spatial relationships. If you can take time into account, you could even learn how his robe flaps in the wind or whatnot.

    One would need something like this.
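    In rough outline, and purely as an illustration (the 3D point cloud of the subject and both camera poses are assumed to come from an upstream photogrammetry or face-fitting step, which is the hard part), the core “rotate and re-project” step might look like this:

    ```python
    import numpy as np

    def project(points, R, t, f, cx, cy):
        """Pinhole projection of Nx3 world points into pixel coordinates."""
        cam = points @ R.T + t                 # world frame -> camera frame
        z = cam[:, 2:3]
        uv = f * cam[:, :2] / z                # perspective divide
        return uv + np.array([cx, cy]), z[:, 0]

    def sample_colors(image, uv):
        """Nearest-neighbor lookup of per-point colors from a hi-res frame."""
        h, w = image.shape[:2]
        px = np.clip(uv.round().astype(int), 0, [w - 1, h - 1])
        return image[px[:, 1], px[:, 0]]

    def splat(uv, colors, z, shape):
        """Z-buffered splat of colored 3D points into the target view.
        The result is a sparse 'detail prior' an upscaler could condition on."""
        h, w = shape
        out = np.zeros((h, w, 3), dtype=colors.dtype)
        depth = np.full((h, w), np.inf)
        px = uv.round().astype(int)
        ok = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
        for (x, y), c, d in zip(px[ok], colors[ok], z[ok]):
            if d < depth[y, x]:                # keep the nearest surface point
                depth[y, x] = d
                out[y, x] = c
        return out

    # 1. Lift detail from the close-up (pose R1, t1) onto the 3D points:
    #    colors = sample_colors(closeup, project(points, R1, t1, f, cx, cy)[0])
    # 2. Rotate/re-project those points into the wide shot's pose (R2, t2):
    #    uv2, z2 = project(points, R2, t2, f, cx, cy)
    #    prior = splat(uv2, colors, z2, wide_shot.shape[:2])
    ```

    The heavy lifting, fitting the 3D model and solving for the camera poses, is exactly what makes this harder than plain 2D upscaling.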

    • P03 Locke@lemmy.dbzer0.com · 1 year ago

      My point is that if all you’re doing is cleaning up frames and interpolating footage from 24fps up to 60fps, you have all the data you need in the previous/next frames to blend into the in-between frames. A model trained on the movie would help, but there’s no need to get into anything as complex as 3D models of objects. Sub-second animation data is just fine.
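      As a minimal sketch of that idea, here’s in-between frame synthesis from the two neighboring frames alone, using classical Farneback optical flow in OpenCV rather than a learned interpolator (RIFE, DAIN, and friends do the same job with learned flow):

      ```python
      import cv2
      import numpy as np

      def interpolate(frame_a, frame_b, t=0.5):
          """Synthesize a frame a fraction t of the way from frame_a to frame_b."""
          gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
          gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
          # Dense per-pixel motion from frame_a to frame_b.
          flow = cv2.calcOpticalFlowFarneback(
              gray_a, gray_b, None, 0.5, 3, 15, 3, 5, 1.2, 0)
          h, w = gray_a.shape
          grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
          # Backward-warp approximation: sample frame_a a fraction t back
          # along the flow to place its pixels at their in-between positions.
          map_x = (grid_x - t * flow[..., 0]).astype(np.float32)
          map_y = (grid_y - t * flow[..., 1]).astype(np.float32)
          return cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR)

      # 24fps -> 60fps means 2.5 output frames per input frame, i.e. mostly
      # frames at fractional times like t = 0.4 and t = 0.8 between neighbors.
      ```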