This repo presents an open-source method for face re-aging. You can try it out yourself on Hugging Face 🤗:
Re-aging is increasingly used in the film industry and in commercial applications like TikTok and FaceApp, and with it comes growing interest in applying generative AI to this use case. Such a method is presented here, largely based on Disney Research's "Production-Ready Face Re-Aging for Visual Effects" paper, whose model and code remain proprietary.
The method only requires an image (or video frame) and the (approximate) age of the person to generate the same image of the person looking older or younger.
Although trained on images, the method can also be applied to frames in a video:
| Model output: Aged 20 | Original: Aged ~35 | Model output: Aged 60 |
|---|---|---|
You can find all training and testing scripts in this repository. In addition to the Hugging Face demo, you can try the demos on Google Colab, or by downloading the model.
This repo replicates the Face Re-Aging Network (FRAN) from the aforementioned paper, a relatively simple U-Net-based architecture.
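As described in the paper, FRAN takes a 5-channel input: the RGB image concatenated with two constant age maps encoding the source and target age, and the U-Net predicts a per-pixel delta that is added back to the input. Below is a minimal sketch of that input construction; `build_fran_input` and the divide-by-100 normalization are illustrative assumptions, not this repo's exact code.

```python
import torch

def build_fran_input(image: torch.Tensor, source_age: int, target_age: int) -> torch.Tensor:
    """Concatenate an RGB image with constant source/target age maps.

    `image` is a (B, 3, H, W) tensor; each age is encoded as a constant
    single-channel map, here normalized by 100 (an assumption).
    """
    b, _, h, w = image.shape
    src = torch.full((b, 1, h, w), source_age / 100.0, device=image.device)
    tgt = torch.full((b, 1, h, w), target_age / 100.0, device=image.device)
    return torch.cat([image, src, tgt], dim=1)  # (B, 5, H, W)

# The U-Net predicts a per-pixel delta that is added back to the input:
# aged = image + unet(build_fran_input(image, source_age=35, target_age=60))
```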
Following the paper, the training dataset consists of artificially re-aged images from the FFHQ dataset, generated using SAM. However, compared to SAM, this model's output preserves the identity of the pictured person much better and is more robust. The example below shows how, unlike SAM, the model preserves the identity of the subject and details like the background, glasses, and earring, while still producing realistic aging.
| Input image | Our model output | SAM output |
|---|---|---|
The trained model can be downloaded from Hugging Face; the file `best_unet_model.pth` is the model in question.
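For example, the weights can be fetched and loaded programmatically with `huggingface_hub` (a sketch; the repo id and the U-Net import path are placeholders, not this project's published names):

```python
import torch
from huggingface_hub import hf_hub_download

# Placeholder repo id -- use the Hugging Face repo this project publishes.
weights_path = hf_hub_download(repo_id="<hf-username>/<model-repo>",
                               filename="best_unet_model.pth")

from model.models import UNet  # hypothetical import path for this repo's U-Net

unet = UNet()
unet.load_state_dict(torch.load(weights_path, map_location="cpu"))
unet.eval()
```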
This model can be tested with the Gradio demos, available on Hugging Face and on Google Colab.
- Image Inference Demo: In this demo, one can input an image with a source age (age of the person pictured) and a target age.
- Image Animation Demo: In this demo, one does not have to specify a target age: instead, a video is shown that cycles through target ages from 10 to 95.
- Video Inference Demo: In this demo, one can apply the model to a video. The video is processed frame by frame, as sketched below.
*Photos cycling through target ages 10 to 95, made with the Image Animation Demo.*
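The video demo's frame-by-frame processing can be reproduced with OpenCV along these lines; `reage_frame` is a hypothetical stand-in for the model's forward pass on a single frame:

```python
import cv2

def reage_frame(frame, source_age: int, target_age: int):
    # Placeholder for the model's forward pass on a single BGR frame;
    # replace with actual FRAN inference (see the input sketch above).
    return frame

def reage_video(in_path: str, out_path: str, source_age: int, target_age: int) -> None:
    """Read a video, re-age each frame, and write the result."""
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(reage_frame(frame, source_age, target_age))
    cap.release()
    writer.release()
```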
The training script is available in this repo, together with the training parameters used.
In order to train the model from scratch using the available files, one would need to put the training data in `data/processed/train`.
The training dataset should consist of folders, each containing images of one person, with the filename indicating the age: e.g. `person1/10.jpg` is person1 at age 10 and `person1/20.jpg` is the same person at age 20.
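A minimal PyTorch `Dataset` matching this layout might sample two ages of the same person per item. This is a sketch; the class name, pairing strategy, and omitted image transforms are assumptions, not this repo's training code.

```python
import random
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset

class ReAgingPairs(Dataset):
    """Dataset over the folder layout described above.

    Each subfolder of `root` holds images of one person, with the age
    encoded in the filename (e.g. person1/10.jpg).
    """

    def __init__(self, root: str = "data/processed/train"):
        # One entry per person: a sorted list of that person's image paths.
        self.people = [sorted(p.glob("*.jpg"))
                       for p in Path(root).iterdir() if p.is_dir()]

    def __len__(self) -> int:
        return len(self.people)

    def __getitem__(self, idx: int):
        # Pick two different ages of the same person (assumes >= 2 images).
        src, tgt = random.sample(self.people[idx], 2)
        return Image.open(src), int(src.stem), Image.open(tgt), int(tgt.stem)
```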
To fine-tune the model, one can download the pre-trained U-Net and discriminator models from Hugging Face.
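Loading those weights before training might look like this (hypothetical import path and class names; the discriminator filename is also an assumption, so check the Hugging Face model page for the actual files):

```python
import torch

# Hypothetical import path and class names for this repo's models.
from model.models import UNet, Discriminator

unet, disc = UNet(), Discriminator()
unet.load_state_dict(torch.load("best_unet_model.pth", map_location="cpu"))
disc.load_state_dict(torch.load("discriminator.pth", map_location="cpu"))
# Resume the training script from these weights instead of random init.
```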