This article provides step-by-step instructions on how to create unique deepfaked Fortnite skins.
Required software and hardware:
DeepFaceLab: available here https://sourceforge.net/projects/deepfacelab.mirror/
You are going to want the version that matches your GPU specifications. For example, I am using an NVIDIA GeForce GTX 1660 Super, so I use DeepFaceLab_NVIDIA_up_to_RTX2080Ti_build_11_20_2021, which is tooled to work with NVIDIA graphics cards up to the RTX 2080 Ti. If you have a better graphics card, pull the build that works for you. If you do not have an NVIDIA graphics card, you can use a DirectX build such as DeepFaceLab_DirectX12_build_05_04_2022.
Screen recorder: I use the built-in screen recorder on my iPhone.
AI photo conversion app: The app must be capable of converting photos to cartoon images. I use ToonApp (although I am certain there are better options out there).
Video editor: I use Microsoft ClipChamp, which comes free with Windows.
A Windows gaming desktop or other computer, ideally with a decent graphics card. You don’t strictly need a decent graphics card, but it will make the deepfakes smoother and allow the models to render faster.
Initial comments:
First and foremost, this is not as complicated as it looks. The first time you go through it, it will go slowly and take some getting used to, but after a few deepfakes you will get faster and more precise. If you want to learn more, I recommend checking out https://www.deepfakevfx.com/, or feel free to DM me on Twitter @CoreyBieber.
If you render a deepfake you like using these instructions, for Fortnite or some other game, please post the result to Twitter and tag @CoreyBieber and @LegionofMemers in the Tweet.
If you want to check out the full video tutorial for this deepfake, you can watch it on the LegionofMemers YouTube channel here.
LET’S GET STARTED
Step 1:
To begin, download and unpack DeepFaceLab. I unpacked my build onto my desktop. When you open the DeepFaceLab folder you will see something like this:
Step 2:
You are going to want decent quality videos of:
The game play session into which you want to deepfake your character (the “destination” or “dst”); and
The person you are looking to deepfake (the “source” or “src”).
For the game play session, or destination, I recorded my Fortnite lobby using the video capture feature built into the Xbox Game Bar. This comes standard on Windows (search your device for it). Here is a sample you can use:
For the source, DDayCobra, I visited the DDayCobra YouTube page and used Apple’s built-in screen recorder to record one of his YouTube shorts. You are going to want to find a video that displays the source’s whole head, preferably with multiple expressions and head turns (left, right, up, down, etc.). This will give the model plenty of images to match between the destination and source. Here is a sample you can use:
Once you have these videos, save them in the DeepFaceLab sub-folder titled “workspace.” You will want to name the destination file “data_dst” and the source file “data_src.” The workspace folder should end up looking something like this.
Step 3:
Now you are going to want to extract the frames from each of the two videos. To do this, go back to the main DeepFaceLab folder (you will see a list of executable files titled 1) …, 2) …, etc.).
Click on the option titled: 2) extract images from video data_src
This will allow you to extract each individual frame from the source file. You will get a terminal that looks like this.
Leave the FPS set to 0 and the output image format set to png, then press Enter. You will get a screen that looks like this. These are the default values; you can accept them by simply pressing Enter at each prompt.
Here DeepFaceLab is extracting the frames (this may take a few minutes depending on your system specs and the size of the video). When the process concludes you will get a message that says “Press any key to continue …” Do that.
Go back to your workspace folder and open the folder titled “data_src.” You will now see a few hundred (or thousand) images of the source video frames.
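As a side note, if you are curious what this extraction step is doing behind the scenes, here is a minimal Python sketch of the same idea using OpenCV. This is not DeepFaceLab’s actual extractor, just an illustration; the paths are examples and assume the default workspace layout.

import os
import cv2

video_path = "workspace/data_src.mp4"   # the source video you saved in Step 2
output_dir = "workspace/data_src"       # folder that receives the PNG frames

os.makedirs(output_dir, exist_ok=True)
capture = cv2.VideoCapture(video_path)

frame_number = 0
while True:
    ok, frame = capture.read()
    if not ok:
        break  # no more frames
    # Zero-padded names keep the frames sorted in the right order.
    cv2.imwrite(os.path.join(output_dir, f"{frame_number:05d}.png"), frame)
    frame_number += 1

capture.release()
print(f"Extracted {frame_number} frames to {output_dir}")

DeepFaceLab’s own script handles more options (FPS limiting, JPEG output, and so on), so stick with the batch file for the actual workflow.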
Step 4:
Now you will need to repeat the same process, but for the destination video. For this, follow Step 3 except choose the option 3) extract images from video data_dst FULL FPS to pull the frames from the destination video. Leave all settings at their default values as you did in Step 3.
After the frame extraction runs, check the data_dst folder; you should see something like this.
Step 5:
Next you will extract DDayCobra’s head from each of the images. To do this, go back to the main DeepFaceLab folder and select: 4) data_src faceset extract. You will receive a prompt asking you to select your CPU or GPU (you will most likely leave that as the default). You will then receive a prompt asking for the “Face type.” Type the word “head” after the prompt and hit Enter. The rest of the prompts can be left at their default values.
The face extractor will run (again, this may take a few minutes). Once it is complete, go back to the workspace folder, enter the data_src folder, then enter the aligned folder. You should now see a set of DDayCobra face images.
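For the curious, the faceset extract step is essentially running a face detector over every frame and saving a cropped copy of each hit into the aligned folder. DeepFaceLab uses its own detector and alignment pipeline, but the rough Python sketch below (using OpenCV’s bundled Haar cascade, which is much cruder) shows the general idea. Folder names assume the default workspace layout.

import os
import cv2

frames_dir = "workspace/data_src"
aligned_dir = "workspace/data_src/aligned"
os.makedirs(aligned_dir, exist_ok=True)

# A stock OpenCV face detector; a crude stand-in for DeepFaceLab's own detector.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

for name in sorted(os.listdir(frames_dir)):
    if not name.lower().endswith(".png"):
        continue
    image = cv2.imread(os.path.join(frames_dir, name))
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.1, 5)
    for i, (x, y, w, h) in enumerate(faces):
        # Pad the box so the whole head fits, then save the crop.
        pad = int(0.4 * h)
        x0, y0 = max(x - pad, 0), max(y - pad, 0)
        crop = image[y0:y + h + pad, x0:x + w + pad]
        cv2.imwrite(os.path.join(aligned_dir, f"{name[:-4]}_{i}.png"), crop)

Again, this is only to illustrate the concept; use DeepFaceLab’s batch file for the real extraction, since it also aligns the faces and stores landmark data the trainer needs.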
Step 6:
Again, go through the same steps that you did for Step 5 to extract the destination faces. To do this, go back to the main DeepFaceLab folder and select 5) data_dst faceset extract. Remember to type the word “head” after the prompt asking for a “Face type.” The rest of the prompts can be left to their default values.
After the face extraction runs, check the aligned folder in data_dst; you should see the face images extracted from the destination video.
Important! Because there is more than one face in the destination video, the extractor may have included faces in the aligned folder other than the face you intend to replace. Delete any face image in the aligned folder that you do not want to replace.
Step 7:
Now you will use your AI app (again, I used ToonApp for iPhone) to render a “comic” version of DDayCobra’s head. Select about 40 different images of DDayCobra’s head with different expressions and looking in different directions (left, right, up, down). Drop these into the app for conversion.
In ToonApp I used the “3D cartoons” option and the “Comic 1” filter. I won’t walk through how to do this here, but it is self-explanatory when you download the app. If you are using an app other than ToonApp, pick a filter that matches the destination video’s animation style as closely as possible.
In the end you want a set of images that look something like this.
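If you would rather script the comic conversion than use a phone app, a classic OpenCV trick (smooth the colors, then overlay bold edges) gets you part of the way there. It will not match ToonApp’s 3D cartoon look, so treat the sketch below as a rough fallback; the file names are placeholders.

import cv2

image = cv2.imread("head_input.png")   # one of the aligned head images

# Smooth colors while keeping edges sharp.
color = cv2.bilateralFilter(image, 9, 75, 75)

# Build a bold black-and-white edge map.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)
edges = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                              cv2.THRESH_BINARY, 9, 2)

# Keep color only where the edge map is white, so the outlines stay black.
cartoon = cv2.bitwise_and(color, color, mask=edges)
cv2.imwrite("head_comic.png", cartoon)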
Step 8:
Once you have your 40 or so “comic” versions of DDayCobra’s head, you will need to stitch them together in a video. (DeepFaceLab will not recognize the comic version of the images unless it has extracted them from a video.)
To do this, I dropped the images into Microsoft ClipChamp, which comes free with Windows. Again, I am not going to explain how to do this here, as it is self-explanatory when you open ClipChamp and you can use other video editing software if you would like. In the end you will want to export the images as a video (I exported at 1080p). This creates a “comic video” of the 40 DDayCobra comic images.
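If you prefer to skip the video editor entirely, the comic images can also be stitched into a video with a few lines of Python. The sketch below uses OpenCV’s VideoWriter; the folder name, frame rate, and output name are all assumptions, so adjust them to taste.

import os
import cv2

images_dir = "comic_images"       # folder holding the ~40 comic head images
output_path = "comic_video.mp4"   # this will become the new data_src in Step 9
fps = 1                           # one image per second is plenty for frame extraction

files = sorted(f for f in os.listdir(images_dir) if f.lower().endswith(".png"))
first = cv2.imread(os.path.join(images_dir, files[0]))
height, width = first.shape[:2]

writer = cv2.VideoWriter(output_path, cv2.VideoWriter_fourcc(*"mp4v"),
                         fps, (width, height))
for name in files:
    frame = cv2.imread(os.path.join(images_dir, name))
    writer.write(cv2.resize(frame, (width, height)))  # guard against size drift
writer.release()
print(f"Wrote {len(files)} frames to {output_path}")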
Step 9:
Now you are going to want to replace the source images from the DDayCobra YouTube short with the images from the new comic video you just made.
To do this, delete the data_src video from the workspace folder, then delete all of the images in the data_src folder and the aligned folder inside data_src.
Drop your comic video into the workspace folder and name it data_src. Now re-run Step 3 and Step 5 verbatim for the new comic data_src file.
This will result in an aligned folder containing the comic versions of DDayCobra’s head, ready for processing by DeepFaceLab.
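If you end up making several of these, the cleanup in this step gets tedious, so here is a small housekeeping sketch that does it for you. The paths assume the default workspace layout and the comic_video.mp4 name from the earlier sketch; double-check them before running, because this deletes files.

import glob
import os
import shutil

workspace = "workspace"

# Remove the old source video, whatever its extension.
for old_video in glob.glob(os.path.join(workspace, "data_src.*")):
    os.remove(old_video)

# Remove the old extracted frames and aligned face images.
for folder in ("data_src/aligned", "data_src"):
    for image in glob.glob(os.path.join(workspace, folder, "*.png")):
        os.remove(image)

# Copy the comic video in as the new source, then re-run Steps 3 and 5.
shutil.copy("comic_video.mp4", os.path.join(workspace, "data_src.mp4"))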
Step 10:
Now that you have the heads you want to swap you need to help the model understand what is and is not a head in the images.
To do this run 5.XSeg) data_src mask - edit from the main DeepFaceLab folder. This will bring up the XSeg editor.
Use the pointer tool to draw around the entire head. I have provided a quick video below to show what this looks like in practice. You will want to draw around maybe 8 to 10 images looking in various directions (left, right, up, down). You do not need to be perfect, just get the drawings as close as possible. Also, note the AI gave DDayCobra comically large ears; I clipped the ears down using the pointer tool.
Once you are done, repeat the same process with the destination images by running 5.XSeg) data_dst mask - edit.
Step 11:
Next you will train the model. To do this run 5.XSeg) train. You will want to set the prompts as follows:
CPU: 0 (this should be the default)
Restart training: y (you will not see this the first time you run the trainer)
Face type: head
Batch size: 4 (you can go higher if you have a more powerful GPU)
Enable pretraining mode: n
Once you have set the parameters, the trainer will start. Let the trainer run for approximately 600 iterations (the more the better, but you should not need more than 600 for this deepfake). You will see the iterations counting up in the bottom right-hand corner of the terminal window. Press Enter to stop the training.
Step 12:
Apply the trained mask by selecting 5.XSeg) data_dst trained mask - apply and 5.XSeg) data_src trained mask - apply.
Step 13:
Reopen each XSeg editor using 5.XSeg) data_dst mask - edit and 5.XSeg) data_src mask - edit.
In each, select the top-right button to show the mask overlay. Look through the images to make sure the overlay has been applied to each one correctly. If there are images where the mask is significantly off, you can draw a new mask over that image and retrain the model (another 600 iterations). You should not need to do that here. Below is a video showing how to check the mask application.
Step 14:
Next we will train the SAEHD model; this is where we create the deepfake. This step will take some time (a few hours), so you may want to set the trainer running and go grab lunch or something. You do not need to sit in front of your computer while it is training.
To train the SAEHD model select 6) train SAEHD. You will want to enter the following settings at the prompts. There is a lot you can do here once you are a pro; for now we are keeping it simple.
CPU: 0
Autobackup: default
Target iteration: default
Flip SRC faces randomly: y
Flip DST faces randomly: y
Batch_size: 8
Masked training: y
Eyes and mouth priority: n (this would be yes if the characters were talking)
Uniform yaw … : n
Blur out mask: n
Place models and optimizer on GPU: y
Use AdaBelief optimizer: y
Use learning rate dropout: n
Enable random warp of samples: y
Random hue/saturation … : 0.0
GAN power: 0.0
Face style power: 0.0
Background style power: 0.0
Color transfer for src faceset: none (we will deal with this later)
Enable gradient clipping: n
Enable pretraining mode: n
Now sit back and relax; you are going to let this run for 20,000 iterations (minimum). For larger, more complicated deepfakes you will be running 200,000 iterations, usually overnight.
Step 15:
Once the SAEHD trainer has hit 20,000 iterations, press Enter to stop. Now run 7) merge SAEHD. This will allow us to check the model and make adjustments to the color, size, etc. of the deepfake. When you start the SAEHD merger you will be given the following prompts:
GPU: default
Use interactive merger: y
Number of workers? 12
You will then be presented with the following screen. These are the options to refine your model. I won’t go through them in this article, but there is a lot here you can do to make the model look better. You can use the tab key to tab between your model images and the settings screen. Note the < and > keys allow you to switch between images.
When you hit tab you will see a screen similar to this. We did not run a lot of iterations, so as you can see, at this point the deepfake looks kind of crappy.
Now we can clean it up. For our purposes switch to the image screen and change the following settings.
Hit E until the blur_mask_modifier is 30
Hit J until the output_face_scale is -17
Hit C until the color_transfer_mode is mkl-m
Hit I until the image_denoise_power is 1
You will get something similar to the image below. That looks much better. Now apply the settings to the remainder of the images by selecting the ? key and the Up Arrow key at the same time. Once you are done hit Esc to save the session.
Step 16:
Now run 8) merge to mp4 and you are done. A file titled result will appear in your workspace folder. This is your final deepfake video.
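For reference, the merge-to-mp4 step essentially hands the merged frames to ffmpeg and re-attaches the destination audio. If you ever want to do that part by hand, something like the sketch below works; the frame naming, frame rate, and folder layout are assumptions, so check them against your own workspace first.

import subprocess

subprocess.run([
    "ffmpeg",
    "-framerate", "30",                          # match your recording's frame rate
    "-i", "workspace/data_dst/merged/%05d.png",  # merged deepfake frames (check the naming)
    "-i", "workspace/data_dst.mp4",              # original destination video, used for its audio
    "-map", "0:v", "-map", "1:a?",               # video from the frames, audio if present
    "-c:v", "libx264", "-pix_fmt", "yuv420p",
    "workspace/result_manual.mp4",               # written alongside DeepFaceLab's own result
], check=True)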
One note on all of these steps: if something looks off, don’t worry. You can always go back and redo a step; DeepFaceLab is very forgiving.
Here is my final version.
Best of luck! Let me know if you have any questions or comments on how to improve this article. @CoreyBieber