Tutorial

Maintaining Character Consistency: FaceSet Node Guide

How to keep a character's appearance consistent across your entire video using the FaceSet node.

2026.03.28 · 10 min read

One of the most common challenges in AI video production is that the same character looks different from scene to scene. Reproducing an identical appearance using only text prompts is extremely difficult. The FaceSet node is designed to solve exactly this — it generates consistent faces based on reference images.

What Is the FaceSet Node?

The FaceSet node lets you register one or more reference images of a person. Connected video/image generation nodes then produce results that preserve that person's facial features. A face recognition model extracts keypoints — eyes, nose, mouth, face outline — from the reference images and incorporates them into the generation process. FaceSet is currently supported by the Runway Gen-4 Video and Kling Pro/Master nodes.
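To make the keypoint idea concrete, here is a minimal conceptual sketch of how a FaceSet-style node could combine landmarks from several references into one conditioning set. The actual FaceSet internals are not documented; `detect_landmarks` and `build_face_condition` are illustrative stand-ins, not real Skaper APIs.

```python
# Conceptual sketch only: averaging facial keypoints from several
# reference images into one conditioning set. All names are hypothetical.

def detect_landmarks(image):
    """Stand-in for a face-recognition model: returns normalized (x, y)
    keypoints (eyes, nose, mouth, outline) for one reference image."""
    # A real pipeline would run a landmark model here.
    return image["landmarks"]

def build_face_condition(reference_images):
    """Average each keypoint across all references."""
    all_points = [detect_landmarks(img) for img in reference_images]
    n = len(all_points)
    return {
        key: (
            sum(p[key][0] for p in all_points) / n,
            sum(p[key][1] for p in all_points) / n,
        )
        for key in all_points[0]
    }

refs = [
    {"landmarks": {"left_eye": (0.35, 0.40), "right_eye": (0.65, 0.40)}},
    {"landmarks": {"left_eye": (0.33, 0.42), "right_eye": (0.67, 0.42)}},
]
condition = build_face_condition(refs)
print(condition["left_eye"])  # averaged keypoint position
```

Averaging is one plausible way multiple references could be merged; whichever method the engine actually uses, the takeaway is that all references contribute to a single face description.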

Preparing Reference Images

FaceSet performance depends heavily on reference image quality. Good reference images: face the camera directly with even lighting; have a resolution of at least 512×512 pixels; are not obscured by sunglasses, masks, or hats. Registering 3–5 images from different angles (front, 45°, side) produces more stable results.
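The checklist above can be turned into a quick pre-flight check before uploading. The thresholds (512×512 minimum, 3–5 images, varied angles) come from this guide; the helper function itself is our own and not part of Skaper.

```python
# Pre-flight check for reference images, using the guidelines above.
# The function and its dict input format are illustrative.

MIN_SIDE = 512
RECOMMENDED_COUNT = range(3, 6)  # 3-5 images

def check_reference_set(images):
    """images: list of dicts with 'width', 'height', 'angle' keys."""
    problems = []
    for i, img in enumerate(images):
        if img["width"] < MIN_SIDE or img["height"] < MIN_SIDE:
            problems.append(f"image {i}: below {MIN_SIDE}x{MIN_SIDE}")
    if len(images) not in RECOMMENDED_COUNT:
        problems.append("recommended: 3-5 images")
    if len({img.get("angle") for img in images}) < 2:
        problems.append("add varied angles (front, 45deg, side)")
    return problems

good = [
    {"width": 1024, "height": 1024, "angle": "front"},
    {"width": 768, "height": 768, "angle": "45"},
    {"width": 800, "height": 600, "angle": "side"},
]
print(check_reference_set(good))  # []
```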

Configuring the FaceSet Node

Drag 'FaceSet' from the left panel onto the canvas. Drop your reference photos into the node's upload area — multiple images are automatically ensembled. In the settings panel, adjust 'Similarity Strength'. Higher values stay closer to the reference but reduce generation variety. Starting at 0.7–0.8 is recommended.
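One way to think about the Similarity Strength slider is as an interpolation weight between the reference-derived face and whatever the generator would produce on its own. The linear blend below is an assumption for intuition, not Skaper's actual math.

```python
# Illustrative sketch of a 'Similarity Strength' slider as a linear blend
# between a reference face embedding and the generator's own face code.
# The blending rule is an assumption, not the documented implementation.

def blend(reference_embedding, generated_embedding, strength):
    """strength=1.0 -> pure reference; strength=0.0 -> ignore reference."""
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    return [
        strength * r + (1.0 - strength) * g
        for r, g in zip(reference_embedding, generated_embedding)
    ]

ref = [1.0, 0.0, 0.5]
gen = [0.0, 1.0, 0.5]
print(blend(ref, gen, 0.8))  # leans heavily toward the reference
```

This is why the recommended 0.7–0.8 starting point makes sense: the result stays dominated by the reference while leaving the generator some room for variety.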

Connecting to a Video Generation Node

Connect the FaceSet node's output port to the 'Face Reference' input port of a Runway Gen-4 Video node. Once connected, a 'FaceSet Active' toggle appears in the generation node's settings. Enable it, then write your prompt and run as normal. You no longer need to describe the subject's appearance in detail — focus instead on action, background, and mood.
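The wiring described above can be sketched as a tiny node graph. The canvas itself is drag-and-drop, so the classes and attribute names below are purely illustrative, not a real Skaper SDK.

```python
# Hypothetical node-graph model of the FaceSet -> video node connection.
# Class and port names are illustrative stand-ins.

class FaceSetNode:
    def __init__(self, references):
        self.references = references
        self.outputs = []

    def connect(self, gen_node):
        # Wiring the output port flips 'FaceSet Active' on the target.
        gen_node.face_reference = self
        gen_node.faceset_active = True
        self.outputs.append(gen_node)

class VideoGenNode:
    def __init__(self, engine, prompt):
        self.engine = engine
        self.prompt = prompt
        self.face_reference = None
        self.faceset_active = False

faceset = FaceSetNode(["ref_front.png", "ref_45.png", "ref_side.png"])
# Prompt describes action, background, and mood -- not the face.
scene = VideoGenNode("Runway Gen-4 Video",
                     "walking through a rainy neon street at night")
faceset.connect(scene)
print(scene.faceset_active)  # True
```

Note the prompt: because the face comes from the FaceSet node, it spends its words on action and setting instead of appearance.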

Advanced Tips for Consistency

When using the same character across multiple scenes, connect a single FaceSet node to multiple generation nodes. This means all scenes share a single reference source, improving consistency. Also, repeating the same costume description in each scene's prompt (e.g. 'wearing a blue denim jacket, white t-shirt') extends consistency from the face to clothing as well.
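The costume tip amounts to templating your prompts: keep one fixed costume phrase and vary only the per-scene action. A minimal sketch (the phrases are examples, not required wording):

```python
# Sketch of the multi-scene tip: one fixed costume phrase repeated into
# every scene prompt so clothing stays consistent alongside the face.

COSTUME = "wearing a blue denim jacket, white t-shirt"

scene_actions = [
    "ordering coffee at a counter",
    "cycling along a riverside path",
    "reading on a rooftop at sunset",
]

prompts = [f"{action}, {COSTUME}" for action in scene_actions]
for p in prompts:
    print(p)
```

Combined with a single FaceSet node feeding every scene, this keeps both face and wardrobe anchored to one source.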

Common Issues and Fixes

If the face is distorted: lower Similarity Strength (0.5–0.6) or improve the lighting conditions of the reference image. If FaceSet is applied to background characters: add 'single person, solo subject' to the prompt. If the reference image isn't recognized: convert the file to JPG or PNG and verify the file size is under 10MB. For engines that don't support FaceSet (e.g. Hailuo, Wan), use an IP-Adapter image reference node as an alternative.
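The file-format and file-size fixes are easy to check before uploading. The accepted formats and the 10MB cap come from this guide; the helper below is our own, not a Skaper utility.

```python
# Pre-upload check mirroring the fixes above: JPG/PNG only, under 10MB.
# The function name and messages are illustrative.

import os

ACCEPTED = {".jpg", ".jpeg", ".png"}
MAX_BYTES = 10 * 1024 * 1024  # 10MB

def upload_ok(path, size_bytes):
    ext = os.path.splitext(path)[1].lower()
    if ext not in ACCEPTED:
        return f"convert {ext or 'file'} to JPG or PNG"
    if size_bytes > MAX_BYTES:
        return "file exceeds 10MB; resize or re-encode"
    return "ok"

print(upload_ok("ref.webp", 2_000_000))   # suggests conversion
print(upload_ok("ref.png", 12_000_000))   # too large
print(upload_ok("ref.jpg", 2_000_000))    # ok
```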

