PixelSeal achieves a state-of-the-art robustness-imperceptibility trade-off, placing it on the Pareto frontier
- 🏆 PixelSeal: SOTA imperceptibility & robustness through adversarial-only training and JND-based attenuation
- 🚀 ChunkySeal: 4× capacity increase (1024 bits), showing that watermarking capacity limits are far from being reached
- 🎬 VideoSeal: Efficient image & video watermarking with temporal consistency
- 🔓 Open Source: All models, training code, and evaluation tools released under MIT license
- December 2025: 🆕 ChunkySeal and PixelSeal released! Model cards and checkpoints now available
- October 2025: 🏅 WmForger accepted to NeurIPS 2025 as a Spotlight! Code in wmforger/
- March 2025: VideoSeal v1.0 with an improved 256-bit model and enhanced robustness
- December 2024: Initial VideoSeal release with 96-bit baseline model
import videoseal
from PIL import Image
import torchvision.transforms as T
# Load any model by name (automatically downloads on first use)
model = videoseal.load("videoseal") # VideoSeal v1.0 (256-bit, stable)
# model = videoseal.load("pixelseal") # PixelSeal (SOTA imperceptibility & robustness)
# model = videoseal.load("chunkyseal") # ChunkySeal (1024-bit high capacity)
# Watermark an image 🎨
img_tensor = T.ToTensor()(Image.open("image.jpg")).unsqueeze(0)
outputs = model.embed(img_tensor)
T.ToPILImage()(outputs["imgs_w"][0]).save("watermarked.jpg")
# Detect watermarks
detected = model.detect(img_tensor)
hidden_message = (detected["preds"][0, 1:] > 0).float()  # Binary message

Video watermarking:
import videoseal
import torchvision
# Load and watermark video 🎬
model = videoseal.load("videoseal")
video, _, _ = torchvision.io.read_video("video.mp4")
video = video.permute(0, 3, 1, 2).float() / 255.0
outputs = model.embed(video, is_video=True)
watermarked = (outputs["imgs_w"] * 255).byte().permute(0, 2, 3, 1)
torchvision.io.write_video("watermarked.mp4", watermarked, fps=30)

💡 For standalone usage without dependencies, see our TorchScript guide for pre-compiled models.
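To sanity-check a full round trip, you can compare the embedded and decoded bits. The sketch below assumes `embed()` also returns the embedded message in its output dict (shown here under a hypothetical `msgs` key, so verify the key name in your version):

```python
import torch
import videoseal

model = videoseal.load("videoseal")
img = torch.rand(1, 3, 256, 256)  # any RGB tensor in [0, 1]

outputs = model.embed(img)
# Hypothetical key: check the embed() output dict for the actual name.
msg_embedded = outputs["msgs"][0].float()

detected = model.detect(outputs["imgs_w"])
msg_decoded = (detected["preds"][0, 1:] > 0).float()

bit_acc = (msg_decoded == msg_embedded).float().mean().item()
print(f"Bit accuracy on the clean watermarked image: {bit_acc:.2f}")
```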
The codebase uses Python 3.10 and PyTorch ≥ 2.3; the command below installs PyTorch 2.4.0 with torchvision 0.19.0, torchaudio 2.4.0, and CUDA 12.1:
conda install pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 pytorch-cuda=12.1 -c pytorch -c nvidia
Other dependencies:
pip install -r requirements.txt
For training, we also recommend using decord:
pip install decord
Note that decord installation can be problematic (see dmlc/decord#213). Inference works without decord, but training may fail without it.
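If installation issues come up, a quick sanity check of the environment can help; decord is wrapped in a try/except since, as noted above, it is only needed for training:

```python
import torch
import torchvision

print("torch:", torch.__version__)            # expected >= 2.3
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())

try:
    import decord  # optional, training only
    print("decord:", decord.__version__)
except ImportError:
    print("decord not installed: inference works, training may not")
```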
We provide a comprehensive suite of watermarking models with different trade-offs between capacity, robustness, and imperceptibility.
| Model | Capacity | Best For | Model Card | Checkpoint | Paper | Status |
|---|---|---|---|---|---|---|
| PixelSeal | 256 bits | SOTA robustness & imperceptibility | pixelseal.yaml | pixelseal/checkpoint.pth | Link | 🆕 New |
| ChunkySeal | 1024 bits | High capacity (larger model) | chunkyseal.yaml | chunkyseal/checkpoint.pth | arXiv:2510.12812 | 🆕 New |
| VideoSeal v1.0 | 256 bits | Stable default | videoseal_1.0.yaml | y_256b_img.pth | arXiv:2412.09492 | ✅ Stable |
| VideoSeal v0.0 | 96 bits | Legacy baseline | videoseal_0.0.yaml | rgb_96b.pth | arXiv:2412.09492 | 🟡 Legacy |
Note: For complete training checkpoints (with optimizer states and discriminators), see docs/training.md.
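As a quick way to double-check each model's capacity against the table, you can count the decoded message bits (the message occupies `preds[..., 1:]`, as in the quickstart). The model names below are assumed to match those accepted by `videoseal.load`:

```python
import torch
import videoseal

# Model names assumed to match videoseal.load (see the quickstart above).
for name in ["videoseal", "pixelseal", "chunkyseal"]:
    model = videoseal.load(name)
    preds = model.detect(torch.rand(1, 3, 256, 256))["preds"]
    # The first entry is not part of the message; bits are preds[..., 1:].
    print(f"{name}: {preds.shape[-1] - 1} bits")
```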
We do not redistribute third-party models, so you have to download them manually; see docs/baselines.md for a guide.
We provide a guide on how to check and install VMAF at docs/vmaf.md.
- notebooks/image_inference.ipynb
- notebooks/video_inference.ipynb
- notebooks/video_inference_streaming.ipynb: optimized for lower RAM usage
To watermark both the audio and video tracks of a video file. It loads the full video into memory, so it is not suitable for long videos.
Example:
python inference_av.py --input assets/videos/1.mp4 --output_dir outputs/
python inference_av.py --detect --input outputs/1.mp4

To watermark a video file in streaming mode. It loads the video clip by clip, so it is suitable for long videos, even on laptops.
Example:
python inference_streaming.py --input assets/videos/1.mp4 --output_dir outputs/

This will output the watermarked video in outputs/1.mp4 and the binary message in outputs/1.txt.
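To read the saved message back, a sketch like the following works if the file stores the bits as a plain 0/1 string (inspect outputs/1.txt to confirm the exact format):

```python
# Assumes outputs/1.txt stores the message as a string of 0s and 1s.
with open("outputs/1.txt") as f:
    bits = [int(c) for c in f.read() if c in "01"]
print(f"Recovered a {len(bits)}-bit message")
```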
To run full evaluation of models and baselines.
Example to evaluate a trained model:
python -m videoseal.evals.full \
--checkpoint /path/to/videoseal/checkpoint.pth \

or, to run a given baseline:
python -m videoseal.evals.full \
--checkpoint baseline/wam \

This should save a file called metrics.csv with image/video imperceptibility metrics and the robustness to each augmentation (you can remove some augmentations to make the evaluation faster).
For instance, running the eval script for the default videoseal model on high-resolution videos from the SA-V dataset should give metrics similar to sav_256b_metrics.
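To inspect the results, the CSV can be loaded with pandas; the column names below are illustrative guesses, so check the headers your run actually produces:

```python
import pandas as pd

df = pd.read_csv("metrics.csv")
print(df.columns.tolist())  # imperceptibility metrics + one column per augmentation

# Column names are illustrative: inspect the CSV for the exact headers.
print(df.filter(like="bit_acc").mean())  # e.g. average bit accuracy per augmentation
```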
We provide training code to reproduce our models or train your own. This includes image and video training (we recommend training on images first, even if you ultimately target video). See docs/training.md for detailed instructions on data preparation, training commands, and pre-trained model checkpoints.
Here are some important parameters for the models:
- scaling_w: controls the global watermark strength (default 0.2). Higher values increase robustness against attacks but make the watermark more visible; lower values improve imperceptibility.
- attenuation: enables Just Noticeable Difference (JND) masking. The JND model builds a heatmap that is high in textured areas and low elsewhere, letting the model hide stronger watermarks in textured regions while preserving smooth ones. By default the videoseal_1.0 model uses a JND heatmap (the one in modules/jnd.py).
You can also modify some model attributes after loading.
# Example: updating parameters on an already loaded model
model.blender.scaling_w = 0.4  # Increase strength (more robust)
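As a minimal sketch of this trade-off, using only the embed API from the quickstart and the attribute shown above, you can sweep a few scaling_w values and measure the PSNR of the watermarked image (higher PSNR means a less visible watermark):

```python
import torch
import videoseal

model = videoseal.load("videoseal")
img = torch.rand(1, 3, 256, 256)  # any RGB tensor in [0, 1]

for s in [0.1, 0.2, 0.4]:
    model.blender.scaling_w = s
    img_w = model.embed(img)["imgs_w"]
    mse = torch.mean((img_w - img) ** 2)
    psnr = 10 * torch.log10(1.0 / mse)  # pixel values are in [0, 1]
    print(f"scaling_w={s}: PSNR = {psnr.item():.1f} dB")
```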
The models are released under the MIT license.
See contributing and the code of conduct.
Pierre Fernandez, Hady Elsahar, Tomas Soucek, Sylvestre Rebuffi, Alex Mourachko
If you find this repository useful, please consider giving a star ⭐ and cite the relevant papers:
Pierre Fernandez, Hady Elsahar, I. Zeki Yalniz, Alexandre Mourachko
Demo: aidemos.meta.com/videoseal
@article{fernandez2024videoseal,
title={Video Seal: Open and Efficient Video Watermarking},
author={Fernandez, Pierre and Elsahar, Hady and Yalniz, I. Zeki and Mourachko, Alexandre},
journal={arXiv preprint arXiv:2412.09492},
year={2024}
}

Aleksandar Petrov, Pierre Fernandez, Tomáš Souček, Hady Elsahar
Despite rapid progress in deep learning-based image watermarking, the capacity of current robust methods remains limited to the scale of only a few hundred bits. This work establishes theoretical upper bounds on watermarking capacity and demonstrates ChunkySeal, which increases capacity 4× to 1024 bits while preserving image quality and robustness.
@misc{petrov2025hidebits,
title={We Can Hide More Bits: The Unused Watermarking Capacity in Theory and in Practice},
author={Aleksandar Petrov and Pierre Fernandez and Tomáš Souček and Hady Elsahar},
year={2025},
eprint={2510.12812},
archivePrefix={arXiv},
primaryClass={cs.CR},
url={https://arxiv.org/abs/2510.12812}
}

Tomáš Souček*, Pierre Fernandez*, Hady Elsahar, Sylvestre-Alvise Rebuffi, Valeriu Lacatusu, Tuan Tran, Tom Sander, Alexandre Mourachko
This work introduces adversarial-only training that eliminates unreliable perceptual losses, achieving state-of-the-art robustness and imperceptibility. PixelSeal addresses optimization instability and resolution scaling challenges through a three-stage training schedule and JND-based attenuation.
@article{soucek2025pixelseal,
title={Pixel Seal: Adversarial-only Training for Invisible Image and Video Watermarking},
author={Souček, Tomáš and Fernandez, Pierre and Elsahar, Hady and Rebuffi, Sylvestre-Alvise and Lacatusu, Valeriu and Tran, Tuan and Sander, Tom and Mourachko, Alexandre},
year={2025}
}

Tomáš Souček, Sylvestre-Alvise Rebuffi, Pierre Fernandez, Nikola Jovanović, Hady Elsahar, Valeriu Lacatusu, Tuan Tran, Alexandre Mourachko
NeurIPS 2025 Spotlight 🏅 | Virtual Site
@article{soucek2025wmforger,
title={Transferable Black-Box One-Shot Forging of Watermarks via Image Preference Models},
author={Souček, Tomáš and Rebuffi, Sylvestre-Alvise and Fernandez, Pierre and Jovanović, Nikola and Elsahar, Hady and Lacatusu, Valeriu and Tran, Tuan and Mourachko, Alexandre},
journal={arXiv preprint arXiv:2510.20468},
year={2025}
}