
[WACV 2026] Quantifying Challenging Objects for Semantic Segmentation

NEWS: 🎉 Our paper has been accepted to WACV 2026!


arXiv paper link: https://arxiv.org/abs/2412.04243

Segment Anything Model (SAM) and similar segmentation foundation models (SFMs), such as HQ-SAM and SAM 2, have shown extensive promise in segmenting objects from a wide range of contexts unseen in training, yet they still have surprising trouble with certain types of objects, such as those with dense, tree-like structures or low textural contrast with their surroundings.

In our paper, Quantifying the Limits of Segmentation Foundation Models: Modeling Challenges in Segmenting Tree-Like and Low-Contrast Objects, we propose metrics that quantify the tree-likeness and textural contrast of objects, and we show that the ability of SFMs (SAM, SAM 2, HQ-SAM) to segment these objects correlates noticeably with these metrics (see below). This repository provides code to easily compute these metrics.

Shown below: Correlation between segmentation performance (IoU) and our proposed metrics for object tree-likeness (CPR and DoGD) and textural separability, for several segmentation foundation models, evaluated on the DIS and MOSE datasets.
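As a reference for running a similar analysis on your own data, here is a minimal sketch of such a correlation computation using SciPy. The arrays below are random placeholders, not results from the paper; in practice you would fill them with your own per-object IoUs and metric values.

import numpy as np
from scipy.stats import pearsonr, spearmanr

# placeholder data: replace with your own per-object results
rng = np.random.default_rng(0)
cprs = rng.uniform(0, 1, size=100)            # CPR value for each object
ious = 1.0 - cprs + rng.normal(0, 0.1, 100)   # SFM segmentation IoU per object

# linear and rank correlation between tree-likeness and performance
print("Pearson r:", pearsonr(cprs, ious))
print("Spearman rho:", spearmanr(cprs, ious))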

Citation

Please cite our WACV 2026 paper if you use our code or reference our work:

@inproceedings{zhang2024texturalconfusion,
      title={Quantifying the Limits of Segmentation Foundation Models: Modeling Challenges in Segmenting Tree-Like and Low-Contrast Objects},
      author={Yixin Zhang and Nicholas Konz and Kevin Kramer and Maciej A. Mazurowski},
      booktitle={IEEE Winter Conference on Applications of Computer Vision (WACV)},
      year={2026},
      url={https://arxiv.org/abs/2412.04243},
}

1) Installation

Please run pip3 install -r requirements.txt to install the required packages.

2) Usage

You can easily compute these metrics for your own segmentation masks as shown in the following example.

import torch

device = "cuda"  # or "cpu"

# load your object mask; it must be a tensor of shape (1, H, W)
object_mask = torch.load('path/to/your/mask.pt')
assert object_mask.ndim == 3, "mask must have shape (1, H, W)"
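If your mask is stored as an image file rather than a saved tensor, a sketch like the following (assuming a single-object binary PNG; the path is a placeholder) produces a tensor of the expected shape:

import numpy as np
from PIL import Image

# binarize a grayscale mask image and add the leading channel dimension
mask_np = np.array(Image.open('path/to/your/mask.png').convert('L')) > 0
object_mask = torch.from_numpy(mask_np).float().unsqueeze(0)  # (1, H, W)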

Tree-likeness Metrics

from treelikeness_metrics import get_CPR, get_DoGD
cpr = get_CPR(object_mask, device=device)
dogd = get_DoGD(object_mask, device=device)

The hyperparameters of these metrics ($r$ for CPR, and $a$ and $b$ for DoGD) can also be adjusted from their default values, as shown below.

cpr = get_CPR(object_mask, rad=7, device=device)
dogd = get_DoGD(object_mask, a=63, b=5, device=device)
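To score many objects at once, a simple loop over a directory of saved mask tensors could look like the sketch below (the directory layout and file naming are assumptions, not part of this codebase):

from pathlib import Path

results = {}
for mask_path in sorted(Path('path/to/your/masks').glob('*.pt')):
    mask = torch.load(mask_path)
    results[mask_path.name] = {
        'CPR': get_CPR(mask, device=device),
        'DoGD': get_DoGD(mask, device=device),
    }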

Textural Contrast/Separability Metrics

Note that the textural contrast/separability metrics additionally require the image that the object mask corresponds to:

import torchvision.transforms.functional as TF
from PIL import Image
from textural_contrast_metrics import TexturalMetric

# load the corresponding RGB image as a (3, H, W) tensor on the same device
img = TF.to_tensor(Image.open('path/to/your/image.png').convert('RGB')).to(device)

metric = TexturalMetric(device)
separability = metric.get_separability_score(img, object_mask)
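Putting it all together, the three metrics for a single image-mask pair can be computed in one pass; this is just a sketch reusing the variables defined above, with a sanity check that the image and mask sizes match:

# the image and mask must cover the same spatial extent
assert img.shape[-2:] == object_mask.shape[-2:], "image/mask size mismatch"

scores = {
    'CPR': get_CPR(object_mask, device=device),
    'DoGD': get_DoGD(object_mask, device=device),
    'separability': metric.get_separability_score(img, object_mask),
}
print(scores)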
