🤗 Layout Evaluation Metrics by Huggingface Evaluate

A collection of metrics for evaluating layout generation that can be easily used with 🤗 Hugging Face Evaluate.

| 📊 Metric | 🤗 Space | 📝 Paper |
|---|---|---|
| FID | creative-graphic-design/layout-generative-model-scores | [Heusel+ NeurIPS'17], [Naeem+ ICML'20] |
| Max. IoU | creative-graphic-design/layout-maximum-iou | [Kikuchi+ ACMMM'21] |
| Avg. IoU | creative-graphic-design/layout-average-iou | [Arroyo+ CVPR'21], [Kong+ ECCV'22] |
| Alignment | creative-graphic-design/layout-alignment | [Lee+ ECCV'20], [Li+ TVCG'21], [Kikuchi+ ACMMM'21] |
| Overlap | creative-graphic-design/layout-overlap | [Li+ ICLR'19], [Li+ TVCG'21], [Kikuchi+ ACMMM'21] |
| Validity | | [Hsu+ CVPR'23] |
| Occlusion | | |
| Overlap | | |
| Overlay | | |
| Underlay Effectiveness | | |
| Unreadability | | |
| Non-Alignment | | |

Usage

- Install the `evaluate` library

```shell
pip install evaluate
```

- Load the layout metric and then compute the score

```python
import evaluate
import numpy as np

# Load the evaluation metric named "creative-graphic-design/layout-alignment"
alignment_score = evaluate.load("creative-graphic-design/layout-alignment")

# `batch_bbox` is an array of shape (batch_size, max_num_elements, num_coordinates)
# and `batch_mask` is a boolean array of shape (batch_size, max_num_elements).
batch_bbox = np.random.rand(512, 25, 4)
# Note that entries corresponding to padded elements must be set to `False`
batch_mask = np.full((512, 25), fill_value=True)

# Add the batch of bboxes and masks to the metric
alignment_score.add_batch(batch_bbox=batch_bbox, batch_mask=batch_mask)
# Perform the computation of the evaluation metric
alignment_score.compute()
```
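When layouts have varying numbers of elements, they must be padded to `max_num_elements` before batching, with the mask marking which slots hold real elements. The helper below is a hypothetical sketch of that padding convention in plain NumPy; `pad_layouts` is not part of this library.

```python
import numpy as np

def pad_layouts(layouts, max_num_elements=25):
    """Pad variable-length layouts to a fixed size and build a boolean mask.

    `layouts` is a list of (num_elements, 4) arrays of bbox coordinates.
    Returns `batch_bbox` of shape (batch_size, max_num_elements, 4) and
    `batch_mask` of shape (batch_size, max_num_elements); padded slots
    hold zero bboxes and have their mask entries set to False.
    """
    batch_size = len(layouts)
    batch_bbox = np.zeros((batch_size, max_num_elements, 4))
    batch_mask = np.zeros((batch_size, max_num_elements), dtype=bool)
    for i, bboxes in enumerate(layouts):
        n = len(bboxes)
        batch_bbox[i, :n] = bboxes
        batch_mask[i, :n] = True
    return batch_bbox, batch_mask

# Two layouts with 3 and 5 elements, padded to a common size
layouts = [np.random.rand(3, 4), np.random.rand(5, 4)]
batch_bbox, batch_mask = pad_layouts(layouts)
```

The resulting `batch_bbox` and `batch_mask` can then be passed to `add_batch` as in the example above.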

Reference
