Description
For evasive whitebox or blackbox attacks, the objective is to fool the model into predicting a different class while keeping the attack deceptive by making only small changes to the input. These changes are measured with distance metrics, for example the L1/L2 norm of the difference between the original and the perturbed input.
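As a starting point, a minimal NumPy sketch of such a distance (assuming `x` and `x_adv` are same-shape arrays, e.g. HxWxC images; the function name `lp_distance` is illustrative, not part of any existing API):

```python
import numpy as np

def lp_distance(x, x_adv, p=2):
    """L_p norm of the perturbation between an original and an adversarial input."""
    diff = (x_adv.astype(np.float64) - x.astype(np.float64)).ravel()
    if p == np.inf:
        return np.abs(diff).max()
    return np.linalg.norm(diff, ord=p)

# Example usage (arrays not defined here):
# l2 = lp_distance(x, x_adv, p=2)
# linf = lp_distance(x, x_adv, p=np.inf)
```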
Implement these metrics
- L1, L2, ..., Lk norms
- ISSM
- PSNR
- SAM
- SRE
You can find NumPy and cv2 implementations at https://github.com/up42/image-similarity-measures/blob/master/image_similarity_measures/quality_metrics.py
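For reference, a rough NumPy-only sketch of two of the listed metrics (PSNR and SAM). This is not the linked library's exact implementation: `max_p` defaults to 255 here, and SAM is returned in radians rather than degrees; adjust to match the library if exact parity is needed.

```python
import numpy as np

def psnr(org_img, pred_img, max_p=255.0):
    """Peak Signal-to-Noise Ratio (dB) between two images of equal shape."""
    mse = np.mean((org_img.astype(np.float64) - pred_img.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 20.0 * np.log10(max_p / np.sqrt(mse))

def sam(org_img, pred_img, eps=1e-12):
    """Mean Spectral Angle Mapper (radians) over all pixels of HxWxC images."""
    org = org_img.reshape(-1, org_img.shape[-1]).astype(np.float64)
    pred = pred_img.reshape(-1, pred_img.shape[-1]).astype(np.float64)
    num = np.sum(org * pred, axis=1)
    denom = np.linalg.norm(org, axis=1) * np.linalg.norm(pred, axis=1) + eps
    cos_angle = np.clip(num / denom, -1.0, 1.0)
    return float(np.mean(np.arccos(cos_angle)))
```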