Test set used for evaluation.

#2
by ZhengPeng7 - opened

Hi, BRIAAI team. Congratulations on your excellent work!
Thanks for citing my BiRefNet. I'm really glad to see that it can help.

I'm curious about one thing. What is the test set you used for the evaluation?

Thanks in advance for the explanation!

Hi Peng Zheng,

Thanks for getting in touch! We truly value the foundation you’ve provided, which has played a significant role in our progress.
While we’d love to share the full benchmark dataset, we're unable to do so as it includes proprietary data. However, we're more than happy to provide the dataset distributions.
Please don’t hesitate to let us know if there’s anything else we can assist with.

Best regards,
Or
(attached image: dataset distributions)

Negev900 changed discussion status to closed
Negev900 changed discussion status to open
Negev900 changed discussion status to closed

Thanks for the reply. Have you used any publicly available datasets for evaluation? For example, DIS-VD in DIS5K or P3M-500-NP in P3M?


While we unfortunately cannot publish the benchmark on which we developed the model, out of respect for copyright, Or has provided as much information about the benchmark as possible. The attached GitHub repository [GitHub Link: https://github.com/Efrat-Taig/RMBG-2.0]
contains a script that allows anyone to easily create a benchmark for their specific use case.
Additionally, I’ve prepared another benchmark for comparison. Again, this is not the original benchmark; it is for a specific use case.
I've also created a comparison script between our previous model and the new model:
https://github.com/Efrat-Taig/RMBG-2.0/blob/main/compare_bria_models.py
I haven't yet compared it to BiRefNet; it would be great if someone from the community could do so.
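For anyone from the community who wants to run such a comparison, here is a minimal sketch of one way to score two background-removal models against ground-truth masks. This is not the metric used in `compare_bria_models.py` (the thread does not specify one); it assumes per-image IoU on binarized alpha masks, and all function names are illustrative.

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray, thresh: float = 0.5) -> float:
    """Intersection-over-union between two alpha mattes, binarized at `thresh`."""
    p = pred >= thresh
    g = gt >= thresh
    union = np.logical_or(p, g).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(p, g).sum() / union)

def compare_models(preds_a, preds_b, gts):
    """Mean IoU of each model's predicted masks against the ground truth."""
    iou_a = float(np.mean([mask_iou(p, g) for p, g in zip(preds_a, gts)]))
    iou_b = float(np.mean([mask_iou(p, g) for p, g in zip(preds_b, gts)]))
    return iou_a, iou_b
```

One could feed this the saved mask outputs of RMBG-1.4, RMBG-2.0, and BiRefNet on the same image set to get a rough side-by-side number.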

Negev900 changed discussion status to open

I know... Thanks for your detailed explanation!

We can also discuss this in our Discord community: https://discord.gg/Nxe9YW9zHS

Thanks for the details about the test set! I have a few questions:

  1. Is the test set fully generated using the approach from https://github.com/Efrat-Taig/RMBG-2.0, or does it also include real images?
  2. What's the total size of your dataset?
  3. Also, the Discord link seems broken?

Many thanks :)

Thanks for the quick response and the Discord link!
I couldn't find the actual size of the full dataset; could you provide the count?
Also, who did the voting to get this graph?

(attached screenshot: voting results graph)

origubany changed discussion status to closed
origubany changed discussion status to open
BRIA AI org

Hi @tdurbor, the dataset contains a few hundred images.
Voting was performed manually.

origubany changed discussion status to closed

Hi @origubany, thanks for the answer. I'm curious about the details of the voting: who voted, and how were the scores calculated?

BRIA AI org

The voting was based on blind testing conducted by design and art students, who scored our benchmark results.

origubany changed discussion status to open

how many votes per image were you able to collect?

BRIA AI org

3 votes per image.
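With three blind votes per image, a per-image majority tally is the natural way to aggregate into the kind of preference graph shown above. The thread does not describe Bria's actual aggregation, so the following is only a hypothetical sketch; the function name and vote encoding are assumptions.

```python
from collections import Counter

def tally(votes_per_image):
    """votes_per_image: one list of 3 votes per image, each vote a model name.
    Returns each model's share of the per-image majority wins."""
    wins = Counter()
    for votes in votes_per_image:
        winner, count = Counter(votes).most_common(1)[0]
        if count >= 2:  # a majority among 3 votes; ties (all different) are skipped
            wins[winner] += 1
    total = sum(wins.values())
    if total == 0:
        return {}
    return {model: n / total for model, n in wins.items()}
```

For instance, if model A wins the majority on two of three images and model B on one, the shares come out to roughly 67% vs. 33%.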

BRIA AI org

@tdurbor you are more than welcome to join our Discord community channel https://discord.gg/CCNYhjKT to get the latest updates and insights.

origubany changed discussion status to closed
