
Custom data, identifying borders #8


Open
TerezaSikova opened this issue Feb 25, 2025 · 3 comments


@TerezaSikova

Hello, I am a student at the Faculty of Informatics, Masaryk University (MUNI), Brno (Czech Republic), and I am working on a pipeline to restore a broken fresco at our university. Your project seems like a perfect fit!
I successfully ran the demo on the Breaking Bad dataset and got great results. However, I wasn't able to replicate similar results with other datasets (not even with another piece from Breaking Bad, though I suspect that issue was caused by the object being a thin, hollow vase rather than a solid one, as suggested in Issue #4). Specifically, when experimenting with the RePAIR dataset (which, like our university's data, consists of fresco fragments), I noticed that the fragments never align correctly.
I have read the existing issues related to custom data and tried changing the parameters as suggested. That did improve the borders to some extent, but they remain disconnected, preventing proper segmentation. I am also testing our own dataset and am encountering similar issues: the borders are either disconnected or cover too much of the fragment.
Could you please provide some guidance on how to proceed? I'm unsure whether the issue is with my data (if so, would digitizing it in a different way help?) or whether I need to modify the demo code beyond changing the path to my data (and tuning parameters).
If it's just a matter of parameters, I'd appreciate an explanation of the key ones and their expected ranges. Also, do the `to` and `tb` parameters correspond to λ₀ and λ₂ in your paper?

Thank you for your time and assistance!

Unfortunately, I don’t currently have two matching scanned fragments from our university’s dataset. I hope this won't be a problem since the issue is with breaking curves rather than registration (I think). If needed, I can provide more suitable fragments.

repair_objects.zip
muni_objects.zip

@freerafiki
Member

Hi, thanks for your interest and for opening the issue. I am sorry to hear that the results were not as good as you expected. I just took a look at your data, and it looks similar enough that you should not need to change the parameters too much. I agree with you that the issue is (most of the time, not only in your case) with the breaking curves. The algorithm we used gave excellent results, but as we gained experience with more datasets over time, it turned out to be sometimes too sensitive to different data.

For the questions:

  • I do not think you need to change the digitization; the data already looks fine and we can process it.
  • No, you only need to change the path and the parameters; that part is correct.
  • Regarding the parameters, no: the $\lambda_i$ are the eigenvalues of the correlation matrix (computed from each point's neighbours), while the parameters in the code are linked to the graph creation, pruning, and dilation (see the sketch after this list). I will add a section to the readme with a better explanation; you are right, it deserves one. Sorry about that.
  • No, we do not need matching fragments; we can try to match these!
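
To make the eigenvalue point more concrete, here is a minimal sketch (not this repository's code) of how the per-point $\lambda_i$ can be computed with NumPy and Open3D, assuming a fixed k-NN neighbourhood size:

```python
# Minimal sketch (not this repository's code): per-point eigenvalues
# lambda_0 <= lambda_1 <= lambda_2 computed from each point's k-NN neighbourhood.
import numpy as np
import open3d as o3d

def local_eigenvalues(pcd: o3d.geometry.PointCloud, k: int = 30) -> np.ndarray:
    """Return an (N, 3) array of ascending covariance eigenvalues per point."""
    pts = np.asarray(pcd.points)
    tree = o3d.geometry.KDTreeFlann(pcd)
    eigvals = np.empty((len(pts), 3))
    for i, p in enumerate(pts):
        _, idx, _ = tree.search_knn_vector_3d(p, k)
        cov = np.cov(pts[np.asarray(idx)].T)           # 3x3 neighbourhood covariance
        eigvals[i] = np.sort(np.linalg.eigvalsh(cov))  # lambda_0 <= lambda_1 <= lambda_2
    return eigvals
```

In edge-detection methods of this kind, points where $\lambda_0$ is large relative to the sum of the three eigenvalues (high surface variation) are typical breaking-curve candidates; the code parameters then act on the graph built from those candidate points.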

More generally, since the framework is modular, I think the best way to improve it would be to plug in a new breaking-curve detection method that generalizes better to any kind of data. For now we are trying to fix the issue with the current method, but for the future I think that would be the best extension.
We had some discussions about this, but we have not reached a final, concrete implementation.
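
To give an idea of what such an extension could look like, here is a hypothetical sketch of a plug-in interface; the names below are illustrative and do not exist in the repository, and the baseline detector reuses the `local_eigenvalues` helper from the sketch above:

```python
# Hypothetical plug-in interface for breaking-curve detection; the names are
# illustrative and do not exist in this repository.
from typing import Protocol
import numpy as np
import open3d as o3d

class BreakingCurveDetector(Protocol):
    def detect(self, pcd: o3d.geometry.PointCloud) -> np.ndarray:
        """Return the indices of points lying on breaking curves."""
        ...

class EigenvalueDetector:
    """Baseline: threshold the surface variation lambda_0 / (lambda_0 + lambda_1 + lambda_2)."""
    def __init__(self, k: int = 30, threshold: float = 0.05):
        self.k, self.threshold = k, threshold

    def detect(self, pcd: o3d.geometry.PointCloud) -> np.ndarray:
        lam = local_eigenvalues(pcd, self.k)   # helper from the previous sketch
        variation = lam[:, 0] / (lam.sum(axis=1) + 1e-12)
        return np.flatnonzero(variation > self.threshold)
```

The downstream steps (graph creation, pruning, dilation) would consume the returned indices regardless of which detector produced them, so a learning-based detector could be swapped in without touching the rest of the pipeline.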
Last thing: do you have colors in your data? From the files you shared it seems not, but I ask out of curiosity: we do have colors, although they are not used at the moment, and I am not sure whether that is a limitation shared by other projects or only ours.

@freerafiki
Member

I have added a more detailed explanation in the configs file and linked it from the main readme in the configurations-and-parameters section; I hope this helps at least for that part. As for the right parameter values, you can try tuning them, and when I have more time I will dig a bit deeper to see if I manage to improve them!
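
If it helps, a small sweep over a couple of values is usually the quickest way to see how sensitive the result is; in the sketch below the keyword names and the `process_fragment` call are placeholders, not the actual names used in the configs:

```python
# Hypothetical parameter sweep; `process_fragment` and the keyword names are
# placeholders, not the actual API of this repository.
import itertools

def sweep(process_fragment, mesh_path: str) -> dict:
    results = {}
    for border_thr, dilation in itertools.product([0.03, 0.05, 0.08], [1, 2, 3]):
        segments = process_fragment(mesh_path,
                                    border_threshold=border_thr,
                                    dilation=dilation)
        results[(border_thr, dilation)] = len(segments)
    return results  # check which combination gives a plausible number of segments
```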

@TerezaSikova
Author

Hello, thank you very much for your kind reply and for explaining the parameters in detail! I really appreciate your insights.

I am still working on tuning the parameters, and recently I noticed that the results from prepare_challenge.py can vary significantly even when run with the same input (I haven't looked into it in detail yet, but I think it may be due to the Open3D methods?). Some runs produce much clearer breaking curves and segmentation than others (e.g., in one case I got 3 segments, while in another I got 19). I'm not sure if this is expected behavior or an anomaly in my setup. I'll look into it more and let you know if I find anything noteworthy.
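
For reference, this is roughly how I fix the seeds before each run to check whether the variability comes from random sampling; `run_segmentation` is just a placeholder for whatever prepare_challenge.py does per fragment, not a function from the repository:

```python
# Quick reproducibility check; `run_segmentation` is a placeholder for the
# per-fragment processing in prepare_challenge.py, not an actual function here.
import random
import numpy as np
import open3d as o3d

def seeded_run(run_segmentation, mesh_path: str, seed: int = 0):
    random.seed(seed)
    np.random.seed(seed)
    # Newer Open3D releases expose a global seed; skip the call if unavailable.
    o3d_random = getattr(o3d.utility, "random", None)
    if o3d_random is not None and hasattr(o3d_random, "seed"):
        o3d_random.seed(seed)
    return run_segmentation(mesh_path)
```

If two runs with the same seed still give, say, 3 vs. 19 segments, the variability is probably not coming (only) from random sampling inside Open3D.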

Regarding your question about colors in the data: yes, the dataset is painted, with each fragment at least partially colored (though the paint is usually quite damaged). The color is not currently used, but we are interested in exploring ways to use the motifs in the future.
