Add section for comparing your implementations with original papers! #171

Open
ali-masoudi opened this issue Aug 12, 2019 · 1 comment

@ali-masoudi

Hi,
I've tested a few of your implementations and compared the results with the original papers, but the results were not even close to the original ones. It would be good to have a section where we can see your implementation results on benchmark datasets (PASCAL VOC, Cityscapes), so we know whether we are doing something wrong or not!
Thanks
@qubvel

qubvel (Owner) commented Aug 14, 2019

Hi @ali-masoudi
For now I am not planning to train models on such datasets.
The final results depend on a lot of different factors:

  • model architecture
  • image preprocessing
  • data sampling
  • training hyperparameters such as the optimizer, LR scheduling, and losses
  • etc.

Even if these parameters are described in the paper, that does not guarantee you will reproduce the result.

I am not focused on the training pipeline. I would like to give you an easy, flexible tool for building your own models for your experiments. That is why there are a lot of parameters that can be passed at model initialization (a sketch follows below).
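A minimal sketch of that flexibility, assuming the library in question is `segmentation_models` (Keras) and its `sm.Unet` constructor; the backbone, class count, and training settings shown here are illustrative placeholders, not values used to reproduce any paper result.

```python
# Minimal sketch (assumption: qubvel/segmentation_models with a Keras backend).
# The values below are illustrative only; they do not reproduce any paper result.
import segmentation_models as sm

model = sm.Unet(
    backbone_name='resnet34',    # encoder architecture
    encoder_weights='imagenet',  # pretrained encoder weights, or None to train from scratch
    classes=21,                  # e.g. 21 classes for PASCAL VOC
    activation='softmax',        # output activation
)

# Optimizer, learning-rate schedule, loss, and augmentation are left to the user;
# these are exactly the factors listed above that drive the final benchmark score.
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```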

If your goal is to reproduce the results, I would be happy to answer your questions about the model architectures. Maybe you will be able to find better parameters or bugs in the architectures; if so, please let me know by opening an issue or creating a PR.
