Questions about OneRestore trained on our CDD-11 for Real Scenes.
#13
Comments
Thanks for your attention to our work. The pre-trained model named OneRestore trained on our CDD-11 for Real Scenes (onerestore_real.tar) was also trained using CDD-11. It doesn't require a special dataset or a new loss function. It follows the same training settings described in Sec. 5.1 of the paper.
Thanks for your reply, but I have a further question. What is the difference between onerestore_real.tar and onerestore_cdd-11.tar?
Same question here.
Thank you for your question. Both models are trained under the same configuration. The difference is that for the first model we selected the checkpoint with the best indicator for the experimental comparison, while the other one is the model from the last batch, i.e., the end of training.
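To make that distinction concrete, here is a minimal sketch, not the authors' actual training code, of how a single run can produce both a "best-indicator" checkpoint and a "last-epoch" checkpoint. Everything here is a hypothetical placeholder: the `validate` routine, the PSNR indicator, the L1 loss, and the output file names.

```python
import torch

def validate(model, val_loader):
    """Hypothetical validation routine: mean PSNR over the loader."""
    model.eval()
    total, n = 0.0, 0
    with torch.no_grad():
        for x, y in val_loader:
            mse = torch.mean((model(x) - y) ** 2).clamp_min(1e-10)
            total += 10.0 * torch.log10(1.0 / mse).item()
            n += 1
    return total / max(n, 1)

def train(model, optimizer, train_loader, val_loader, epochs):
    """One run that yields both a best-indicator and a last-epoch checkpoint."""
    best_psnr = float("-inf")
    for epoch in range(epochs):
        model.train()
        for x, y in train_loader:
            optimizer.zero_grad()
            loss = torch.nn.functional.l1_loss(model(x), y)  # placeholder loss
            loss.backward()
            optimizer.step()

        psnr = validate(model, val_loader)
        if psnr > best_psnr:
            # checkpoint with the best validation indicator
            best_psnr = psnr
            torch.save(model.state_dict(), "onerestore_best.pth")

    # checkpoint from the final epoch, kept regardless of its validation score
    torch.save(model.state_dict(), "onerestore_last.pth")
```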
So, there is no practical difference between onerestore_real.tar and onerestore_cdd-11.tar?
In practice, there is no difference, but since we used different model weights for the synthetic and real data in the paper, we release both weights to ensure that the implementation results match our paper.
Could you explain why you chose to use the parameters of the final epoch for the real data?
There is no special reason; we publish this just to allow researchers to reproduce the same results as in our paper.
Well, let me describe my query in Chinese... According to your explanation, OneRestore does not use any additional training scheme, extra degradations, or new datasets; it simply uses the last-epoch weights to handle images of real degraded scenes. So, what is the purpose of handling real scenes this way?
I guess the authors just did this in the implementation without a clear purpose. It may be okay to use the same weights for real scenes. Have you tested onerestore_cdd-11.tar for real scenes?
You are right. As I mentioned before, these two checkpoints have no significant difference in model training. It does not mean that onerestore_cdd-11.tar cannot process real images. We just want researchers to be able to produce the same results as in our paper. Please don't dwell on this.
This implementation confuses me a lot. The authors just do this but don't clearly state the reason for using this setting in their paper or repository description, which makes me doubt the results of this paper. We may never know the authors' intention in conducting the test for real data in such a way.
If you have any doubts, you can just do the experiment. I have made all the checkpoints public.
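For anyone who wants to run that comparison, here is a rough sketch of the experiment. The checkpoint layout, the model constructor, and an image-only forward pass are assumptions, not the repository's documented API; the actual model may require additional inputs such as a degradation descriptor.

```python
import torch
from torchvision.io import read_image
from torchvision.utils import save_image

def restore(model, ckpt_path, image_path, out_path):
    """Load a checkpoint, restore one real degraded image, save the result."""
    state = torch.load(ckpt_path, map_location="cpu")
    model.load_state_dict(state.get("state_dict", state))  # handle either layout
    model.eval()

    x = read_image(image_path).float().unsqueeze(0) / 255.0  # 1x3xHxW in [0, 1]
    with torch.no_grad():
        y = model(x).clamp(0.0, 1.0)  # assumed image-only forward pass
    save_image(y, out_path)

# Hypothetical usage: `OneRestore()` stands in for however the repository
# builds its model; compare the two outputs visually or with no-reference metrics.
# model = OneRestore()
# restore(model, "onerestore_cdd-11.tar", "real_scene.png", "out_cdd11.png")
# restore(model, "onerestore_real.tar", "real_scene.png", "out_real.png")
```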
Dear authors of OneRestore, it is the authors' responsibility to fully explain the settings of your paper and the intention behind them. Your "strongly worded" feedback will cause more researchers to misunderstand your work. You keep ducking this issue for no reason.
Maybe you just conducted the wrong test, or maybe there is a scientific research topic behind this: perhaps extended training time, appropriate parameter selection, or stacking can increase the generalization of image restoration models under complicated and heterogeneous real degradations? As readers, we cannot know why, and it is difficult to fully replicate your experiments. We have run our own experiments. We desire an explanation, even if it is due to your errors or omissions; an explanation is preferable to evading the situation. Perhaps you could wait until you are emotionally stable before answering the questions above.
In the README, you mentioned a pre-trained model named "OneRestore trained on our CDD-11 for Real Scenes". However, your paper didn't provide details of the training of this model. What does "for Real Scenes" mean? Does it require a special dataset or a new loss function? How is it different from the training settings described in Sec. 5.1 of the paper?