Hi,
I would also like to thank the organizers for their work on this challenge. It has been very valuable for the research community, demonstrating the potential of ultra-low-field MRI scans, and it has led to many great solutions being developed. The challenge was therefore a great success.
Unfortunately, the recent change in the score evaluation understandably frustrates many participants who developed and optimized their solutions according to the scoring that was communicated throughout the challenge. I understand that the new scoring system, which normalizes the PSNR score to [0,1] and weights SSIM more heavily, is more aligned with the original intention of the challenge. However, introducing this normalization after the final submissions is rather uncommon and unfair, as other participants have also pointed out. Going through with the post-hoc change in scoring will negatively affect the challenge experience for participants who developed excellent solutions and who would have won the challenge, or ranked higher, if the original score had been used.
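To make concrete why such a re-weighting can flip the leaderboard, here is a minimal Python sketch. The exact formulas are not stated in this thread, so the score definitions, the 40 dB normalization bound, and the 0.7 SSIM weight below are purely hypothetical assumptions for illustration:

    # Hypothetical illustration of how re-weighting can reorder a ranking.
    # The score formulas, the 40 dB bound, and the 0.7 weight are assumptions,
    # NOT the organizers' actual definitions.

    def original_score(psnr: float, ssim: float) -> float:
        """Assumed original score: raw PSNR dominates because of its scale."""
        return psnr + ssim  # PSNR in dB (~20-40) dwarfs SSIM (0-1)

    def revised_score(psnr: float, ssim: float,
                      max_psnr: float = 40.0, w_ssim: float = 0.7) -> float:
        """Assumed revised score: PSNR normalized to [0,1], SSIM weighted higher."""
        psnr_norm = min(psnr / max_psnr, 1.0)
        return w_ssim * ssim + (1.0 - w_ssim) * psnr_norm

    # Two hypothetical submissions: A optimized for PSNR, B for SSIM.
    a = (34.0, 0.80)  # (PSNR in dB, SSIM)
    b = (31.0, 0.92)

    print(original_score(*a) > original_score(*b))  # True: A wins under the old score
    print(revised_score(*a) > revised_score(*b))    # False: B wins under the new score

Under these assumed numbers, a submission tuned for PSNR beats an SSIM-focused one on the old score but loses on the new one, which is exactly the kind of reordering participants are now facing.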
While my team did benefit from the change in the evaluation metric, I feel that a solution that all participants can agree upon needs to be found. Would it be an idea to split the challenge into two tracks? A main track honoring the original evaluation metric with its strong focus on PSNR, and a side track with a stronger focus on SSIM (i.e., the new metric), as the challenge organizers originally intended?
Again, I think the challenge was very successful and is a great contribution to the research field! It is therefore very important that all participants feel treated fairly and remain eager to develop their algorithms further in a potential next edition of the challenge!
Best regards,
Nikolas