I ran my inference code on my local machine and checked the locally reconstructed files, which look fine and contain the expected number of submissions (606 volumes in TaskR1). But when I submit my 'submission.zip' to this competition, it shows that I have only submitted 520 files to the challenge. I double-checked that my 'submission.zip' does contain 606 files. So I want to figure out how your metric computation code filters my submission files. Is it because the image quality is too bad in those 86 missing files? I roughly looked through my reconstructed files and they look fine. I'm confused now.

Created by Yipin Deng

Santinor:
Thank you very much for sharing your experience, and we appreciate your continued participation in the rankings.
Thank you for your response. I solved this issue yesterday. I printed the shapes of my submission files and of the official submission files and found their dimensions were very close, e.g. [192,84,2,2] (official) vs [192,83,2,2] (mine). I checked the code and finally figured out the reason: Python's 'round' function behaves differently from MATLAB's. Python 3 rounds halves to the nearest even integer (banker's rounding), while MATLAB rounds halves away from zero. When I changed 'round((img.shape[-2] - crop_shape[0]) / 2)' to 'int((img.shape[-2] - crop_shape[0]) / 2 + 0.5)', the problem was solved. So it was just a 'round' function issue. I'm gonna cry because I spent almost two days fixing this.
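In case it helps others, here is a minimal sketch of the rounding mismatch; the dimensions below are made up for illustration, only the rounding behaviour matters:

```python
def matlab_round(x):
    """Round half away from zero, as MATLAB's round() does (for x >= 0)."""
    return int(x + 0.5)

# Python 3's built-in round() uses banker's rounding (round half to even):
assert round(12.5) == 12          # tie goes to the nearest even integer
assert matlab_round(12.5) == 13   # MATLAB would return 13

# Applied to a centre-crop offset, the two conventions can differ by one:
img_height, crop_height = 109, 84         # hypothetical dimensions
offset = (img_height - crop_height) / 2   # 12.5

start_py = round(offset)         # 12 -> crop start shifted by one pixel
start_ml = matlab_round(offset)  # 13 -> matches the MATLAB-based crop
print(start_py, start_ml)
```

The difference only shows up when the size difference is odd (so the offset ends in .5), which is why only some volumes came out one pixel short.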
The detailed log for each submission can be found at https://www.synapse.org/Synapse:syn66492708. The log files are stored separately under Regular-Task1, Regular-Task2, Special-Task1 and Special-Task2, and they are named {submission-id}.zip. You can find the submission-id on the leaderboard: the first column, id, is the submission-id. The reason a score is missing is given in the .xlsx files inside {submission-id}.zip.
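A minimal sketch for browsing that log folder programmatically with the synapseclient package (pip install synapseclient); the subfolder layout follows the description above, and which child is a folder vs. a file depends on how the project is organised:

```python
import synapseclient

syn = synapseclient.Synapse()
syn.login()  # uses cached credentials from ~/.synapseConfig

# List the contents of the log folder (syn66492708); each child dict
# carries at least an 'id' and 'name'.
for child in syn.getChildren('syn66492708'):
    print(child['id'], child['name'])

# Once you have located your {submission-id}.zip, download it with e.g.:
# syn.get('syn<file-id>', downloadLocation='.')
```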
