Hi all, I would like to reproduce the rankings for robustness and accuracy for the multi-instance segmentation task. Is there any way I can obtain the evaluation results from all the teams? Perhaps the pickle file that was generated with create_algorithm_performances.py? I'd really appreciate it if possible. Thanks in advance.

Created by jcarlos.angelesc
Dear jcarlos.angelesc, thank you for your comment. Unfortunately, we cannot publicly release the per-image results per participant due to privacy issues, so it is not possible to directly reproduce the current rankings. However, the rankings were produced with the challengeR toolkit (https://github.com/wiesenfa/challengeR) from a CSV file containing the metric values for each participant, for each image, and for each task. Feel free to check it out and use it for your own purposes. Cheers, Annika

Hi Annika, understood. Thanks for your quick response. Best regards
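For anyone trying to approximate the workflow, challengeR is an R package, but the underlying "rank per case, then aggregate" idea can be sketched in Python. The data layout, column names, and values below are invented for illustration only; the actual challenge CSV schema and challengeR's ranking options (e.g. mean vs. median rank, significance-based ranking) are not reproduced here.

```python
# Hypothetical sketch of rank-then-aggregate, as used in challenge analyses.
# All data and names are invented; this is NOT the official ranking code.
from collections import defaultdict

# Long-format metric rows: (case_id, algorithm, metric_value),
# where a higher metric (e.g. DSC) is better.
rows = [
    ("case1", "A", 0.90), ("case1", "B", 0.85), ("case1", "C", 0.70),
    ("case2", "A", 0.80), ("case2", "B", 0.88), ("case2", "C", 0.75),
]

def mean_rank(rows):
    """Rank algorithms per case (1 = best), then average ranks per algorithm."""
    per_case = defaultdict(list)
    for case, algo, value in rows:
        per_case[case].append((algo, value))
    ranks = defaultdict(list)
    for case, entries in per_case.items():
        # Sort by descending metric value; enumerate gives rank 1 to the best.
        for rank, (algo, _) in enumerate(
            sorted(entries, key=lambda e: -e[1]), start=1
        ):
            ranks[algo].append(rank)
    return {algo: sum(r) / len(r) for algo, r in ranks.items()}

print(mean_rank(rows))  # → {'A': 1.5, 'B': 1.5, 'C': 3.0}
```

A lower mean rank is better; note that ties (A and B both at 1.5 here) are exactly the kind of situation where challengeR's uncertainty analysis and alternative ranking schemes become relevant.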

How to reproduce rankings?