Hi FOMO25 organizers, I have a few questions about the submission process that I'd like to clarify:

**1. Container Structure:** The submission instructions mention submitting "a single (*.sif) file per track," but the container preparation guide shows task-specific requirements with different input/output formats. Should I:

- Create ONE container that handles all three tasks (with predict.py routing based on input arguments), or
- Submit separate containers for each task?

**2. Pretrained Model Verification:** The challenge emphasizes using "the same pretrained checkpoint" for all three fine-tuned models, which is evaluated through the unified leaderboard. However, I'm unclear on how this is technically verified during submission. Since only the inference containers are submitted (not the pretrained weights or training code), how do you ensure that:

- All three task-specific models actually derive from the same pretrained checkpoint?
- Participants aren't using different pretrained models or training from scratch for each task?

**3. Submission Limits:** How many submissions are allowed per track? Can we submit updated containers if we improve our models?

Best regards,
Pedro

Hi Pedro,

Thanks a lot for your questions, and apologies for the late response.

**1. Container Structure:** You will submit three containers, one for each task. We believe this is the most user-friendly approach, since routing within a single container can get messy, but we welcome any feedback on this matter for future iterations of the challenge.

**2. Pretrained Model Verification:** We believe it is important to allow participants to perform their own fine-tuning rather than enforcing a single predefined strategy, as the optimal fine-tuning approach depends on the specifics of the pretraining. For the final models, all participants will be required to complete a questionnaire detailing their methods and any additional data used for pretraining, which will be appended to the final paper. We also encourage participants to make their implementations publicly available, although we understand that this may not always be feasible. As with any challenge, there is always the potential for misuse, such as incorporating undeclared external data, and we fundamentally rely on the academic integrity of the participants. We are actively exploring ways to mitigate this in future iterations of the challenge, and we highly welcome any feedback in this regard.

**3. Submission Limits:** Participants may submit as many models as they wish for validation, and then decide which model is evaluated on the test set after the submission deadline. We have committed to running models on the validation set within a week, but we strive to evaluate them faster, depending on the total volume of submissions.

Best,
Asbjørn
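
For illustration, here is a minimal sketch of what a single-task `predict.py` entrypoint might look like when each of the three containers handles exactly one task, so no routing logic is needed. The flag names (`--input`, `--output`) and the placeholder inference logic are assumptions for this sketch; the actual interface and output formats are defined in the container preparation guide for each task.

```python
# predict.py -- minimal single-task inference entrypoint (sketch).
# Hypothetical: the real argument names, paths, and output formats are
# specified per task in the FOMO25 container preparation guide.
import argparse
from pathlib import Path


def run_inference(input_path: Path, output_path: Path) -> None:
    # Replace with real logic: load the fine-tuned checkpoint baked into
    # this container image, run inference on the input case, and write
    # predictions in this task's required output format.
    output_path.parent.mkdir(parents=True, exist_ok=True)
    output_path.write_text(f"prediction for {input_path.name}\n")


def main() -> None:
    parser = argparse.ArgumentParser(description="Single-task FOMO25 inference")
    parser.add_argument("--input", type=Path, required=True,
                        help="Path to the input case")
    parser.add_argument("--output", type=Path, required=True,
                        help="Path where the prediction is written")
    args = parser.parse_args()
    run_inference(args.input, args.output)


if __name__ == "__main__":
    main()
```

Under this layout, each of the three (*.sif) containers would bundle its own fine-tuned weights (all derived from the shared pretrained checkpoint) alongside a copy of this entrypoint, keeping every container's interface simple and task-specific.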
