Dear FOMO25 Organizers,

I have a question regarding the finetuning approach. Are we allowed to retrain the entire pre-trained unsupervised model (starting from a common checkpoint) for each downstream task, or must we keep the encoder frozen and only train task-specific decoders/heads on the learned representations?

I hope my question is clear.

Kind regards,
Jaume

Created by Jaume Banus Cobo (jbanusco)
Hello,

You are allowed to do full end-to-end finetuning. Looking forward to your submissions!

Best,
Asbjørn
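For readers comparing the two setups mentioned above, here is a minimal sketch of how full end-to-end finetuning differs from a frozen-encoder approach, assuming a PyTorch workflow. All names, dimensions, and the stand-in encoder are placeholders for illustration, not part of the FOMO25 codebase.

```python
import torch
import torch.nn as nn

FEATURE_DIM = 512   # assumed encoder output dimension (placeholder)
NUM_OUTPUTS = 2     # placeholder output size for a downstream task

class DownstreamModel(nn.Module):
    """Pre-trained encoder plus a task-specific head."""
    def __init__(self, encoder: nn.Module, freeze_encoder: bool = False):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(FEATURE_DIM, NUM_OUTPUTS)
        if freeze_encoder:
            # Frozen-encoder variant: only the head receives gradients.
            for p in self.encoder.parameters():
                p.requires_grad = False

    def forward(self, x):
        return self.head(self.encoder(x))

# Stand-in encoder for illustration; in practice this would be the
# pre-trained unsupervised model loaded from the common checkpoint.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, FEATURE_DIM), nn.ReLU())

# Full end-to-end finetuning (allowed, per the reply above): every parameter,
# encoder included, stays trainable for each downstream task.
full_ft = DownstreamModel(encoder, freeze_encoder=False)
optimizer = torch.optim.AdamW(
    (p for p in full_ft.parameters() if p.requires_grad), lr=1e-4
)
```

Passing `freeze_encoder=True` instead would reproduce the frozen-encoder setup from the question, where only the decoders/heads are trained on the learned representations.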
