Hi!
Thank you for your great effort so far!
I was wondering, are we getting the final scores prior to the final write up? And if so, approximately when?
Thank you very much!
Created by Martin Guerrero @martinguerrero89
They have no choice but to collect all write-ups before releasing the scores, @martinguerrero89; otherwise they have no leverage to acquire write-ups, and nobody would submit.
But indeed I can see this becoming a real problem: participants now have no leverage against the organizers to actually obtain a score. In some challenges, scores have not been released for over 1.5 years, which effectively allowed the organizers to change the data and change the winners after the fact, especially with Docker submissions.
I think this is inappropriate and I wish this were not happening right now in this challenge. @saezrodriguez
@thomas.yu In future challenges, I suggest implementing something like a course-evaluation system, where you can only see your performance after submitting your evaluation of others. That is, there is an intermediate agent to whom both scores and write-ups can be submitted at any time. Only after a participant submits a satisfactory write-up can they see their scores; conversely, only after the organizers freeze and release the scores to the agent may they start reading participants' write-ups. I think this is necessary to implement, because:
1. In quite a few recent challenges, the organizers changed the data and the scoring after reading the top performer's write-up or after the winner's talk, which effectively lets them produce any ranking they want. I haven't been particularly affected by this (maybe one trivial challenge out of the sixteen I entered), but I can see that many other teams were. And yes, for my teammate in that challenge, it was his only challenge, and his only win during graduate school could be removed after they changed the simulation data.
2. It ensures all write-ups are collected as soon as possible, so you don't need to chase people around for them.
3. It is very easy to implement as a permission setting, e.g. adding anyone who has submitted a write-up to a "write-up submitted" group that can see the score table.
4. It encourages transparency and honesty in scoring, and careful curation of the data on the organizers' side.

Just to be clear, is there no lane we can use to score our models since the close of the final round?
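For what it's worth, the escrow idea above could be sketched in a few lines. This is a minimal illustration, not an actual platform feature; the class and method names (`EscrowAgent`, `freeze_scores`, etc.) are hypothetical:

```python
class EscrowAgent:
    """Hypothetical intermediary that holds write-ups and scores,
    releasing each side's material only once the other side has committed."""

    def __init__(self):
        self.writeups = {}       # participant -> write-up text
        self.scores = {}         # participant -> score
        self.scores_frozen = False

    def submit_writeup(self, participant, text):
        # participants may deposit write-ups at any time
        self.writeups[participant] = text

    def freeze_scores(self, scores):
        # organizers commit the final scores once; no later changes
        if self.scores_frozen:
            raise RuntimeError("scores already frozen")
        self.scores = dict(scores)
        self.scores_frozen = True

    def get_score(self, participant):
        # a participant sees their score only after the scores are frozen
        # AND they have submitted a write-up
        if not self.scores_frozen:
            raise PermissionError("scores not yet frozen by organizers")
        if participant not in self.writeups:
            raise PermissionError("submit a write-up first")
        return self.scores[participant]

    def get_writeup(self, participant):
        # organizers may read write-ups only after freezing the scores
        if not self.scores_frozen:
            raise PermissionError("freeze scores before reading write-ups")
        return self.writeups.get(participant)
```

The point is simply that both release rules reduce to two boolean checks, so something like the "write-up submitted group" permission in point 3 would cover it.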