I am concerned that the limited compute resources (4 cores, 1 hour execution time) in the competition will necessitate pre-training models on our proprietary data, where we are computationally unconstrained. This seems counter to the spirit of the challenge because we may not be able to share the underlying data. Will there be separate leaderboards for open-data and closed-data models? Or am I misunderstanding the training paradigm described in the webinar, and our models must be trained on open data?
Created by Ivan Brugere

The compute resources listed (4 cores, 1 hour execution time) are for the Open Phase, where a synthetic dataset smaller than the challenge dataset is used (see [first webinar slides](https://www.synapse.org/#!Synapse:syn20815694)). We are still in the process of identifying the maximum compute time for models running on the UW data. This should be somewhere between 12 and 24 hours (to be confirmed).
> I am concerned the limited compute resources in the competition will necessitate pre-trained models on our proprietary data where we are computationally unconstrained.
Pre-training on private data prior to submission is what participants usually do in other DREAM challenges.
> Will there be separate leaderboards for open-data and closed-data models?
There is only one leaderboard, which ranks the performance of the trained models on the UW evaluation data. The UW data are private.
To be eligible as a best performer and for the associated awards (e.g., manuscript authorship), a team will need to make its submitted Docker image public and publicly release a Synapse project that includes the source code and documentation of the submission. Teams that do not meet these requirements will not appear in the final leaderboard of the challenge.