Hi,

**For the coming rounds (data):** I understand that for SC1 we need to change what we read from `evaluation_data/data_test_obs_{1:100}` to `evaluation_data/data_test_obs_{101:200}`. For SC2 and SC3, will the data still be ovary, or will it switch to breast? If it switches to breast, please do not update the data any more, since we need to train on the data before submitting and cannot retrain every time it is updated.

**For resources:**

- I feel that 4 cores with 10 GB is very limiting. The training data itself is huge, and even a very simple model uses ~8 GB, so such limits are almost impractical. Please consider increasing the memory and time thresholds; otherwise many people, myself included, who have spent a lot of energy, time, and resources on this challenge will simply walk away.
- If, for SC1, a submission could not finish running on all 100 datasets, could the scoring method just use whatever was predicted instead of crashing when it cannot find all the files?
- I have been searching for a way to limit memory usage in R during parallelization (using a FORK cluster really helps). While running in parallel (`foreach` or `parLapply`), instead of crashing when the memory limit is exceeded, is there a way to simply use fewer workers until there is enough memory? I cannot predict ahead of time how much memory will be used, so I cannot set the number of cores with `detectCores()` accordingly; I need to limit it while `foreach` or `parLapply` is running.

Thanks,
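(One workaround I have tried, as a rough sketch: rather than adapting the worker count while the parallel loop is running, cap it up front from a per-task memory estimate. `choose_workers` and `mem_per_task_gb` below are my own hypothetical names, and the per-task figure has to come from a single-task dry run, so this is only an approximation, not a real runtime limit.)

```r
library(parallel)

# Pick a worker count so that workers * estimated per-task memory stays under
# the budget. mem_per_task_gb must be measured yourself from a dry run of one
# task (e.g. watching gc() output); it is an assumption, not something R reports.
choose_workers <- function(mem_budget_gb, mem_per_task_gb,
                           max_workers = parallel::detectCores()) {
  by_mem <- floor(mem_budget_gb / mem_per_task_gb)
  max(1L, min(max_workers, by_mem))
}

# e.g. a 10 GB budget with ~2.5 GB per task caps us at 4 workers at most
n_workers <- choose_workers(mem_budget_gb = 10, mem_per_task_gb = 2.5)

# FORK clusters (Unix only) share read-only memory with the parent process,
# which is why they help with large training data
cl <- makeCluster(n_workers, type = "FORK")
res <- parLapply(cl, 1:8, function(i) i^2)
stopCluster(cl)
```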

Created by SA90
The SC1 final file names will be changed to match the leaderboard round names, i.e. `evaluation_data/data_test_obs_{1:100}`, before the final round starts. Resources are under discussion; round 2 will be extended.

See the "Round 2, 3 and Final - data and resources" page.