Hello, we just submitted our first Docker version to the SC1 Express Lane, and the log says we are unable to find the data where it is supposed to be, at least if we follow your GitHub example (https://github.com/Sage-Bionetworks/NCI-CPTAC-Challenge-Examples/blob/master/sc1/Dry_Run_SC1.R). Is the data supposed to be named as in your example? --Maxime

Created by Maxime Vallée valleem
I have submitted another Docker that is based on your SC1 example, with one modification: no imputation is attempted at all, so the output is identical to the input. The submission system states that samples 6-80 are missing from the output, and no further validation is done. This is expected, because the input has only 5 samples. It would still be nice to be able to evaluate the script before an actual submission. ``` objectId: 9638329 LOG_FILE: syn10928683 ``` To save time, would it be possible to let everyone know when the express lanes can handle your example scripts as-is? Thanks!
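A pass-through of this kind could be sketched as below. Note the paths and naming are assumptions based on this thread, not the official challenge spec: the real container would read from `/evaluation_data` and write to an output directory, whereas this sketch uses temporary stand-in directories so it can run anywhere, and the `predictions_*.txt` output name is a guess.

```shell
# Sketch of a "no imputation" pass-through: copy each observed-data file
# unchanged to the output directory as a prediction file.
# NOTE: /tmp stand-ins are used here; the real container would read
# /evaluation_data and write its output directory (paths assumed).
set -e
in_dir=$(mktemp -d)
out_dir=$(mktemp -d)

# Mock one observed-data file using the naming pattern from this thread
printf 'gene\tsample1\nTP53\t1.23\n' > "$in_dir/data_test_obs_1.txt"

# Copy inputs unchanged; the predictions_*.txt name is an assumption
for f in "$in_dir"/data_test_obs_*.txt; do
  cp "$f" "$out_dir/predictions_${f##*data_test_obs_}"
done

ls "$out_dir"
```

With this in place, the validator's complaint about missing samples 6-80 would come purely from the 5-sample express-lane input, as described above.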
Dear Maxime, You should have received a log file, but it seems there aren't any print statements explaining why your model failed. This is your log file: https://www.synapse.org/#!Synapse:syn10921949. Could you add more print statements? Your model seems to stop right after loading the R packages. Best, Tom
Sure, I re-tried my version with *test* in the file names. It still says I have not created predictions. ``` submission name: syn10907604 submission ID: 9638207 ```
Dear Maxime, I will update the express lanes to mimic the real queues. The filenames should be `data_test_obs_*.txt`. Sorry about that. As for your error, could you please give me a submission ID so I can take a look? Best, Tom
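As a quick sanity check inside the container, the corrected pattern can be verified with a simple glob before the model runs. The `/evaluation_data` path comes from this thread's title and the filenames from Tom's note, but both are assumptions here; the snippet below uses a temporary stand-in directory so it runs anywhere.

```shell
# Check that the observed-data files match the corrected pattern
# data_test_obs_*.txt before starting the model.
# A /tmp stand-in replaces /evaluation_data (path assumed from this thread).
set -e
data_dir=$(mktemp -d)
touch "$data_dir/data_test_obs_1.txt" "$data_dir/data_test_obs_2.txt"

count=$(ls "$data_dir"/data_test_obs_*.txt 2>/dev/null | wc -l | tr -d ' ')
if [ "$count" -eq 0 ]; then
  echo "ERROR: no data_test_obs_*.txt files found in $data_dir" >&2
  exit 1
fi
echo "found $count observed-data files"
```

Failing loudly like this, instead of silently producing no predictions, would also make the "No prediction file generated" e-mails easier to diagnose from the log.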
Thanks, Tom, for the data_obs_test to data_obs tip. I re-submitted an updated Docker version with this change, and in the (R) log file there are no more errors, only the usual verbose output (likewise when I tried it locally). However, I received this in an e-mail: ``` No prediction file generated, please check your log file ``` So yes, I guess something odd is going on.
After having some trouble getting my imputation scripts to work on the express lane of subchallenge 1, I also tested the example Docker build that is available. After fixing the file naming convention in the script (data_obs_test to data_obs), the final status is still INVALID, with the following log file: ``` Cluster size 7627 broken into 3702 3925 Cluster size 3702 broken into 2526 1176 Cluster size 2526 broken into 1063 1463 Done cluster 1063 *** caught segfault *** address (nil), cause 'unknown' Segmentation fault (core dumped) ``` Any thoughts on this? I would assume your own template should work without modification. Does this mean there are still issues with the submission/evaluation system?

Are /evaluation_data/data_test_obs_*.txt files present in SubChallenge1 Express Lane?