Hi there,
I have tried a simple imputation for subchallenge 1. Its NRMSE differs drastically between real and simulated zero entries (https://www.synapse.org/#!Synapse:syn8228304/discussion/threadId=2152): it is <0.2 when computed only over nonzero (i.e. real) entries, but >10 over zero (i.e. simulated) entries.
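For concreteness, here is a minimal sketch of computing NRMSE separately over the real (nonzero) and simulated-zero entries. The normalization convention (dividing RMSE by the standard deviation of the true values), the toy data, and the trivial "no imputation" predictor are all my assumptions; the challenge's official scoring may differ.

```python
import numpy as np

def nrmse(truth, pred, mask):
    """RMSE over the entries selected by mask, normalized by the
    standard deviation of the true values there (one common convention;
    the challenge scorer may normalize differently)."""
    diff = truth[mask] - pred[mask]
    return np.sqrt(np.mean(diff ** 2)) / np.std(truth[mask])

# Hypothetical toy data, not the challenge dataset.
rng = np.random.default_rng(0)
truth = rng.gamma(2.0, 1.0, size=(100, 50))   # "full" ground truth
zero_mask = rng.random(truth.shape) < 0.3     # simulated dropout entries
observed = truth.copy()
observed[zero_mask] = 0.0                     # what participants see
pred = observed.copy()                        # trivial "no imputation"

nrmse_real = nrmse(truth, pred, ~zero_mask)   # real (nonzero) entries
nrmse_sim = nrmse(truth, pred, zero_mask)     # simulated-zero entries
```

With this trivial predictor the gap is extreme (NRMSE is exactly 0 on the untouched real entries and well above 1 on the zeroed ones), which illustrates why the two subsets should be evaluated separately.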
I am wondering whether this performance is typical of other participants. Also, are the organizers planning to release the full ground truth, without masked zeros, at some stage of the challenge, so that we can validate our methods against the real ground truth data?
Created by Lingfei Wang (lwang)

pacificma: You should consider the testing true data set as the ground truth.

Lingfei Wang: Thanks, pacificma. I hope you see where my questions lead. For the subchallenge in its current form, all submissions will surely be validated against the true dataset with simulated zeros. However, ground truths with and without simulated zeros would obviously lead to different validation outcomes, and the question is which should, as opposed to which will, be used. Leaving that question to open discussion, I am simply asking whether it will be possible for us to try both ground truths for validation.
Subchallenge 1: Question on simulated zeros