Hi, I could not find any existing entry on the above topic. Has anyone else got the same error? If yes, any suggestions? I checked the variances in my validation results; they are close to zero, but I do not see any exact zeros at all. Best, Ebrahim

Created by Ebrahim Afyounian (@eafyounian)
Dear Thomas, Thank you for your response. Best regards, Ebrahim
Dear Ebrahim, Resubmissions will not be allowed. I can talk to my collaborators about increasing the submission quota from 3 to 4 because of this error, but the quota is unlikely to change. Apologies for the inconvenience and for the error on our part. Best, Tom
@eafyounian That's a marginal case, since the problem stems from a mistake on the organizers' side.

For your and @thomas.yu's reference, a similar situation happened once before, in a final round, where a postmortem found that the training data was slightly wrong. The solution was to keep the original winner, because all participants faced the same data, while a bonus round was added to let participants correct their submissions. In the end, using the fixed version of the data still did not produce any prediction score better than those obtained with the slightly wrong data, and that settled it.

I personally think this is unlikely to change, because you still have 2 submissions. Also, if the organizers allowed this, they would receive at least 10 such requests the next day, and the solution would then be to give everyone 4 entries, which amounts to the same thing as the current situation.
@thomas.yu Dear Thomas, The modification to the verification step came a couple of hours after we had made our **first submission** in the **second round**. Because of this issue, we had to add random noise to all predictions so that our submission would be accepted. I am afraid this might have marginally lowered our score. I would like to know whether our team could get a chance to resubmit our **first submission**? Thanks in advance for your response. Best regards, Ebrahim
Dear all, Please submit again. I have removed this validation so your submissions should work now. Best, Tom
Indeed, this was fixed. But before going to the scoring step, there is first a verification that the prediction file is valid (which is what catches the zero-variance error). We are now going to remove this verification step, as it is no longer useful.
@MI_YANG I thought this bug had already been fixed in the previous round by printing zero when the variance is zero?
The validation code can be sensitive to variances close to zero.
@MI_YANG Hi, I am still getting the following error (i.e. `predictions.tsv: No rows can have variance of 0`), even though I am adding enough noise to the predictions. Any ideas on how to fix this? Here is what I am doing: for each gene, if the variance of the prediction vector is equal to zero, I add enough random noise to each sample in the vector; if the variance is not zero, I do nothing. Is it possible that the validation code is sensitive to variances close to zero?
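For what it's worth, here is a minimal sketch of the noise-adding fix described above (the file name, data layout, tolerance, and noise scale are all assumptions, not the challenge's actual code). Using a small tolerance instead of an exact `== 0` test would also cover the case where the validator is sensitive to variances that are merely close to zero:

```python
import numpy as np
import pandas as pd

def add_noise_to_flat_rows(pred, tol=1e-6, scale=1e-3, seed=0):
    """Add small Gaussian noise to rows whose variance is (near) zero.

    pred: DataFrame with genes as rows and samples as columns (assumed
    layout). tol and scale are assumed values; tol guards against
    variances that are close to, but not exactly, zero.
    """
    rng = np.random.default_rng(seed)
    out = pred.copy()
    flat = out.var(axis=1) < tol  # tolerance check, not an exact == 0 test
    out.loc[flat] += rng.normal(0.0, scale, size=(flat.sum(), out.shape[1]))
    return out

# Hypothetical usage (file name and separator are assumptions):
# pred = pd.read_csv("predictions.tsv", sep="\t", index_col=0)
# add_noise_to_flat_rows(pred).to_csv("predictions.tsv", sep="\t")
```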
Dear all, The zero-variance genes have been filtered out from the test mRNA (82 samples). However, after the split for the leaderboard, there might still be some genes with zero variance across the 20 samples. I cannot check, as I do not have access to the test data. A unique aspect of this challenge is that the organizers have to do the dry run on Docker without access to the test data. :) If that turns out to be the case, I suggest simply adding enough noise to pass the validation step. Best, Mi
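To illustrate why filtering on all 82 samples does not rule out zero variance within a 20-sample split, here is a small self-contained example (the values are made up, loosely inspired by the near-constant gene mentioned below):

```python
import numpy as np

# Hypothetical values for one gene across 82 test samples: 78 identical
# values plus 4 deviating ones.
gene = np.full(82, 5.0)
gene[:4] = [5.3, 4.9, 6.1, 5.5]
print(gene.var())    # nonzero over all 82 samples

# If a 20-sample leaderboard split happens to contain only the 78
# identical samples, the variance within that split is exactly zero.
subset = gene[4:24]  # one such unlucky split
print(subset.var())  # 0.0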
In the mRNA training data, a close example (i.e. a row with nearly the same value across all samples) is the `CFHR2` gene, with `101` identical values and `4` different ones.
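As a side note, a quick way to hunt for such near-constant rows is to count how many samples share the most frequent value per gene. A sketch, assuming genes as rows and samples as columns (the file name is hypothetical):

```python
import pandas as pd

# Assumed layout: genes as rows, samples as columns; file name is a guess.
expr = pd.read_csv("training_mRNA.tsv", sep="\t", index_col=0)

# For each gene, count how many samples share the most common value.
mode_counts = expr.apply(lambda row: row.value_counts().iloc[0], axis=1)

# Rows where all but a few samples share one value (CFHR2-like cases).
near_constant = mode_counts[mode_counts >= expr.shape[1] - 4]
print(near_constant.sort_values(ascending=False))
```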
@mbelmadani Thanks for your response! The values closest to zero that I see in my validation results are no smaller than about `1e-7`, so no, I do not see numbers that small. The other possibility might be that in the **mRNA** `test data` there are rows with the same expression value across all samples. Can @MI_YANG and @thomas.yu please confirm whether this is the case, and what your suggestions would be?
I am getting this error as well :(
I had this issue too... Do you have really small values, like on the order of `1e-200` and smaller? It seems like something in the evaluation does not deal well with very small values. I had to log-transform my values and then add noise to the output. It is not clear exactly what happened, since it only occurred in the main submission and not in the express lane tests.
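A sketch of that workaround, assuming the predictions are positive values in a numpy array (the pseudocount and noise scale are guesses, not the code actually used):

```python
import numpy as np

def log_transform_with_noise(pred, eps=1e-300, noise_scale=1e-3, seed=42):
    """Pull tiny values (e.g. around 1e-200) into a numerically friendly
    range, then add small noise so no row is exactly constant.
    eps avoids log(0); all constants here are assumptions."""
    rng = np.random.default_rng(seed)
    logged = np.log10(np.asarray(pred, dtype=float) + eps)
    return logged + rng.normal(0.0, noise_scale, size=logged.shape)
```

Since `log10` is monotonic on positive values, rank-based scoring metrics (if the challenge uses them) are unaffected by the transform itself; only the small added noise perturbs them, and only marginally.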

[subchallenge 2] predictions.tsv: No rows can have variance of 0