Thank you

To determine the relative performance of the models, we will perform a bootstrapping analysis. We will sample 10% of the test data 10,000 times and, for each sample, calculate each model's performance and its ranking under both Pearson correlation and RMSE scores. We will then average the ranks from both metrics to decide the final ranks.
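For concreteness, here is a minimal sketch of that procedure in Python, assuming predictions are held in a dict of NumPy arrays. The function name `bootstrap_ranks`, sampling with replacement, and the arbitrary tie-breaking are my assumptions, not the organizers' exact implementation.

```python
import numpy as np

def bootstrap_ranks(y_true, preds, n_iter=10_000, frac=0.10, seed=0):
    """Average rank of each model over bootstrap samples of the test set.

    preds: dict mapping model name -> prediction array (same length as y_true).
    Lower average rank = better. Defaults mirror the described 10% / 10,000 setup.
    """
    rng = np.random.default_rng(seed)
    n = len(y_true)
    k = max(2, int(frac * n))  # 10% sample size
    names = list(preds)

    def ranks(scores, higher_is_better):
        # Rank 1 = best; ties broken by sort order (an assumption).
        order = np.argsort(-scores if higher_is_better else scores)
        r = np.empty(len(scores))
        r[order] = np.arange(1, len(scores) + 1)
        return r

    total = np.zeros(len(names))
    for _ in range(n_iter):
        idx = rng.choice(n, size=k, replace=True)
        yt = y_true[idx]
        pearson = np.array([np.corrcoef(yt, preds[m][idx])[0, 1] for m in names])
        rmse = np.array([np.sqrt(np.mean((yt - preds[m][idx]) ** 2)) for m in names])
        # Average the per-metric ranks for this sample, then accumulate.
        total += (ranks(pearson, higher_is_better=True) +
                  ranks(rmse, higher_is_better=False)) / 2
    return dict(zip(names, total / n_iter))
```

A model that dominates on both metrics will get average rank near 1; when the two metrics disagree, the averaged rank splits the difference across bootstrap samples.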

Could you please be more concrete, along the lines of my specific question? As participants, we deserve to know what the competition's goal is ;) Many thanks in advance.

My answer is that both are important!

I'm terribly sorry, but I don't think I understood this answer in relation to my question. To paraphrase: which objective function do we have to maximize? For example, if one team has a significantly better RMSE and another a significantly better Pearson correlation, what happens? This is surely important in deciding which model to submit as final.

We usually don't reveal the details of scoring, but in cases like this we generally do a rank sum based on the two metrics, with 10% bootstrapping for x number of folds, to differentiate the teams.

How will the final score be determined?