Dear @DREAMOlfactoryMixturesPredictionChallenge2025Participants,

We apologize for the error in our previous announcement. The Task 2 results were misstated. Please see the corrected results below.

Task 1
🏆 **Winner:** [nachman.keren](https://www.synapse.org/Synapse:syn68656030) – Sneak peek of the method
🥈 [PL21](https://www.synapse.org/Synapse:syn68587196)
🥉 [Nylix](https://www.synapse.org/Synapse:syn68704854)

Task 2
🏆 **Winners (3-way tie):** [PL21](https://www.synapse.org/Synapse:syn68587196) & [yuanfang.guan](https://www.synapse.org/Synapse:syn68327093) & [Nylix](https://www.synapse.org/Synapse:syn68704854)

Thank you for your understanding, and congratulations to all teams for their outstanding contributions.

Created by Gaia Andreoletti (gaia.sage)
Hi @jeriscience, could you kindly let me know whether you were able to read our team's submission methodology explanation, and whether we are supposed to send the draft by email?
All participants will be part of the DREAM olfaction consortium (so their names appear in the supplementary materials and are PubMed indexed), and the best performers are invited to be byline authors. In addition, depending on their contribution to the overall paper and to the community discussion phase of the challenge, participants from the consortium may be moved to byline authors.
Hi @gaia.sage, just following up on the message posted above. Is there any information on which teams will be eligible for authorship? Thanks.
For Task 1, how many teams will be given authorship? The methodology document with code has been made public and uploaded in the Files section. My team has a Pearson of 0.713 and a cosine of 0.196; the top team has 0.752 and 0.176.
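For reference, scores like the ones quoted above can be reproduced with the standard definitions of the two metrics. The sketch below is only an assumption about how such numbers are typically computed (a Pearson correlation over all values plus a cosine distance averaged per mixture); it is not the official challenge scoring code, and the `score` helper, array shapes, and descriptor count are hypothetical.

```python
# Minimal sketch of Pearson correlation and mean per-sample cosine distance.
# NOT the official DREAM scoring code; shapes and averaging are assumptions.
import numpy as np
from scipy.stats import pearsonr
from scipy.spatial.distance import cosine


def score(y_true: np.ndarray, y_pred: np.ndarray) -> tuple[float, float]:
    """Return (Pearson r over all values, mean cosine distance per sample).

    y_true, y_pred: arrays of shape (n_samples, n_descriptors).
    Higher Pearson r is better; lower cosine distance is better.
    """
    r, _ = pearsonr(y_true.ravel(), y_pred.ravel())
    cos_dist = float(np.mean([cosine(t, p) for t, p in zip(y_true, y_pred)]))
    return r, cos_dist


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.random((10, 51))                      # hypothetical descriptor count
    preds = truth + rng.normal(0.0, 0.1, truth.shape)  # noisy "predictions"
    print(score(truth, preds))
```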
Hi @dskhanirfan, thanks for the question. For Task 1, we're not able to produce a ranking limited to teams that used only Task-1 data. We don't have a reliable way to independently verify each team's training provenance (e.g., whether no external or Task-2 data were used), so any such list would be speculative. For that reason, we won't publish a separate "Task-1-only" ranking. We'll share the official overall rankings when they're released; you'll be able to review everyone's standing there. Teams are also encouraged to describe their data usage in their method write-ups if they wish to highlight a Task-1-only approach.
Congratulations to the winners. @gaia.sage, for Task 1: could you kindly give the ranking of the teams that only used data from Task 1 and did not use any external or Task 2 dataset for training their models?

🎉 Challenge Results Are In! 🎉