Please reply to this thread to report any data issues encountered with the BraTS-GLI 2025 datasets. Lead GLI organizers: @jeffrudie @maria.correiadeverdier @rs2492

Created by Verena Chung (@vchung)
Hi Maria! Is there any document linking the patient IDs from this dataset to those from 2021 (or 2023, for that matter)? And a list of the patient IDs for the "some newly added cases"? That would be greatly appreciated! Also, is any clinical and/or demographic information available, or only what was provided for the 2021 challenge? (In that case, the patient-ID linking document would be even more valuable to us.) Thanks in advance! Best, Juancito
Hi @qulin, Yes, the deadline for the short paper submission has been [extended](https://www.synapse.org/Synapse:syn64153130/discussion/threadId=12178) until the 31st of July. Therefore, the submission portal for the validation phase will remain open until July 31st. Best regards, /Mehdi
Dear Organizing Committee, I would like to ask whether the validation set submission deadline will be extended.
Hi @jinlw1999, In addition to @astaraki's comment, can you also confirm that you are submitting predictions for the validation dataset, not training? Your validation errors (which you should also receive by email) show the following message: > Unknown scan IDs found: 00003-000, 00008-001, 00016-000, 00017-001, ... _[truncated]_ If you are not receiving notifications about your submission errors, please double-check your [email notification settings](https://www.synapse.org/#!Profile:v/settings) to ensure that "Allow Synapse to send me email notifications to my Primary email address." is checked. Hope this helps!
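For anyone hitting the same "Unknown scan IDs" error, a minimal pre-submission check along these lines can help. It compares predicted-mask filenames against the validation case folders; the directory layout and the `BraTS-GLI-XXXXX-XXX.nii.gz` filename pattern below are illustrative assumptions, not the official convention, so adapt them to the linked naming rules.

```python
import re
from pathlib import Path

# Illustrative paths -- adjust to your local layout.
validation_dir = Path("BraTS2025-GLI-ValidationData")  # one sub-folder per case (assumed)
predictions_dir = Path("predictions")

# Expected case IDs, taken from the validation folder names.
expected_ids = {p.name for p in validation_dir.iterdir() if p.is_dir()}

# IDs encoded in the predicted-mask filenames (pattern is an assumption).
pattern = re.compile(r"(BraTS-GLI-\d{5}-\d{3})\.nii\.gz")
submitted_ids = set()
for f in predictions_dir.glob("*.nii.gz"):
    m = pattern.fullmatch(f.name)
    if m:
        submitted_ids.add(m.group(1))
    else:
        print(f"Unrecognized filename: {f.name}")

print("Unknown IDs (submitted but not expected):", sorted(submitted_ids - expected_ids))
print("Missing IDs (expected but not submitted):", sorted(expected_ids - submitted_ids))
```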
@jinlw1999 Did you apply the standard [file naming convention](https://www.synapse.org/Synapse:syn51156911/wiki/621296) to your predicted masks?
Hello, I have submitted validation results for the BraTS2025-GLI-PRE-Challenge-TrainingData.zip dataset many times, but each time the submission shows INVALID and the Workflow Status shows ERROR. Could you check the specific reason? Running the evaluation metric code locally, I can compute the Dice value without issues. Please reply when you can. Thank you very much! Submission ID: 9753854
Dear Organizing Committee, Any specific formula to combine those multiple evaluation metrics to decide the final ranking? Thank you for your guidance.
Dear @qulin, @yy1234, @yy911, The method behind the BraTS scores is described in an upcoming manuscript. This preprint will be published on arXiv early next week. I will update you as soon as it becomes available. Best regards, /Mehdi
Hi @mohamedkas Thanks for getting in touch. Can you please specify the name of the subject(s) that have these values as labels [0, 85, 170, 255]? Best regards, /Mehdi
Dear Committee, Thank you for organizing such a challenge. I have a question regarding the submission format for GLI segmentation: the provided label values are [0, 85, 170, 255]. I'm wondering whether the submitted predictions should use these same values or [0, 1, 2, 3]. Best Regards,
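On the label-value question above: if downloaded masks carry the display-scaled values [0, 85, 170, 255], a minimal remapping sketch is shown below. The 85/170/255 to 1/2/3 correspondence and the filenames are assumptions for illustration; the organizers' reply above asks for the affected subjects, so treat scaled values as something to verify rather than expect.

```python
import nibabel as nib
import numpy as np

# Assumed correspondence between display-scaled values and categorical labels.
value_map = {0: 0, 85: 1, 170: 2, 255: 3}

img = nib.load("prediction_scaled.nii.gz")   # illustrative filename
data = np.rint(np.asarray(img.dataobj)).astype(np.int32)

relabeled = np.zeros(data.shape, dtype=np.uint8)
for raw, label in value_map.items():
    relabeled[data == raw] = label

# Save as a single-channel categorical mask, preserving the original affine.
nib.save(nib.Nifti1Image(relabeled, img.affine), "prediction_labels.nii.gz")
```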
Dear Organizing Committee, Thank you for the update regarding the evaluation metrics. We appreciate that both NSD and DSC will contribute equally to the final ranking. To better align our submission strategy, we would like to clarify the calculation methodology for the overall ranking score. Could you kindly tell us whether there is a predefined mathematical formula for combining the NSD and DSC scores (e.g., a weighted average, harmonic mean, or another composite metric)? Thank you for your guidance. We look forward to your clarification. Best regards,
@077lll, @qulin Thank you for getting in touch. In the final evaluation over the testing set, the NSD and DSC metrics will contribute equally to the ranking system. Regarding NSD, this metric requires setting a numerical tolerance parameter. We will fix this parameter per challenge and notify you as soon as we can, so you will know whether NSD0.5 or NSD1.0 will be used. Please let us know if you have any questions. /Mehdi
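For intuition about the tolerance parameter mentioned above: NSD counts the fraction of each segmentation's border that lies within a tolerance (in mm) of the other's border, so NSD0.5 and NSD1.0 differ only in that tolerance. Below is a sketch of the plain (non-lesion-wise) version of the metric; the official evaluation is lesion-wise and its exact implementation is defined by the organizers, so this is illustrative only.

```python
import numpy as np
from scipy import ndimage

def surface_dice(pred, gt, tol_mm, spacing):
    """Plain normalized surface distance (surface Dice) at tolerance tol_mm.

    Illustrative sketch only -- the challenge uses a lesion-wise variant.
    `spacing` is the voxel size in mm, e.g. (1.0, 1.0, 1.0).
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    struct = ndimage.generate_binary_structure(3, 1)

    # Border voxels = mask minus its erosion.
    pred_border = pred ^ ndimage.binary_erosion(pred, struct)
    gt_border = gt ^ ndimage.binary_erosion(gt, struct)

    # Distance (mm) from every voxel to the nearest border voxel of the other mask.
    dist_to_gt = ndimage.distance_transform_edt(~gt_border, sampling=spacing)
    dist_to_pred = ndimage.distance_transform_edt(~pred_border, sampling=spacing)

    # Fraction of both borders lying within the tolerance of the other border.
    overlap = (dist_to_gt[pred_border] <= tol_mm).sum() \
            + (dist_to_pred[gt_border] <= tol_mm).sum()
    total = pred_border.sum() + gt_border.sum()
    return overlap / total if total > 0 else np.nan
```

With `tol_mm=0.5` or `tol_mm=1.0`, this corresponds to the NSD0.5 and NSD1.0 variants discussed above.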
Hello dear BraTS 2025 Organizing Team, We would like to inquire about the evaluation system. For the three metrics (LesionWise_Dice, LesionWise_NSD_0.5, and LesionWise_NSD_1.0), how are the weights assigned in the final ranking calculation? Additionally, is NSD considered more important than Dice? Thank you for your guidance!
Dear @yy911 Please follow the [submission tutorial](https://www.synapse.org/Synapse:syn64153130/wiki/632674) to find out the proper file-naming conventions for the validation submission. Also, you may find the information in [this thread](https://www.synapse.org/Synapse:syn64153130/discussion/threadId=11986) useful for the GLI task. Let us know if you have any questions. /Mehdi
Dear BRATS Organizers, I would like to confirm the required file structure when submitting predictions for the validation set. Are we expected to submit only the segmentation label files (e.g., BraTS20XX_XXXXX_seg.nii.gz), without including the original multimodal scans (t2w, t2f, t1n, t1c)? Thank you for your guidance!
Hi @qulin, and thank you for reaching out. You can submit the predicted segmentation masks of both Pre and Post datasets in a single compressed file by following the [submission instructions](https://www.synapse.org/Synapse:syn64153130/wiki/632674). The validation submission queues have been opened. Please read [this thread](https://www.synapse.org/Synapse:syn64153130/discussion/threadId=12068) for more information. /Mehdi
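A minimal packaging sketch for the single compressed file mentioned above, assuming your pre- and post-treatment masks already follow the naming convention from the linked instructions and sit together in one folder. Whether the masks must be at the archive root is also an assumption here; defer to the official submission tutorial.

```python
import zipfile
from pathlib import Path

predictions_dir = Path("predictions")   # illustrative: pre + post masks together
archive = Path("submission.zip")

# Flat archive: mask files at the top level, no parent folders (assumed layout).
count = 0
with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
    for mask in sorted(predictions_dir.glob("*.nii.gz")):
        zf.write(mask, arcname=mask.name)
        count += 1

print(f"Packed {count} masks into {archive}")
```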
Hello dear BraTS 2025 Organizing Team, Yes, we need to make predictions for both the pre and post validation sets separately, right? Additionally, we would like to know when the evaluation queue for the validation sets opens and where we can upload our prediction results. Thank you!
@qulin Thank you for reaching out. Please note that the evaluation over both the validation and testing datasets will be done on both pre- and post-treatment data collectively. This essentially means that you can develop two separate solutions by using **ONLY** data from the **2025 challenge** and predict the pre- and post-treatment labels **_separately_**. You can find more information in [BraTS-GLI Evaluation Procedure](https://www.synapse.org/Synapse:syn64153130/discussion/threadId=11986). Please let us know if you have any questions. /Mehdi

Hello dear BraTS 2025 Organizing Team, Is the validation set for the final submission the pre-operative 2025 validation set? If the 2025 validation set contains only pre-operative data, how can we predict the RC? For the RC, should we use the 2025 validation set or the 2024 validation set? Thank you!
@halima_fouadi Thank you for reaching out. Please note that the evaluation will be performed separately on each of the individual tumor subregions (ET, NETC, SNFH, and RC). In addition, the evaluation will also cover the combined labels: the tumor core (ET + NETC) and the whole tumor (ET + SNFH + NETC). Therefore, you may optimize your model(s) using either a label-based or a region-based approach. Once the evaluation queue for the validation dataset opens (expected next week), you will be able to upload your predicted segmentation masks. These will then be evaluated based on the aforementioned labels and regions, so you do not need to submit any computed evaluation results yourself. We will provide instructions on how to upload your predicted masks. Best regards, /Mehdi
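To make the label/region relationship above concrete, here is a minimal sketch of deriving the combined regions from a categorical mask, assuming the encoding 1=ET, 2=NETC, 3=SNFH, 4=RC (verify against the official data description):

```python
import numpy as np

def derive_regions(seg: np.ndarray) -> dict:
    """Derive the combined evaluation regions from a categorical mask.

    Assumed label encoding: 1=ET, 2=NETC, 3=SNFH, 4=RC.
    """
    et, netc, snfh, rc = (seg == 1), (seg == 2), (seg == 3), (seg == 4)
    return {
        "ET": et,
        "NETC": netc,
        "SNFH": snfh,
        "RC": rc,
        "TC": et | netc,          # tumor core = ET + NETC
        "WT": et | netc | snfh,   # whole tumor = ET + SNFH + NETC
    }
```

As noted in the reply above, this derivation happens on the evaluation side, so submitting the four-label categorical mask alone is sufficient.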
Hello dear BraTS 2025 Organizing Team, We have trained our model with four explicit segmentation classes (ET, NETC, SNFH, and RC) plus background. For the evaluation measures (Dice score and Hausdorff distance), should we submit results only for these four classes, with the evaluation of the TC and WT labels done on your side? Or should we calculate and submit the TC and WT measurements ourselves? Thank you!
@danish_ali_uwa Hi Danish, and thank you for reaching out. Regarding the Glioma task, you can develop two distinct models for pre-treatment and post-treatment data. However, for the final testing phase, it is required to integrate these models into a unified pipeline. The [naming convention](https://www.synapse.org/Synapse:syn64153130/wiki/631053) for data files can be utilized to differentiate between pre-treatment and post-treatment cases. It is crucial to note that evaluation during both the validation and testing phases will encompass all cases, collectively including both pre-treatment and post-treatment data. Therefore, a validation phase submission comprising solely pre-treatment masks will result in zero values of the evaluation metrics applied to the post-treatment cases. Concerning the submission format for validation cases, the expected submitted files are single-channel ".nii.gz" files containing categorical values (similar to reference segmentation masks). The submission portal for validation cases will be made available shortly, accompanied by detailed instructions for participants. Let us know if you have any questions, /Mehdi
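One way to unify two models behind the required single pipeline is a filename-based dispatch, sketched below. The timepoint rule is deliberately left as a placeholder, since it must come from the linked naming convention; `pre_model`/`post_model` and their `predict` method are hypothetical stand-ins for your own inference code.

```python
from pathlib import Path

def is_post_treatment(case_dir: Path) -> bool:
    """Decide the timepoint from the case name.

    Placeholder -- derive the rule from the official naming convention
    linked above; it is deliberately not hard-coded here.
    """
    raise NotImplementedError

def run_pipeline(data_dir: Path, pre_model, post_model):
    """Unified pipeline: route each case to the matching model."""
    for case_dir in sorted(p for p in data_dir.iterdir() if p.is_dir()):
        model = post_model if is_post_treatment(case_dir) else pre_model
        seg = model.predict(case_dir)   # hypothetical model API
        # ... save `seg` as a single-channel categorical .nii.gz mask
```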
@maria.correiadeverdier, I have trained my model on pre-treatment cases only. If I just want to evaluate its performance on the pre-treatment validation set, is there any way to do so?
Hi Danish, Thanks for your question! BraTS 2025 includes both pre- and post-treatment cases, combining datasets from the 2021 and 2024 challenges, along with some newly added cases. The RC label (label 4) was introduced as part of the 2024 challenge and is present in post-treatment cases only. Feel free to reach out if anything else is unclear! Best, Maria
@maria.correiadeverdier, Thanks for your kind response. I am curious: the dataset comes from a previous challenge like BraTS 2021, which basically has four labels (0 BG, 1 NC, 2 ED, and 3 ET), but here there are also 4 for RC, 5 for TC, and 6 for WT. I know WT is the combination of 1, 2, and 3, and TC is the combination of 1 and 3, but what does RC represent here? Also, do I just need to submit my predicted mask containing a label for each predicted class, which would therefore range from 0 to 3? Is that the case?
Hi Danish, Thanks for reaching out, and great to hear you've registered and started working with the data! The evaluation will be based on the four individual labels, as well as two combined sub-regions. Please note that not all labels will be present in every case.

1. ET
2. NETC
3. SNFH
4. RC
5. Tumor core (ET plus NETC)
6. Whole tumor (ET plus SNFH plus NETC)

@astaraki will get back to you regarding the submission procedure. Let me know if you have any further questions! Best regards, Maria
Dear BraTS 2025 Organizing Team, I hope this message finds you well. I have successfully registered for the BraTS 2025 challenge and obtained access to the validation data. After running inference with my model, I would like to confirm the correct procedure for uploading the predicted segmentation masks for the glioma validation task. Could you please clarify:

1. Where and how should I submit the predicted masks for validation evaluation?
2. Should the segmentation output be a single-channel NIfTI file with label values {0, 1, 2, 3}, or a 3-channel output, with each channel representing WT, TC, and ET, respectively?
3. Will the evaluation be based on these 3 tumor regions individually (WT, TC, ET), or should the output reflect the original BraTS label structure (NCR=1, ED=2, ET=3)?

Thank you in advance for your help and for organizing this valuable challenge. Best regards, Danish Ali
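On the single-channel vs. 3-channel question above: if a network predicts the nested regions (WT ⊇ TC ⊇ ET), a common post-processing step recovers a categorical map from them. Below is a sketch under the assumptions of binary region masks and the 2021-style encoding (NCR=1, ED=2, ET=3); note that post-treatment cases additionally carry the RC label, which this three-region conversion does not cover.

```python
import numpy as np

def regions_to_labels(wt: np.ndarray, tc: np.ndarray, et: np.ndarray) -> np.ndarray:
    """Convert nested binary region masks (WT >= TC >= ET) into a categorical
    mask with the 2021-style encoding NCR=1, ED=2, ET=3.

    A common post-processing sketch, not an official conversion.
    """
    seg = np.zeros(wt.shape, dtype=np.uint8)
    seg[wt.astype(bool)] = 2   # edema by default everywhere inside WT
    seg[tc.astype(bool)] = 1   # necrotic core overwrites inside TC
    seg[et.astype(bool)] = 3   # enhancing tumor overwrites innermost
    return seg
```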
