Please reply to this thread to report any data issues encountered with the BraTSPRO datasets. Lead BraTSPRO organizers: @YKirchhoff

Created by Verena Chung (@vchung)
Okay, thank you @YKirchhoff sir.
Hi @vansh, short papers should be submitted to progression, classification is for [task 10](https://www.synapse.org/Synapse:syn64153130/wiki/631458). Best, Yannick
@YKirchhoff Sir, I wanted to ask: under which track should the short paper for this challenge be submitted at CMT, Classification or Progression? Please reply as soon as possible, since the deadline is within hours.
Thanks Yannick. I created a new token and it works now.
Hi @DarylWM, did you configure an access token for Synapse? With the switch to 2FA this is required (when you are prompted to log in in the terminal, you need to use the token as the password). If that is set up correctly you should not get an access denied error. Let me know in case that doesn't work and I will try to help you fix it. Best, Yannick
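A minimal sketch of such a token-based login for the Synapse Docker registry, assuming the personal access token is stored in an environment variable named `SYNAPSE_AUTH_TOKEN` (the variable name and the username placeholder are just for illustration):
```bash
# Log in non-interactively; the personal access token takes the place of the password.
echo "$SYNAPSE_AUTH_TOKEN" | docker login docker.synapse.org \
    --username <synapse_username> --password-stdin
```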
Hi @YKirchhoff, I'd like to upload my Docker container but I get an "access denied" error. I am logged in and am Synapse-certified. Can you let me know what I'm doing wrong?
```
docker tag rano:latest docker.synapse.org/syn68774620/rano:latest
docker push docker.synapse.org/syn68774620/rano:latest
The push refers to repository [docker.synapse.org/syn68774620/rano]
be7ef81b1a9a: Preparing
6c5623168528: Preparing
02bea5676cde: Preparing
63c2a8442683: Preparing
5f70bf18a086: Preparing
03f1547e3abf: Waiting
1b576c8cb9e6: Waiting
fd45f340a2d9: Waiting
e9b5df99b5df: Waiting
7fe75af37b7d: Waiting
72cf985fdcc7: Waiting
c0fd88a0aeec: Waiting
9181b04f871f: Waiting
74c1a364ba0a: Waiting
a0b970ebec8e: Waiting
2f78c715c256: Waiting
f8536c24d0ef: Waiting
3bb0a9bdf970: Waiting
17918215a5d0: Waiting
2573e0d81582: Waiting
denied: requested access to the resource is denied
```
Hi @YKirchhoff, we were able to push the docker image over to synapse, however as previously mentioned by you, there seems to be some problem with the evaluation queue. We've also sent you a mail with the mention of the project id and the docker image details. Kindly evaluate this submission so that we can work on it further. Thanks.
Dear @YKirchhoff, thank you for your support. We will send you our Docker image.
Thank you for your help @YKirchhoff, I'll send the Docker image to you as a tar.gz file.
Hi @SatyaM, thanks for giving me access. I just tested pushing the hello-world docker and it immediately appears in the docker overview. You can see it as test-yannick. I guess there might be some checks from Synapse leading to your image not showing up, but I am not sure. In order to enable you to submit your algorithm, **you can also send me your docker image directly via mail (yannick.kirchhoff@dkfz-heidelberg.de - please add your username and team name if you are submitting as a team) and I will evaluate it, same goes for @Ewunate and everyone else.** You can either upload your docker to some docker registry (e.g. dockerhub - make sure to give me access) or upload it as a .tar.gz file or similar. Please keep in mind that it takes some time for me to manually evaluate submissions, therefore I would appreciate it if you don't send me your submissions via mail last minute.
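As a rough sketch of the tar.gz route mentioned above, assuming a local image tagged `rano:latest` (the tag and the output filename are just examples):
```bash
# Export the local image to a compressed archive that can be shared via mail
# or a file transfer service.
docker save rano:latest | gzip > rano_latest.tar.gz

# It can later be restored on the receiving side with:
# docker load < rano_latest.tar.gz
```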
@YKirchhoff, I've added you to one of my projects (syn68718904). I tried to change the tag even, but that didn't help! Thanks for your time.
@SatyaM @Ewunate In [another thread](https://www.synapse.org/Synapse:syn25829067/discussion/threadId=8417) it apparently helped to push the docker again with a different tag, maybe you can give that a try. And please try reloading with deleted browser cache, that can apparently also lead to new dockers not showing up.
@SatyaM and @Ewunate can you maybe give me access to your projects?
Hi @SatyaM and @Ewunate, I am not sure what is going wrong. I just created a new project and pushed a docker image to try to replicate your error, but for me it immediately shows up in the docker tab. @vchung are you maybe able to help us out here? Best, Yannick
Thank you again for your suggestion, @YKirchhoff. I have been using the following commands:
```
docker login docker.synapse.org
docker tag brats_submission1 docker.synapse.org/syn6871893/brats_submission1
docker push docker.synapse.org/syn68718934/brats_submission1
```
What change should I make to these commands?
@YKirchhoff, I am facing a similar issue to the one @Ewunate described. I am trying to push a Docker image for the challenge. My Synapse Project ID is syn68718904, and I am pushing an image with the tag docker.synapse.org/syn68718904/brats_submission:latest. The docker push command completes successfully and returns a digest, but the repository does not appear in my project's Docker tab. I have verified that I am an administrator on the project, and I have used a new Personal Access Token with full permissions. Can you please help me solve this issue?
Hi @Ewunate, could you please check that you followed these exact steps and have the correct _synapse\_project\_ID_ set, i.e. syn68718934 in your case?
```
docker login docker.synapse.org
docker tag docker_imagename docker.synapse.org/synapse_project_ID/docker_imagename
docker push docker.synapse.org/synapse_project_ID/docker_imagename:latest
```
@YKirchhoff, thank you for your response. The problem is that I am not able to see any Docker image on my Synapse project page. Here is what my project page looks like: https://drive.google.com/file/d/1ofmxKtQLKY4828sgXwxFx-hm7eM-Gfmn/view?usp=drive_link
Hi @Ewunate, on your project you need to go to docker, select the docker image and then on _Docker Repository Tools_ click on _Submit Docker Repository to Challenge_ and select the evaluation queue for task 11. Did you do it like that? Best, Yannick
Hi @YKirchhoff, I was pushing the Docker image to Synapse, and I can see in my command prompt that the image was pushed successfully. However, I am not able to see it in the project's Docker tab. Kindly suggest a solution for this.
Hi @DarylWM, this information will not be provided during testing, neither implicitly nor explicitly. Best, Yannick
Hi @SatyaM, if I understand it correctly, it shouldn't be a problem for you to write the temporary files to the _pred\_dir_ and delete them at the end. This way you shouldn't need the additional _--tmpfs /tmp_, correct? I would much prefer not to mess with the run command, as the algorithms are executed slightly differently under the hood, in a way that is equivalent to the given docker command. Best, Yannick
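As a rough illustration of this suggestion (a sketch only, not the organizers' template), a `run_inference.sh` could create its scratch space inside the mounted prediction directory and clean it up on exit, assuming the container is invoked as `/workspace/run_inference.sh /mnt/test_data /mnt/pred`:
```bash
#!/usr/bin/env bash
# Hypothetical entry point sketch: arguments are the mounted input and output folders.
TEST_DATA_DIR="$1"
PRED_DIR="$2"

# Use a scratch directory inside the writable prediction mount instead of /tmp,
# so no extra --tmpfs flag is needed with a read-only container filesystem.
SCRATCH_DIR=$(mktemp -d "$PRED_DIR/tmp.XXXXXX")
trap 'rm -rf "$SCRATCH_DIR"' EXIT

# ... run inference here, writing intermediate files to "$SCRATCH_DIR"
# and the final predictions to "$PRED_DIR" ...
```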
Hello @YKirchhoff. In the LUMIERE dataset it was possible to infer the number of weeks between baseline and follow-up from the filenames. Is it an explicit field in patients.json? If not, can it still be inferred from the filenames?
Hello @YKirchhoff, we were working on submitting our Docker image, but we have a question. Our code generates some temporary files during execution, and we are able to handle this locally using something similar to the command below (i.e. with `--tmpfs /tmp` added to the run command):
```
docker run --gpus all --tmpfs /tmp -v "test_data_dir:/mnt/test_data" -v "pred_dir:/mnt/pred" --read-only docker-image-name /workspace/run_inference.sh /mnt/test_data /mnt/pred
```
All our temporary files are created in the pred folder and deleted once the code has finished. This deviates slightly from the command provided on GitHub for running the Docker image:
```
docker run --gpus all -v "test_data_dir:/mnt/test_data" -v "pred_dir:/mnt/pred" --read-only docker-image-name /workspace/run_inference.sh /mnt/test_data /mnt/pred
```
Will it be possible to incorporate this minor change from the organizers' end? Thank you for your time.

Hi @_deepakk, yes, you need to upload a docker to a synapse project and submit that to the evaluation queue for task 11. The full procedure is detailed [here](https://github.com/MIC-DKFZ/BraTPRO/?tab=readme-ov-file#submission). You actually need to submit the docker to the evaluation queue so it counts as a submission. The submitted docker will be automatically evaluated on our end, the additional mail is just so I can check that everything is going smoothly but strictly speaking it is not mandatory. Results will automatically be sent back to you. The deadline on 20th July is for a submission to the validation set and the short paper including the results obtained from that submission. Your submission to the validation phase will **NOT** be used as the final submission and you are free to make changes to your algorithm until the final deadline - this is not clearly defined in the challenge rules and might not hold for other subtasks. These changes should then also be reflected in the camera ready version. Best, Yannick
Hi @YKirchhoff, I wanted to ask what you mean by an algorithm submission. In my understanding, we have to create a Synapse project and upload our Docker image to the Synapse registry corresponding to the created project ID. Then, after successfully submitting our Docker image to the project, we have to mail you that we have done the submission, and you will evaluate it and mail us back the validation result, which we have to mention in our short paper. Please correct me if I am wrong. Also, the 20th July deadline is only for the short paper submission, right? We can submit improvements after the 20th as well, and this will not count as our final submission? Time is ticking very fast, please reply as soon as you can!
Hi @DarylWM, the issues should be fixed, but I would still appreciate it if you sent me an email when you submit your algorithm, just so I can check. Submissions are possible the whole weekend and you will get an automatic response, however, I would advise you to make a submission as early as possible to check that there are no issues with the docker. You can just make a basic submission to check the docker before submitting it with your final model. Best, Yannick
Hi @YKirchhoff. In an earlier post you asked us to send our submissions for evaluation to you via email; will it be possible to get responses over the weekend?
Hi @_deepakk, sorry if that led to any confusion. The data paths in the _patients.json_ also lead to the folders where the different .nii.gz images are stored; the images themselves are then named like the parent directory. The folder structure looks something like this:
```
├── images
│   ├── patient_01_scan_01
│   │   ├── patient_01_scan_01_0000.nii.gz
│   │   ├── patient_01_scan_01_0001.nii.gz
│   │   ├── patient_01_scan_01_0002.nii.gz
│   │   ├── patient_01_scan_01_0003.nii.gz
```
so it should be easy to extract them. I can't check right now, but that should also be the structure generated by the download and conversion script provided for the Lumiere dataset. For testing, all four modalities will be present for all cases. One thing I forgot in my earlier reply is that, in addition to the ground truth response, segmentations also won't be present in the patients.json during inference. The patients.json for inference will look similar to this (I quickly wrote it down, so there might be minor mistakes, but the general structure is correct):
```
{
    "patient_001": {
        "case_01": {
            "baseline": "./images/patient_001_scan_1",
            "baseline_registered": "./images_registered/patient_001_scan_1",
            "followup": "./images/patient_001_scan_2",
            "followup_registered": "./images_registered/patient_001_scan_2"
        },
        "case_02": {
            "baseline": "./images/patient_001_scan_2",
            "baseline_registered": "./images_registered/patient_001_scan_2",
            "followup": "./images/patient_001_scan_3",
            "followup_registered": "./images_registered/patient_001_scan_3"
        }
    },
    "patient_002": {
        "case_01": {
            "baseline": "./images/patient_002_scan_1",
            "baseline_registered": "./images_registered/patient_002_scan_1",
            "followup": "./images/patient_002_scan_2",
            "followup_registered": "./images_registered/patient_002_scan_2"
        },
        "case_02": {
            "baseline": "./images/patient_002_scan_2",
            "baseline_registered": "./images_registered/patient_002_scan_2",
            "followup": "./images/patient_002_scan_3",
            "followup_registered": "./images_registered/patient_002_scan_3"
        },
        "case_03": {
            "baseline": "./images/patient_002_scan_2",
            "baseline_registered": "./images_registered/patient_002_scan_2",
            "followup": "./images/patient_002_scan_4",
            "followup_registered": "./images_registered/patient_002_scan_4"
        }
    }
}
```
Best, Yannick
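To illustrate how a container could walk this structure, here is a rough sketch using jq; it assumes jq is available inside the image, that patients.json sits in the mounted input folder, and that the paths follow the layout shown above:
```bash
INPUT_DIR=/mnt/test_data

# List every (patient, case) pair together with its baseline and follow-up folders.
jq -r 'to_entries[] | .key as $patient | .value | to_entries[]
       | "\($patient) \(.key) baseline=\(.value.baseline) followup=\(.value.followup)"' \
   "$INPUT_DIR/patients.json"

# For a given scan folder, the four modalities are the files
# <folder_name>_0000.nii.gz ... <folder_name>_0003.nii.gz inside it:
ls "$INPUT_DIR/images/patient_001_scan_1"
```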
Thank you @YKirchhoff for the clarification. One more thing I wanted to ask: in the patients.json file, data paths are given rather than the actual .nii.gz images, and we had to extract them from the Imaging folder. For inference, what can we expect in the test json, file paths or the images themselves? If images are present, will all four modalities be provided for each non-segmented image, and if paths are given, from which folder will we extract the .nii.gz images?
Hi @_deepakk, your algorithm needs to read a _patients.json_, which is located in the input folder and details the relative paths to the scans of the respective patients. The _patients.json_ is very similar to the one created by the script for the Lumiere dataset, just the ground truth response labels are of course missing. Check [this page](https://www.synapse.org/Synapse:syn64153130/wiki/631459) for details about the _patients.json_ and [here](https://github.com/MIC-DKFZ/BraTPRO/?tab=readme-ov-file#submission) for some more details about how the docker will be executed. Let me know if you have further questions! Best, Yannick
Hello @YKirchhoff, can you please clarify the input data format that will be used for inference? Is it a folder of samples containing baseline and follow-up images? We might need some preprocessing to map it to the data format we used to train our model. Hoping for a quick response!
Please all keep in mind the [upcoming deadline](https://www.synapse.org/Synapse:syn64153130/discussion/threadId=12068) for validation submissions on July 20th!
Hi @DarylWM, the provided segmentations are purely based on automatic segmentation pipelines, i.e. HD-GLIO-AUTO or DeepBraTumIA, and are neither annotated nor confirmed by a radiologist. For the RANO ratings, on the other hand, an experienced neuroradiologist reviewed all cases. Therefore, this should have no or only a minor effect on trained models, in case you are doing multi-task learning. I have to admit that I am a bit surprised myself by these high numbers, especially for SD and PD. But it shows once again why the problem formulation as a segmentation task might be suboptimal. Best, Yannick
Hi @SatyaM, for preprocessing, images are skull-stripped using HD-BET and co-registered to the T1 sequence. Additionally, follow-up data is provided either as is or co-registered to the baseline. An adaptation of the script provided for the download and conversion of the Lumiere dataset is used for this. Validation and testing data are therefore provided in a very similar way to the training data. Best, Yannick
Hello @YKirchhoff. In the training data labels, there are a great many examples that have no voxels labelled as CE. This is to be expected for CR, but not for PR, SD, and PD.
```
RANO label                  % of training images without CE
Complete Response (CR)      ~65%
Partial Response (PR)       ~13%
Stable Disease (SD)         ~56%
Progressive Disease (PD)    ~48%
```
Given that the RANO class definitions rely on changes in CE, is this observation expected, or is there an error somewhere?
Thank you for the response, I have one other question about the images that we'll be using for validation and testing. 1. Will the images be skull-stripped and registered to some atlas (similar to the skull-stripped atlas data provided in the DeepBraTumIA-segmentation folder of the Lumiere dataset)? 2. If not, will the baseline and follow-up scan sequences have the same dimensions or different ones (for all the sequences)?
Hi @SatyaM, this information will not be available from the folder names, instead we will provide generic names similar to _patient01scan01_. The idea is to predict response **solely** from image information and we plan to do a final analysis against baselines with some probabilistic modeling dependent on just time from baseline etc. You can of course still use all information available in the Lumiere dataset during training. Best, Yannick
Hi @YKirchhoff, we wanted to know whether the scan folder names contain information about the week in which the scans were made (e.g. Patient-001_week-001), just like the Lumiere data, or whether it will be a generic name like 'patient_01_scan_01' and 'patient_01_scan_02'. Thank you for your time.
Again, so everyone has the chance to see it: **Important** It seems like the evaluation queue might not be properly working so please **share the created project with me and write me a mail to yannick.kirchhoff@dkfz-heidelberg.de when doing a submission**. This way I can run the evaluation even when the queue does not work.
Hi all, sorry (again) for the late reply, my Synapse notifications do not seem to be working properly. @aishaque: Results from last year were not published, as we only had a few, not too successful submissions. We hope to get more and better performing submissions within the context of the lighthouse challenge this year. @vansh: Yes, that is correct. We are not allowed to share the data, therefore we rely on algorithm submissions already during the validation phase. Sorry for any confusion here. You can find more information about submitting the docker [here](https://github.com/MIC-DKFZ/BraTPRO/?tab=readme-ov-file#submission). It seems like the evaluation queue might not be working properly, so please **share the created project with me and write me a mail to yannick.kirchhoff@dkfz-heidelberg.de when doing a submission**. This way I can run the evaluation even when the queue does not work. @perkyfever: 1. The provided training set has such cases, however, all test cases are guaranteed to have all four modalities available. 2. You are allowed to use **any** data, which also includes pretrained models. I would actually like to strongly encourage the use of transfer learning here. Let me know if you have any further questions, I will make sure to check this forum more frequently so I don't miss any more messages. Best, Yannick
Hello, my team has several questions regarding the rules for Task 11. I would greatly appreciate it if you could clarify the following details: 1. There are a great number of patients with one missing modality in the training dataset. Can there be such cases in the test dataset as well? 2. Did I understand correctly from @YKirchhoff's response above that classification models must be trained from scratch? Best regards, Daniil
Hi @YKirchhoff, I had a few queries regarding the BraTS PRO 2025 challenge. Based on what I read on the previous year's challenge website, am I correct in stating that, unlike other classification challenges, we do not have the option of submitting CSV files created from the validation data during the validation phase and viewing a leaderboard, and instead have to submit the trained model directly in Docker format under the BraTS 2025 Lighthouse challenge? Please correct me if I am wrong. Sincerely, Vansh
Thanks! Where can I access the results from last year's challenge? I wasn't able to find them.
Hi @aishaque, apologies for the late reply. Sorry for the confusion, validation data is NOT publicly released for this subtask; instead, submissions to the validation phase already need to be made by submitting a trained model. This was probably a mistake in copy/pasting the timeline from the other subtasks. We first planned to use only one case per patient, i.e. two timepoints. However, we have more timepoints available for most patients and therefore decided to use all available timepoints in the test phase. A single case still consists of two timepoints though. Yes, the task and requirements for models are the same as in last year's challenge, just slightly updated to use a larger test set (last year had two tasks, so the dataset was split into train/test, which we will use completely for testing this year). Let me know if you have any further questions! Best, Yannick
Hello, Just wanted to follow up on this if someone can reply please! Best, Abdullah
We are interested in the brain tumour progression task for BraTS 2025, and we are registered for it. Would you please clarify the following for me: The document listed here (https://zenodo.org/records/15094823) says that by April 1 2025 the training and validation data will be released. However, on Synapse I have access to the training LUMIERE database, but no validation dataset is available yet. The same page says that the testing data will have 300 patients with two timepoints each, for 600 scans. However, on Synapse it says the testing dataset will have 300 patients with 1010 cases. Could you clarify that please? Is this challenge identical to the BraTS glioma progression challenge from 2024 (https://www.synapse.org/Synapse:syn53752772/wiki/)? To me it seems like the model requirements are still the same, with the model needing to read two images and provide output class probabilities for the four different response types per the RANO criteria. Looking forward to your response!
