Hello! For the testing phase (i.e., Docker submission), there is mention that two working submissions will be used per team. Does this mean that you will run both submissions and take our best score? Thank you!

Created by James Grover jgro4702525
@kklay, glad to hear it worked. Kind regards, Himashi
Thank you so much @hap. The fix: `--input` should default to '/workspace/input/Testing_data' (my code had 'Testingdata', missing the underscore). Two follow-ups:
1. The log link you provided now shows "You don't have permission to view this." I am able to see the logs at https://www.synapse.org/Synapse:syn68906573 instead.
2. The line `#ENTRYPOINT ["bash", "/workspace/test.sh"]` was already commented out in my code; the hash was missing in the web view due to Markdown formatting.
I succeeded, thanks again!
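For anyone hitting the same path mismatch, a minimal fail-fast sketch (purely illustrative; the path is the one established in this thread) that exits with a readable message instead of a FileNotFoundError deep inside preprocessing:

```python
# Fail-fast sanity check at container start: if the expected input mount is
# missing, exit with a clear message rather than crashing later in prep.py.
import os
import sys

INPUT_DIR = "/workspace/input/Testing_data"

if not os.path.isdir(INPUT_DIR):
    sys.exit(f"[ERROR] Expected input directory not found: {INPUT_DIR}")
print(f"[INFO] Input directory OK: {INPUT_DIR}")
```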
Hi @kklay,
You can find the logs here under your submission ID: https://www.synapse.org/Synapse:syn68924792
I suspect the issue is with this entry in your Dockerfile: `ENTRYPOINT ["bash", "/workspace/test.sh"]`. Since you already have the line `ENTRYPOINT ["python3", "/workspace/inference.py"]`, can you remove the bash ENTRYPOINT and try again?
Kind regards, Himashi
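A quick way to confirm which ENTRYPOINT actually survives in the built image (only the last ENTRYPOINT instruction in a Dockerfile takes effect). This is a sketch; `my-submission` is a placeholder image tag, and the Docker CLI is assumed to be installed:

```python
# Print the effective ENTRYPOINT of a built image via `docker inspect`.
# Docker keeps only the last ENTRYPOINT in a Dockerfile, so this shows
# what will actually run inside the container.
import json
import subprocess

out = subprocess.run(
    ["docker", "inspect", "--format", "{{json .Config.Entrypoint}}", "my-submission"],
    capture_output=True, text=True, check=True,
)
print("Effective ENTRYPOINT:", json.loads(out.stdout))
```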
Hi @hap, First, thanks for your reply. I have modified the code as you suggested (see below), but my Docker submission is still marked invalid, and there is no log in the Testing Logs.

```python
def parse_args():
    p = argparse.ArgumentParser(description="Simple inference entry: parse args and run 4 steps.")
    p.add_argument('--input', type=str, default='/workspace/input/Testingdata',
                   help='Path to input test data')
    p.add_argument('--output', type=str, default='/workspace/output/Predictions',
                   help='Path to output predictions')
    args = p.parse_args()
    return args.input, args.output

def run(cmd):
    print("[RUN]", " ".join(cmd), flush=True)
    subprocess.run(cmd, check=True)

def main():
    input_dir, output_dir = parse_args()
    print(f"[INFO] Input directory: {input_dir}", flush=True)
    print(f"[INFO] Output directory: {output_dir}", flush=True)
    os.environ["INPUT_DIR"] = input_dir
    os.environ["OUTPUT_DIR"] = output_dir
    print(f"[INFO] Using INPUT_DIR={input_dir}", flush=True)
    print(f"[INFO] Using OUTPUT_DIR={output_dir}", flush=True)

    # -------- Preprocessing --------
    print("[INFO] Step1: preprocess", flush=True)
    run(["python", "-u", "data_preprocess/prep.py", "--input", input_dir])

    # -------- Inference (3 modalities) --------
    print("[INFO] Step2: inference - FLAIR", flush=True)
    run(["python", "-u", "main.py", "--mode", "test", "--run_name", "FLAIR", "--test_dataset", "flair_test"])
    print("[INFO] Step2: inference - T1", flush=True)
    run(["python", "-u", "main.py", "--mode", "test", "--run_name", "t1", "--test_dataset", "t1_test"])
    print("[INFO] Step2: inference - T2", flush=True)
    run(["python", "-u", "main.py", "--mode", "test", "--run_name", "t2", "--test_dataset", "t2_test"])

    print("[INFO] Done.", flush=True)
```
Hi @kklay,
Can you please add these two lines to the argparse and retry:

```python
parser.add_argument('--input', type=str, default='/workspace/input/Testing_data', help='Path to input test data')
parser.add_argument('--output', type=str, default='/workspace/output/Predictions', help='Path to output predictions')
```

Kind regards, Himashi
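For context, a minimal self-contained sketch of the suggested setup (defaults exactly as above). When the evaluation harness runs the container without passing extra arguments, `parse_args()` falls back to these defaults, so they must point at the paths the harness actually mounts:

```python
# Minimal argparse sketch: with no command-line arguments, the defaults below
# are what the script sees, so they must match the mounted container paths.
import argparse

parser = argparse.ArgumentParser(description="ULF-EnC inference entry point")
parser.add_argument('--input', type=str, default='/workspace/input/Testing_data',
                    help='Path to input test data')
parser.add_argument('--output', type=str, default='/workspace/output/Predictions',
                    help='Path to output predictions')

args = parser.parse_args()
print(f"[INFO] input={args.input}  output={args.output}")
```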
Thanks @hap, okay. Below are my Dockerfile and inference script.

Dockerfile:

```dockerfile
FROM pytorch/pytorch:1.11.0-cuda11.3-cudnn8-runtime
WORKDIR /workspace

COPY requirements.txt /workspace/requirements.txt
RUN pip install --no-cache-dir -r /workspace/requirements.txt

COPY . /workspace

#ENTRYPOINT ["bash", "/workspace/test.sh"]
ENTRYPOINT ["python3", "/workspace/inference.py"]
```

inference.py:

```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
import sys
import argparse
import subprocess

def parse_args():
    p = argparse.ArgumentParser(description="Simple inference entry: parse args and run 4 steps.")
    # Support both positional arguments and --input/--output flags
    p.add_argument("pos_input", nargs="?", help="Input directory (positional arg 1)")
    p.add_argument("pos_output", nargs="?", help="Output directory (positional arg 2, optional, not used in this script)")
    p.add_argument("--input", dest="flag_input", help="Input directory (optional)")
    p.add_argument("--output", dest="flag_output", help="Output directory (optional)")
    args = p.parse_args()
    # Priority: --flag > positional > environment variable (optional) > default
    input_dir = args.flag_input or args.pos_input or os.environ.get("INPUT_DIR") or "/input"
    output_dir = args.flag_output or args.pos_output or os.environ.get("OUTPUT_DIR") or "/output"
    return input_dir, output_dir

def run(cmd):
    print("[RUN]", " ".join(cmd), flush=True)
    subprocess.run(cmd, check=True)

def main():
    input_dir, output_dir = parse_args()
    # Print the input/output directories
    print(f"[INFO] Input directory: {input_dir}", flush=True)
    print(f"[INFO] Output directory: {output_dir}", flush=True)
    # Expose to the subprocess environment so other scripts can read them if needed
    os.environ["INPUT_DIR"] = input_dir
    os.environ["OUTPUT_DIR"] = output_dir
    print(f"[INFO] Using INPUT_DIR={input_dir}", flush=True)
    print(f"[INFO] Using OUTPUT_DIR={output_dir}", flush=True)

    # -------- Preprocessing --------
    print("[INFO] Step1: preprocess", flush=True)
    run(["python", "-u", "data_preprocess/prep.py", "--input", input_dir])

    # -------- Inference (3 modalities) --------
    print("[INFO] Step2: inference - FLAIR", flush=True)
    run(["python", "-u", "main.py", "--mode", "test", "--run_name", "FLAIR", "--test_dataset", "flair_test"])
    print("[INFO] Step2: inference - T1", flush=True)
    run(["python", "-u", "main.py", "--mode", "test", "--run_name", "t1", "--test_dataset", "t1_test"])
    print("[INFO] Step2: inference - T2", flush=True)
    run(["python", "-u", "main.py", "--mode", "test", "--run_name", "t2", "--test_dataset", "t2_test"])

    print("[INFO] Done.", flush=True)

if __name__ == "__main__":
    try:
        main()
    except subprocess.CalledProcessError as e:
        # If a subprocess fails, exit with the same return code so the
        # evaluation system can detect the failure
        sys.exit(e.returncode)
```

Error information:

```
[BOOT] start
[INFO] INPUT_DIR=/input
[INFO] OUTPUT_DIR=/output
Torch: 1.11.0
CUDA avail: True
CUDA: 11.3
Torchvision: 0.12.0
GPU: NVIDIA RTX A6000
[INFO] Step1: preprocess
Traceback (most recent call last):
  File "data_preprocess/prep.py", line 103, in <module>
    main()
  File "data_preprocess/prep.py", line 62, in main
    patients = sorted(d for d in os.listdir(INPUT_ROOT) if osp.isdir(osp.join(INPUT_ROOT, d)))
FileNotFoundError: [Errno 2] No such file or directory: '/input/Testing_data'
```
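The log above shows the container fell back to the hard-coded `/input` default, i.e. no arguments reached the script. One way to reproduce the harness locally (a sketch; the host paths and the `my-submission` image tag are assumptions) is to run the image with volume mounts only and no extra arguments, so the script's defaults and env fallbacks are exercised exactly as in the failing run:

```python
# Reproduce the evaluation run locally: mount local data at the expected
# container paths and pass no arguments, so the defaults are what get used.
# "/path/to/..." and "my-submission" are placeholders.
import subprocess

subprocess.run([
    "docker", "run", "--rm", "--gpus", "all",
    "-v", "/path/to/Testing_data:/workspace/input/Testing_data:ro",
    "-v", "/path/to/Predictions:/workspace/output/Predictions",
    "my-submission",
], check=True)
```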
Hi @kklay,
Could you please share what your argparse looks like? Also your Dockerfile contents, if possible. Can you also check whether your Docker template aligns with the sample Docker given under the testing phase guidelines: https://www.synapse.org/Synapse:syn65485242/wiki/633830
To answer your question: when running the Docker, we provide absolute paths.
Kind Regards, Himashi
Dear @hap, I have a question about how the input path is loaded. My Docker image runs correctly on my local machine when I mount local data, yet every online submission is marked invalid. Could you please explain how the input and output directories are mounted? The logs show that no command-line arguments are received. Alternatively, could you provide the exact absolute paths for the input and output? Using those paths directly might allow my submission to pass. Thank you!
Dear @trotteligerotter,
You can test your Docker locally on the validation dataset we have provided before making a submission ([Guidelines](https://www.synapse.org/Synapse:syn65485242/wiki/633830)). Please make sure that the predictions are correctly aligned with the headers of the input NIfTI files. Log files for each submission ID can be viewed under the Files tab: [Testing Logs](https://www.synapse.org/Synapse:syn68906573).
Please note that scores are not released during the submission phase; at this stage we only check whether the Docker is acceptable and runs correctly with the required input and output parameters. During this phase, we also test the submission only on a subset of the testing set to ensure basic functionality. Final evaluations on the complete testing set, along with the top scorers, will be announced after the testing phase concludes.
Regarding your suggestions: we do not send scores by email for accepted submissions during the submission phase, as the purpose is to ensure technical correctness rather than performance evaluation. As for sanity thresholds (e.g., only counting solutions as accepted if SSIM > 0.6), we do not enforce such criteria at submission time. All submissions that run correctly with the specified commands are accepted for the final testing evaluation. Performance-based filtering is done only after the testing phase, during the final evaluation.
Kind Regards,
The ULF-EnC Challenge Organizers
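On the header-alignment point above, a minimal sketch (nibabel assumed available; the file paths are placeholders, and the identity copy stands in for real model output) of writing a prediction so it inherits the affine and header of the corresponding input NIfTI:

```python
# Keep predictions aligned with the input NIfTI: reuse the input image's
# affine and header instead of letting nibabel fill in defaults.
import nibabel as nib

inp = nib.load("/workspace/input/Testing_data/sub-001/flair.nii.gz")  # hypothetical path
pred_array = inp.get_fdata()  # placeholder for the actual enhanced prediction
out = nib.Nifti1Image(pred_array, affine=inp.affine, header=inp.header)
nib.save(out, "/workspace/output/Predictions/sub-001/flair.nii.gz")  # hypothetical path
```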
@KhTohidulIslam Before flooding the Docker queue: will we get log files of failed attempts? Will we get the two scores of the "working" submissions? This would simplify debugging. Sending the score of an accepted submission by email would at least give us another shot if we messed up something in the Docker image that does not result in an error, but completely wrong outputs. Or do you perhaps have a sanity threshold, so that you only count solutions as "accepted" if the SSIM is > 0.6, for example?
Hi! I would just like to confirm that the Docker submission deadline (as per the challenge wiki) is August 25th, also at 11:59 UTC?
Hello @jgro4702525 Yes, we will consider the best score from your two accepted submissions. Please note that each team may submit their Docker until they have reached two accepted submissions. Once two accepted submissions have been made, no further submissions will be allowed. Best regards, The ULF-EnC Challenge Organizers
