Hi, I am having trouble verifying that modifying the `epochs_per_round` value returned by the `constant_hyper_parameters` function passed to `run_challenge_experiment` actually does anything.

The steps I followed:
- used the GitHub code from https://github.com/FETS-AI/Challenge,
- followed the installation as specified in the `README.md` file of Task 1,
- ran `FeTS_Challenge.py` directly, on the `small_split.csv` partitioning.

The functions given as parameters to `run_challenge_experiment` are therefore, as initially provided:

```
aggregation_function = weighted_average_aggregation
choose_training_collaborators = all_collaborators_train
training_hyper_parameters_for_round = constant_hyper_parameters
```

This gives, for every round (not only round 0):

```
INFO  Collaborators chosen to train for round 0: ['1', '2', '3']  (experiment.py:396)
INFO  Hyper-parameters for round 0:  (experiment.py:424)
          learning rate: 5e-05
          epochs_per_round: 1.0
INFO  Waiting for tasks...  (collaborator.py:178)
INFO  Sending tasks to collaborator 3 for round 0  (aggregator.py:312)
INFO  Received the following tasks: ['aggregated_model_validation', 'train', 'locally_tuned_model_validation']  (collaborator.py:168)
[14:27:18] INFO  Using TaskRunner subclassing API  (collaborator.py:253)
********************
Starting validation :
********************
Looping over validation data: 100%|██████████| 1/1 [00:06<00:00, 6.83s/it]
Epoch Final validation loss : 1.0
Epoch Final validation dice : 0.2386646866798401
Epoch Final validation dice_per_label : [0.9437699913978577, 0.007874629460275173, 0.0030141547322273254, 2.570958582744781e-13]
[14:27:25] INFO  1.0  (fets_challenge_model.py:48)
INFO  {'dice': 0.2386646866798401, 'dice_per_label': [0.9437699913978577, 0.007874629460275173, 0.0030141547322273254, 2.570958582744781e-13]}  (fets_challenge_model.py:49)
METRIC  Round 0, collaborator 3 is sending metric for task aggregated_model_validation: valid_loss 1.000000  (collaborator.py:416)
METRIC  Round 0, collaborator 3 is sending metric for task aggregated_model_validation: valid_dice 0.238665  (collaborator.py:416)
METRIC  Round 0, collaborator 3 is sending metric for task aggregated_model_validation: valid_dice_per_label_0 0.943770  (collaborator.py:416)
METRIC  Round 0, collaborator 3 is sending metric for task aggregated_model_validation: valid_dice_per_label_1 0.007875  (collaborator.py:416)
METRIC  Round 0, collaborator 3 is sending metric for task aggregated_model_validation: valid_dice_per_label_2 0.003014  (collaborator.py:416)
METRIC  Round 0, collaborator 3 is sending metric for task aggregated_model_validation: valid_dice_per_label_4 0.000000  (collaborator.py:416)
INFO  Collaborator 3 is sending task results for aggregated_model_validation, round 0  (aggregator.py:486)
METRIC  Round 0, collaborator validate_agg aggregated_model_validation result valid_loss: 1.000000  (aggregator.py:531)
METRIC  Round 0, collaborator validate_agg aggregated_model_validation result valid_dice: 0.238665  (aggregator.py:531)
METRIC  Round 0, collaborator validate_agg aggregated_model_validation result valid_dice_per_label_0: 0.943770  (aggregator.py:531)
METRIC  Round 0, collaborator validate_agg aggregated_model_validation result valid_dice_per_label_1: 0.007875  (aggregator.py:531)
METRIC  Round 0, collaborator validate_agg aggregated_model_validation result valid_dice_per_label_2: 0.003014  (aggregator.py:531)
METRIC  Round 0, collaborator validate_agg aggregated_model_validation result valid_dice_per_label_4: 0.000000  (aggregator.py:531)
INFO  Using TaskRunner subclassing API  (collaborator.py:253)
INFO  Run 0 epoch of 0 round  (fets_challenge_model.py:143)
********************
Starting Training :
********************
Looping over training data:   0%|          | 0/80 [00:00
```

Created by Matthis Manthe
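For context, the hyper-parameter function I am modifying looks roughly like this. This is a sketch based on the Task 1 template in my checkout; the exact signature, unused arguments, and return convention may differ in other versions of the repo:

```python
def constant_hyper_parameters(collaborators,
                              db_iterator,
                              fl_round,
                              collaborators_chosen_each_round,
                              collaborator_times_per_round):
    """Return the same hyper-parameters for every round.

    Sketch of the Task 1 template: the return convention
    (learning_rate, epochs_per_round, batches_per_round) is assumed
    from my checkout and may differ in yours.
    """
    learning_rate = 5e-5
    # This is the value I change (e.g. to 3.0) with no visible effect
    # on how long each collaborator trains per round.
    epochs_per_round = 1.0
    batches_per_round = None
    return (learning_rate, epochs_per_round, batches_per_round)
```

The `Hyper-parameters for round 0` log line above matches these values, so the function is clearly being called; only the change to `epochs_per_round` seems to have no effect on training.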
@Matthis You are correct. Non-integer values of `epochs_per_round` are not supported this year.
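In other words, returning a plain `int` for `epochs_per_round` should take effect. A minimal sketch, assuming the template's `(learning_rate, epochs_per_round, batches_per_round)` return convention; the function name here is hypothetical and the signature should be adjusted to match your checkout:

```python
def two_epoch_hyper_parameters(collaborators,
                               db_iterator,
                               fl_round,
                               collaborators_chosen_each_round,
                               collaborator_times_per_round):
    """Hypothetical variant of the template hyper-parameter function."""
    learning_rate = 5e-5
    # Use a plain int (2), not a float (2.0): non-integer values of
    # epochs_per_round are not supported this year.
    epochs_per_round = 2
    batches_per_round = None
    return (learning_rate, epochs_per_round, batches_per_round)
```

Pass this function as `training_hyper_parameters_for_round` in place of `constant_hyper_parameters` when calling `run_challenge_experiment`.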

Epochs_per_round not working?