Thank you for your solid work.
Does the repo implement logic that picks the model weights that perform best on the validation set and then evaluates those weights on the test set?
From the code below, it looks like the repo directly takes the best result on the test set as the final result:
Elevater_Toolkit_IC/vision_benchmark/evaluation/full_model_finetune.py
Lines 267 to 277 in 00d0af7
```python
train_one(train_dataloader, model, criterion, optimizer, epoch, config)

# evaluate on validation set
acc1, logits = validate(test_dataloader, model, criterion, epoch, config, return_logits=True)

# remember best acc@1 and save checkpoint
if acc1 > best_acc1:
    model_info['best_logits'] = logits
best_acc1 = max(acc1, best_acc1)

logging.info(f'=> Learning rate {config.TRAIN.LR}, L2 lambda {config.TRAIN.WD}: Best score: Acc@1 {best_acc1:.3f}')
```
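To illustrate the distinction the question is raising: the standard protocol tracks validation accuracy per epoch, picks the epoch with the best validation score, and reports that epoch's test score, rather than reporting the peak test score directly. The sketch below is a hypothetical illustration with made-up numbers, not the toolkit's code:

```python
def select_best_epoch(val_scores, test_scores):
    """Pick the epoch with the highest validation score,
    then report that epoch's test score (not the peak test score)."""
    best_epoch = max(range(len(val_scores)), key=lambda e: val_scores[e])
    return best_epoch, test_scores[best_epoch]

# Hypothetical per-epoch accuracies: the best val epoch (1) differs
# from the best test epoch (2), which is exactly the concern here.
val_scores = [70.0, 75.0, 73.0]
test_scores = [68.0, 71.0, 74.0]

epoch, test_acc = select_best_epoch(val_scores, test_scores)
print(epoch, test_acc)  # epoch 1 is selected, so 71.0 is reported, not the 74.0 peak
```

In the snippet quoted above, `validate` is called on `test_dataloader`, so `best_acc1` tracks the maximum over the test set itself, which conflates model selection with final evaluation.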