Hi @takerum, thanks for releasing the code for the papers. I have several questions about reproducing the CIFAR-10 FID scores reported in your cGAN paper (page 14, Table 3):
- Is the FID score computed using the full 50k CIFAR-10 training-set statistics found in the chainer-gan-lib repository (https://github.com/pfnet-research/chainer-gan-lib/blob/master/common/cifar-10-fid.npz)? If not, how many real and fake images are used to compute the scores in Table 3?
- Is the FID score computed using the function in evaluation.py (https://github.com/pfnet-research/sngan_projection/blob/master/evaluation.py#L220)? That function points to the same statistics file used in the chainer-gan-lib repository.
- Alternatively, I found a related issue ("intra-fid reported in the paper", #34) that shares FID statistics, but those appear to be for intra-FID computation and for ImageNet only. Could you share the CIFAR-10 statistics used in your experiments (or confirm they are the same as those from chainer-gan-lib)?
I have computed FID for your pre-trained CIFAR-10 models (conditional and unconditional) in four configurations, choosing from [your_fid_computation_code, official_TTUR_fid_code] × [FID_stats_in_chainer_gan_lib, official_TTUR_fid_train_stats], and the resulting scores differ by several points, which might be due to the statistics or code being used. It would therefore be very helpful if you could clarify the questions above.
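For reference, here is a minimal sketch of the Fréchet distance I am using when comparing the configurations above (standard formula from precomputed Gaussian statistics; the `eps`-free handling of near-singular covariances and any npz key names such as `"mean"`/`"cov"` are my assumptions, not taken from your code):

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrt(sigma1 @ sigma2)).
    """
    diff = mu1 - mu2
    # Matrix square root of the covariance product; may return tiny
    # imaginary components due to numerical error, which we discard.
    covmean, _ = linalg.sqrtm(sigma1.dot(sigma2), disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff.dot(diff)
                 + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * np.trace(covmean))

# Usage with a stats file (key names "mean"/"cov" are an assumption):
# stat = np.load("cifar-10-fid.npz")
# fid = frechet_distance(stat["mean"], stat["cov"], mu_fake, sigma_fake)
```

Even a small mismatch in which statistics file is loaded changes every term above, which is why I suspect the statistics rather than the formula itself.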
If I have read your papers correctly, I believe the SNGAN (unconditional) CIFAR-10 scores are computed using 10k-5k FID (from Appendix B.1) -- is there any supporting code in the repository for reproducing these results?
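If the 10k-5k protocol means fitting Gaussian statistics to 10k real and 5k generated images separately, this is roughly what I am doing on my side (a sketch under that assumption; the feature arrays below are random placeholders, whereas the real protocol would use Inception activations of the actual images):

```python
import numpy as np

def gaussian_stats(features):
    """Mean vector and covariance matrix of an (N, D) feature matrix."""
    mu = np.mean(features, axis=0)
    sigma = np.cov(features, rowvar=False)
    return mu, sigma

rng = np.random.RandomState(0)
# Placeholders standing in for Inception features of 10k real and
# 5k generated CIFAR-10 images (real features would be higher-dimensional).
real_feats = rng.randn(10000, 64)
fake_feats = rng.randn(5000, 64)

mu_real, sigma_real = gaussian_stats(real_feats)
mu_fake, sigma_fake = gaussian_stats(fake_feats)
# These two (mu, sigma) pairs would then be fed to the FID formula.
```

If this split (or the feature extractor) differs from what was used for Table 3, that alone could explain the score gaps I am seeing.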
Thank you for the help.