
L2 Momentum Iterative Attack, risk of NaN outputs #105

@qlero

Description
Hello,
I'm reporting an issue I've encountered with the MomentumIterativeAttack object when using the ord=2 argument. Sometimes the computation produces perturbed outputs (e.g. a batch of 64 MNIST images) that come out as a torch Tensor filled with NaN values.

This is due to this step, lines 434-437, in iterative_projected_gradient.py:

delta.data *= clamp(
    (self.eps * normalize_by_pnorm(delta.data, p=2) /
        delta.data),
    max=1.)

delta.data may sometimes contain 0-valued elements, which produce NaN through the element-wise division (0/0); due to the iterative nature of the algorithm, these NaNs then propagate to the whole batch of images.
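
For illustration, here is a minimal standalone sketch (not library code; the tensor values and the inline norm are stand-ins for normalize_by_pnorm) showing how a single zero element turns the clamp factor into NaN:

import torch

# Sketch of the failure mode: a zero-valued element in delta makes the
# element-wise division evaluate 0/0 = NaN, and clamp() keeps the NaN.
eps = 0.3
delta = torch.tensor([0.0, 0.5, -0.25])
normalized = delta / delta.norm(p=2)                   # stand-in for normalize_by_pnorm(delta, p=2)
factor = torch.clamp(eps * normalized / delta, max=1.)
print(factor)  # tensor([nan, 0.5367, 0.5367]) -- the NaN then spreads on later iterations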

I suggest adding a small value here, e.g. self.eps * normalize_by_pnorm(delta.data, p=2) / (delta.data + 1e-8), to avoid this (akin to adding 1 to the vector vv in the function batch_l1_proj_flat in utils.py, line 239).
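
Concretely, the change would look roughly like this (an untested sketch of lines 434-437, keeping everything else as-is):

delta.data *= clamp(
    (self.eps * normalize_by_pnorm(delta.data, p=2) /
        (delta.data + 1e-8)),  # small constant avoids the 0/0 -> NaN case
    max=1.)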

Best regards,

qlero
