As I understand it, the robustness evaluation in UnlearnCanvas is averaged over all 60 styles and 20 objects. However, DiffAtk only provides part of the prompts. Would you consider releasing the prompts used for both the style and object robustness evaluations? Keeping these consistent would help anyone following the benchmark and would save users considerable effort in regenerating the prompts. I would be very grateful if they could be provided.