Could you help fix the potential backdoor vulnerability caused by two risky pre-trained models used in this repo? #32

@Slverhand

Description

Hi @markkua, I'd like to report that two potentially risky pretrained models are used in this project, which may pose backdoor threats. Please check the following code excerpts:

diffusers/tests/pipelines/stable_diffusion/test_onnx_stable_diffusion_upscale.py

```python
class OnnxStableDiffusionUpscalePipelineFastTests(OnnxPipelineTesterMixin, unittest.TestCase):
    # TODO: is there an appropriate internal test set?
    hub_checkpoint = "ssube/stable-diffusion-x4-upscaler-onnx"

    def test_pipeline_default_ddpm(self):
        pipe = OnnxStableDiffusionUpscalePipeline.from_pretrained(self.hub_checkpoint, provider="CPUExecutionProvider")
        pipe.set_progress_bar_config(disable=None)

        inputs = self.get_dummy_inputs()
        image = pipe(**inputs).images
        image_slice = image[0, -3:, -3:, -1].flatten()
```

diffusers/tests/pipelines/stable_diffusion/test_onnx_stable_diffusion_img2img.py

```python
class OnnxStableDiffusionImg2ImgPipelineFastTests(OnnxPipelineTesterMixin, unittest.TestCase):
    hub_checkpoint = "hf-internal-testing/tiny-random-OnnxStableDiffusionPipeline"

    def test_pipeline_default_ddim(self):
        pipe = OnnxStableDiffusionImg2ImgPipeline.from_pretrained(self.hub_checkpoint, provider="CPUExecutionProvider")
        pipe.set_progress_bar_config(disable=None)

        inputs = self.get_dummy_inputs()
        image = pipe(**inputs).images
        image_slice = image[0, -3:, -3:, -1].flatten()
```

Issue Description

As shown above, in the test_onnx_stable_diffusion_upscale.py file, the model "ssube/stable-diffusion-x4-upscaler-onnx" is passed to the from_pretrained() method of the OnnxStableDiffusionUpscalePipeline class from the diffusers library. Running the test method will automatically download and load this model, and the subsequent pipe(**inputs) call executes it. Similarly, in the test_onnx_stable_diffusion_img2img.py file, the model "hf-internal-testing/tiny-random-OnnxStableDiffusionPipeline" is also automatically downloaded, loaded, and executed.

Both of these models are flagged as risky on the Hugging Face platform: their model.onnx files are marked as risky and may carry backdoor threats. For certain specific inputs, a backdoor in the models could be activated, effectively altering the model's behavior.
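Because the risk reports flag the model.onnx files themselves, one possible mitigation, sketched below under the assumption that maintainers can record a known-good digest for each vetted file, is to verify the downloaded file's SHA-256 hash before handing it to the ONNX runtime. The helper names here are hypothetical, not part of diffusers:

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file from disk and return its hex SHA-256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_model_file(path: Path, expected_digest: str) -> None:
    """Refuse to proceed if the file does not match the pinned digest."""
    actual = sha256_of(path)
    if actual != expected_digest:
        raise RuntimeError(
            f"{path} has digest {actual}, expected {expected_digest}; "
            "refusing to load a possibly tampered model file"
        )
```

A test could call verify_model_file on the cached model.onnx before constructing the pipeline, so a swapped or tampered file fails loudly instead of being executed.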

Related risk reports: the ssube/stable-diffusion-x4-upscaler-onnx risk report and the hf-internal-testing/tiny-random-OnnxStableDiffusionPipeline risk report.

Suggested Repair Methods

  1. Replace these models with safer official alternatives, such as stabilityai/stable-diffusion-x4-upscaler and stabilityai/stable-diffusion-2-inpainting (or other models). If a needed format is not available, you may convert these models to ONNX and substitute them accordingly.
  2. If replacement is not feasible, please emit a warning about the potential security risk when the relevant classes are instantiated.
  3. Temporarily disable these tests and wait for an official fix from diffusers.
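Suggestion 2 could look like the following minimal sketch, assuming maintainers keep an explicit allowlist of vetted checkpoint IDs; the allowlist contents and the helper name are hypothetical, not diffusers API:

```python
import warnings

# Hypothetical allowlist of checkpoint repo IDs vetted by maintainers.
TRUSTED_CHECKPOINTS = {
    "stabilityai/stable-diffusion-x4-upscaler",
}


def check_checkpoint(repo_id: str) -> str:
    """Warn (but do not block) when a checkpoint is not on the vetted allowlist."""
    if repo_id not in TRUSTED_CHECKPOINTS:
        warnings.warn(
            f"Checkpoint '{repo_id}' is not on the vetted allowlist; "
            "loading it may execute a potentially backdoored model.",
            UserWarning,
            stacklevel=2,
        )
    return repo_id


# The tests would then load checkpoints via, for example:
# pipe = OnnxStableDiffusionUpscalePipeline.from_pretrained(
#     check_checkpoint(hub_checkpoint), provider="CPUExecutionProvider")
```

Since the helper returns the repo ID unchanged, it can wrap the existing from_pretrained calls without altering test behavior, while still surfacing a visible warning for unvetted models.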

As one of the most popular machine learning libraries (548 stars), every potential risk could be propagated and amplified. Could you please address the above issues?

Thanks for your help~

Best regards,
Sliverhand
