Description
I’m interested in post-training existing Diffusion Renderer checkpoints to potentially improve visual realism, generate new G-buffers, and condition the diffusion process on additional inputs (e.g., Canny edge maps) using Cosmos Transfer.
• Can the current training script be used directly for this type of fine-tuning, or are there specific caveats I should be aware of?
• Are there plans to release more detailed training documentation or implementation notes specific to the Diffusion Renderer setup?
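For context, here is a minimal sketch of the kind of extra conditioning input I have in mind. It builds an edge map from an RGB image and stacks it as an additional channel. Everything here is hypothetical: the function names are my own, a gradient-magnitude threshold stands in for a proper Canny detector (e.g., `cv2.Canny`), and the actual conditioning interface expected by Diffusion Renderer / Cosmos Transfer may look quite different.

```python
import numpy as np

def edge_condition_map(rgb: np.ndarray, thresh: float = 0.2) -> np.ndarray:
    """Binary edge map from an HxWx3 image in [0, 1].

    Gradient-magnitude stand-in for a Canny detector; purely
    illustrative of the extra conditioning signal, not the
    repo's actual preprocessing.
    """
    gray = rgb.mean(axis=-1)
    gy, gx = np.gradient(gray)           # finite-difference gradients
    mag = np.hypot(gx, gy)
    mag = mag / (mag.max() + 1e-8)       # normalize to [0, 1]
    return (mag > thresh).astype(np.float32)

def stack_conditions(rgb: np.ndarray, edges: np.ndarray) -> np.ndarray:
    """Append the edge map as a fourth channel for a conditioned model."""
    return np.concatenate([rgb, edges[..., None]], axis=-1)

# Toy example: an image with a single vertical step edge.
img = np.zeros((8, 8, 3), dtype=np.float32)
img[:, 4:, :] = 1.0
cond = stack_conditions(img, edge_condition_map(img))
print(cond.shape)  # (8, 8, 4)
```

My main question is whether an extra channel like this can be fed through the existing training script, or whether the conditioning pathway would need code changes.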