Implementation of "Image Style Transfer Using Convolutional Neural Networks" (Gatys et al.)
This project implements Image Style Transfer, a technique that combines the content of one image with the style of another image to generate a new artistic output.
- Implemented from scratch using PyTorch, without relying on pre-built style transfer libraries.
- Includes key components such as:
- Content loss
- Style loss (Gram matrices)
- Total variation loss
- Optimization with L-BFGS
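The loss components above can be sketched as follows. This is a minimal illustration of the math, not the project's actual code (function names and normalization choices here are my own):

```python
import torch

def gram_matrix(features):
    # features: (channels, height, width) activation map from one network layer
    c, h, w = features.shape
    f = features.view(c, h * w)
    # Normalize so the loss scale does not depend on layer size
    return (f @ f.t()) / (c * h * w)

def content_loss(gen_feat, content_feat):
    # Mean squared error between activations at a chosen layer
    return torch.mean((gen_feat - content_feat) ** 2)

def style_loss(gen_feat, style_feat):
    # MSE between Gram matrices captures texture statistics of the style image
    return torch.mean((gram_matrix(gen_feat) - gram_matrix(style_feat)) ** 2)

def tv_loss(img):
    # Total variation: penalizes differences between neighboring pixels,
    # encouraging a smoother output image
    return (torch.mean((img[..., 1:, :] - img[..., :-1, :]) ** 2)
            + torch.mean((img[..., :, 1:] - img[..., :, :-1]) ** 2))
```

The total objective is a weighted sum of these terms, with `alpha` scaling the content loss and `beta` scaling the style loss.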
For a detailed explanation of the project, concepts, and my learning journey, check out my blog post: Here
- PyTorch implementation of Neural Style Transfer.
- Optimized with the L-BFGS optimizer for faster convergence.
- Adjustable weights (`alpha` for content, `beta` for style).
- Extendable codebase for experiments with new loss functions or architectures.
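Because L-BFGS may re-evaluate the loss several times per step, PyTorch requires the loss computation to be wrapped in a closure. A minimal sketch of such an optimization loop (the `optimize` helper and its signature are illustrative, not the project's actual API):

```python
import torch

def optimize(generated, total_loss_fn, iterations=500):
    # generated: the image tensor being optimized in place
    # total_loss_fn: callable returning the combined content/style/TV loss
    optimizer = torch.optim.LBFGS([generated.requires_grad_(True)])
    run = [0]  # closure call counter, mutated from inside the closure
    while run[0] < iterations:
        def closure():
            # L-BFGS may call this multiple times per step,
            # so gradients must be re-zeroed on every call
            optimizer.zero_grad()
            loss = total_loss_fn(generated)
            loss.backward()
            run[0] += 1
            return loss
        optimizer.step(closure)
    return generated.detach()
```

The same loop works for any differentiable objective; for style transfer, `total_loss_fn` would combine the weighted content, style, and total variation losses.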
```bash
# Clone repository
git clone https://github.com/nirdesh17/style-transfer.git
cd style-transfer
```

Run style transfer with your own content and style images:

```bash
python src/run.py --content path/to/content.jpg --style path/to/style.jpg --output output.jpg
```

Optional flags:
- `iterations`: Number of optimization steps (default: 500)
- `alpha`: Content weight (default: 1)
- `beta`: Style weight (default: 1e6)
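A CLI with these flags could be wired up with `argparse` as below. This is a sketch mirroring the documented defaults; the actual `src/run.py` may parse arguments differently:

```python
import argparse

def build_parser():
    # Flag names and defaults follow the README; treat them as assumptions
    parser = argparse.ArgumentParser(description="Neural style transfer")
    parser.add_argument("--content", required=True, help="path to content image")
    parser.add_argument("--style", required=True, help="path to style image")
    parser.add_argument("--output", default="output.jpg", help="path for the result")
    parser.add_argument("--iterations", type=int, default=500,
                        help="number of optimization steps")
    parser.add_argument("--alpha", type=float, default=1.0, help="content weight")
    parser.add_argument("--beta", type=float, default=1e6, help="style weight")
    return parser
```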
| Content Image | Style Image | Output |
|---|---|---|
| ![]() | ![]() | ![]() |
| ![]() | ![]() | ![]() |
| ![]() | ![]() | ![]() |
```
style-transfer/
│── src/
│   ├── run.py
│── images/
│   ├── content
│   ├── style
│   ├── outputs
│── README.md
```
- Multi-style transfer (combine multiple styles).
- Real-time transfer using feed-forward networks.
- Web/GUI demo for interactive use.