StyleAligned is a diffusion-model editing technique and codebase that preserves the visual "style" of a source image while applying new semantic edits driven by text. Instead of fully regenerating an image, which risks changes to lighting, texture, or rendering choices, the method aligns internal features across denoising steps so that the edited target inherits the source style. This alignment acts as a soft constraint on the denoising trajectory, steering composition, palette, and brushwork even as objects or attributes change. The result is more consistent edits across a set of images, which is crucial for workflows such as product variations, character sheets, and brand-coherent art. The repository provides reproducible scripts, reference prompts, and guidance on tuning alignment strength, so users can dial in anything from subtle retouches to bolder substitutions. Because it builds on widely used diffusion checkpoints, creators can integrate it without any training or dataset collection.
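
As a concrete illustration of the alignment idea, the sketch below implements a minimal shared-attention step in PyTorch: each image in a batch attends to its own keys and values concatenated with those of a style reference, after an AdaIN step that shifts query and key statistics toward the reference. This is a minimal sketch of the general mechanism under those assumptions; the function names, tensor layout, and AdaIN placement are illustrative and not the repository's exact implementation.

```python
import torch
import torch.nn.functional as F


def adain(x: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    """Shift x's per-channel statistics (over tokens) toward those of ref."""
    mu_x, sigma_x = x.mean(dim=1, keepdim=True), x.std(dim=1, keepdim=True) + 1e-6
    mu_r, sigma_r = ref.mean(dim=1, keepdim=True), ref.std(dim=1, keepdim=True) + 1e-6
    return (x - mu_x) / sigma_x * sigma_r + mu_r


def shared_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """q, k, v: (batch, tokens, dim); batch index 0 is the style reference.

    Every image attends to its own keys/values concatenated with the
    reference's, so style statistics flow from the reference to the edits.
    """
    ref_k, ref_v = k[:1], v[:1]                       # reference keys/values
    q = adain(q, q[:1].expand_as(q))                  # align query statistics
    k = adain(k, ref_k.expand_as(k))                  # align key statistics
    k_shared = torch.cat([k, ref_k.expand_as(k)], dim=1)
    v_shared = torch.cat([v, ref_v.expand_as(v)], dim=1)
    return F.scaled_dot_product_attention(q, k_shared, v_shared)
```

Because only key/value features and channel statistics cross the batch, each prompt keeps its own content while the reference's rendering choices propagate to every edit.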
## Features
- Cross-step feature alignment that preserves source style
- Text-driven edits with controllable strength and locality
- Consistent multi-image editing for sets and lookbooks
- Works with common diffusion checkpoints and schedulers (see the usage sketch after this list)
- Reproducible notebooks and scripts for quick adoption
- Useful for brand-coherent variations and character consistency
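
The hedged end-to-end sketch below shows how a style-aligned set might be generated from a batch of prompts on an SDXL checkpoint. The `sa_handler` module and the `Handler`/`StyleAlignedArgs` names are assumptions modeled on the reference implementation's registration pattern; verify the exact API and flags against the repository's notebooks before use.

```python
import torch
from diffusers import StableDiffusionXLPipeline
import sa_handler  # assumed module name from the repository

# Load a widely used SDXL checkpoint; no fine-tuning is required.
pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Register shared attention on the pipeline. The flag names below are
# assumptions; they gate where alignment is applied (attention sharing,
# AdaIN on queries/keys) and thereby how strongly edits inherit the style.
handler = sa_handler.Handler(pipeline)
handler.register(sa_handler.StyleAlignedArgs(
    share_attention=True,
    adain_queries=True,
    adain_keys=True,
    adain_values=False,
))

# The first prompt defines the style reference; the rest inherit it.
prompts = [
    "a toy train, macro photo, 3d game asset",
    "a toy airplane, macro photo, 3d game asset",
    "a toy boat, macro photo, 3d game asset",
]
images = pipeline(prompts).images
for i, image in enumerate(images):
    image.save(f"style_aligned_{i}.png")
```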