**Objective:** Generate images with AI and learn about related pipelines
**Takeaways:**
- Generating (good) images takes a lot of patience and tweaking
- Canny edge detection is really cool
- Seeds and weights can vastly alter outcomes
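The effect of seeds and guidance weights can be sketched numerically: the seed fixes the initial noise the sampler starts from (so the same seed reproduces the same image), while the guidance scale (the 7.2 and 7.5 in the filenames below) weights the prompt-conditioned prediction against the unconditioned one via classifier-free guidance. A minimal NumPy sketch, illustrative only and not the actual Stable Diffusion code:

```python
import numpy as np

def initial_noise(seed, shape=(4, 64, 64)):
    """Same seed -> identical starting latent, hence a reproducible image."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

def cfg(noise_uncond, noise_cond, guidance=7.2):
    """Classifier-free guidance: push the prediction toward the prompt,
    scaled by the guidance weight."""
    return noise_uncond + guidance * (noise_cond - noise_uncond)

a = initial_noise(2777227)
b = initial_noise(2777227)
c = initial_noise(2777228)
print(np.array_equal(a, b))  # same seed, same starting latent
print(np.array_equal(a, c))  # a seed off by one changes everything
```

Higher guidance values push the denoiser harder toward the prompt, which is why small changes to that number (and to the seed) can vastly alter the result.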
**Tools Used:**
- [Stable Diffusion](https://github.com/AUTOMATIC1111/stable-diffusion-webui) 1.4/1.5 running on local PC
- [Dreambooth](https://dreambooth.github.io/) running in Google Colab to train models
Below are a few images that were generated:
![[ego_minifigure__photo__bust__dynamic_lighting__plastic_Seed-2777227_Steps-50_Guidance-7.2.jpeg|295]]
*Lego version of myself, based on a model trained with 15 images*
![[PXL_20230415_011141217.jpg|300]] ![[brz canny.png|305]]
![[BSRGAN.png|300]]
*Cartoon rendering of a photograph. The final image is sharpened with BSRGAN*
![[00051-3115269734-A hyper-detailed 3d render like a Oil painting of the Ocean’s dream of The Construction of a Unified Theory, surrealism!!!!! sur.png|300]] ![[bsr horse.png|300]]
![[00104-1769168959-portrait of half chicken half man, perfect shading, atmospheric lighting, by makoto shinkai, stanley artgerm lau, wlop, rossdraw.png|300]] ![[00011-2173319863.png|300]]
*Good outcomes*
![[horror_movie_poster_starring__Danny_DeVito__Seed-4461025_Steps-50_Guidance-7.5.png|290]]
*Not so good*
#AI #Python