Description: Algorithmically-generated AI-generated artwork of a futuristic city left in destruction.png
Algorithmically-generated AI science-fiction artwork depicting a futuristic metropolis left in complete devastation, created using the Stable Diffusion V1-4 AI diffusion model.
- Procedure/Methodology
All artworks were created using a single NVIDIA RTX 3090. The front-end used for the entire generation process was the Stable Diffusion web UI created by AUTOMATIC1111.
A single 768x512 image was generated with txt2img using the following prompt and settings; an approximate script equivalent is sketched after the settings:
Prompt: highly detailed, high quality, digital painting of a destroyed and lifeless futuristic city in ruin and disarray, planets, galaxies, art style of Juan Wijngaard and Albert Bierstadt, ray traced, octane render, 8k
Negative prompt: none
Settings: Steps: 50, Sampler: Euler a, CFG scale: 7, Size: 768x512
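
The generation itself was done through the web UI rather than any script, but the following sketch approximates the same txt2img step with the Hugging Face diffusers library, purely for illustration. The model ID "CompVis/stable-diffusion-v1-4" and the use of the Euler ancestral scheduler as the counterpart of the web UI's "Euler a" sampler are assumptions, not part of the original workflow.

import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

# Load Stable Diffusion v1-4 in half precision on the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
# "Euler a" in the web UI corresponds to the Euler ancestral scheduler.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

prompt = (
    "highly detailed, high quality, digital painting of a destroyed and lifeless "
    "futuristic city in ruin and disarray, planets, galaxies, art style of "
    "Juan Wijngaard and Albert Bierstadt, ray traced, octane render, 8k"
)
image = pipe(
    prompt,
    width=768,
    height=512,
    num_inference_steps=50,  # Steps: 50
    guidance_scale=7,        # CFG scale: 7
).images[0]
image.save("txt2img_768x512.png")
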
Afterwards, the image was extended by 128 pixels at a time along the top, bottom, left and right sides using fourteen successive passes of the "Outpainting mk2" script within img2img, each pass adding further detail, until the image reached its natively generated size of 2048x1024 (prior to any upscaling). Each pass used 100 sampling steps with Euler a, a denoising strength of 0.8, a CFG scale of 7, a mask blur of 8, a fall-off exponent of 1.8, and a colour variation of 0.03. This increased the field of view compared to the originally generated image, from one tiny portion of the cityscape at the centre of the frame to a significantly wider view of the foreground debris.
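
The extension itself relied on the web UI's "Outpainting mk2" script, which fills the new border with specially varied noise before denoising. The rough sketch below only illustrates the general idea of one such pass (pad one edge by 128 pixels, then inpaint the padded strip); the inpainting model "runwayml/stable-diffusion-inpainting" and the helper function outpaint_right are assumptions for illustration, not the script actually used.

import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

def outpaint_right(image: Image.Image, prompt: str, pixels: int = 128) -> Image.Image:
    """Extend the image to the right by `pixels` and regenerate the new strip."""
    w, h = image.size
    canvas = Image.new("RGB", (w + pixels, h))   # black-filled extension
    canvas.paste(image, (0, 0))
    # White = area to regenerate, black = area to keep.
    mask = Image.new("L", canvas.size, 0)
    mask.paste(255, (w, 0, w + pixels, h))
    return pipe(
        prompt=prompt,
        image=canvas,
        mask_image=mask,
        width=canvas.width,
        height=canvas.height,
        num_inference_steps=100,  # 100 sampling steps per pass
        guidance_scale=7,         # CFG scale 7
    ).images[0]

Repeating such passes in each direction (ten horizontal and four vertical extensions of 128 pixels) accounts for the fourteen passes that take 768x512 up to 2048x1024.
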
Then, two passes of the SD upscale script using "SwinIR_4x" were run within img2img. The first pass used a tile overlap of 128, a denoising strength of 0.01 (anything above 0.03 loses a lot of detail in the burning city grid in the distance), 150 sampling steps with Euler a, and a CFG scale of 7. The second pass used a tile overlap of 256, a denoising strength of 0.01, 150 sampling steps with Euler a, and a CFG scale of 7. This produced the final 8192x4096 image.
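
For context, the SD upscale script enlarges the image and then refines it tile by tile with img2img at a low denoising strength, blending the overlapping tiles. The sketch below is a simplified illustration of that idea only: a plain Lanczos resize stands in for the SwinIR_4x upscaler, the tiling has no proper overlap blending, and the helper name sd_upscale_pass is an assumption rather than the script's actual implementation.

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

def sd_upscale_pass(image: Image.Image, prompt: str,
                    tile: int = 512, overlap: int = 128) -> Image.Image:
    """One 2x upscale pass: resize, then lightly re-denoise each overlapping tile."""
    big = image.resize((image.width * 2, image.height * 2), Image.LANCZOS)
    result = big.copy()
    step = tile - overlap
    for top in range(0, big.height - overlap, step):
        for left in range(0, big.width - overlap, step):
            box = (left, top, min(left + tile, big.width), min(top + tile, big.height))
            patch = big.crop(box)
            refined = pipe(
                prompt=prompt,
                image=patch,
                strength=0.01,            # very low denoising keeps distant detail
                num_inference_steps=150,  # 150 sampling steps
                guidance_scale=7,         # CFG scale 7
            ).images[0]
            result.paste(refined.resize(patch.size), box[:2])
    return result

Two such 2x passes take the 2048x1024 outpainted image to 4096x2048 and then to the final 8192x4096.
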
Licensing (reuse of this file)
- Output images
As the creator of the output images, I release this image under the licence displayed in the template below.
- Stable Diffusion AI model
The Stable Diffusion AI model is released under the CreativeML OpenRAIL-M License, which "does not impose any restrictions on reuse, distribution, commercialization, adaptation" as long as the model is not intentionally used to cause harm to individuals, for instance to deliberately mislead or deceive. As stipulated by the license, the authors of the AI model claim no rights over any image outputs generated.
- Addendum on datasets used to train AI neural networks
Artworks generated by Stable Diffusion are algorithmically created from the AI diffusion model's neural network, which learned from various datasets; the algorithm does not reuse preexisting images from those datasets to create the new image. Generated artworks therefore cannot be considered derivative works of components of the original dataset, and any coincidental resemblance to a particular artist's drawing style remains de minimis. While an artist can claim copyright over individual works, they cannot claim copyright over mere resemblance to an artistic drawing or painting style. In simpler terms, Vincent van Gogh could claim copyright to The Starry Night, but he could not claim copyright over someone else's picture of a T-34 tank painted with brushstrokes similar to those of The Starry Night.