I want to add a preface to this, as there are controversies surrounding these neural-net-generated "AI" artworks: these are not (IMO) going to directly replace artwork that is the product of intentional/intelligent composition and planning - i.e. art made by a human artist with the intention of conveying a specific meaning or emotion.
That said, these AI generators are a tool that could give modders like myself, who might find themselves artistically challenged at times, a way to enhance their mods with 2D art that is similar to (depending on how much work you want to put in), or at least "good enough" to match, the style of event art in the vanilla game.
There are 3 big generators at the time of writing (this might all be woefully out of date in a few months): Midjourney, DALL-E 2 and Stable Diffusion ("SD").
Of the 3, SD can be run locally on your own PC (on a weak system it can take up to 5 minutes to render an image; with a fast GPU - I have an RTX 3060 Ti - it can take as little as 10 seconds at times for a 512x512 image). SD is the one to focus on here: it's free, and while all 3 can produce very good results, SD certainly takes a little more precision than the others.
There are various ways to run it, but I went with the smooth-brain installer for Windows off GitHub [ https://github.com/cmdr2/stable-diffusion-ui ]. Follow the steps, read carefully, and you'll be up and running in no time. The installer itself is tiny, but don't be fooled: it downloads the 4-4.5 GB (at time of writing) v1.4 SD model as part of the install process [edit: double-checked my install directory - all uncompressed, it amounts to a ~22 GB install size].
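If you'd rather skip the GUI entirely and are comfortable with Python, the same v1.4 weights can also be run through Hugging Face's diffusers library. A minimal sketch, separate from the installer above (assumes an NVIDIA GPU, and you may need to accept the model licence on huggingface.co and log in via huggingface-cli first):

```python
import torch
from diffusers import StableDiffusionPipeline

# Downloads the same ~4 GB v1.4 weights on first run, then caches them locally.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,  # halves VRAM use; drop this on CPU-only systems
)
pipe = pipe.to("cuda")

image = pipe("{{horizon city}} {Sci-fi} concept art").images[0]
image.save("test.png")
```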
Once you're up and running, you can use a variety of prompts to create your image. Sites like this [ https://promptomania.com/stable-diffusion-prompt-builder/ ] can give you examples of what certain key terms will do to an image, altering the art in different ways, and explain concepts like {} in the prompt (curly braces add emphasis).
Native Stellaris Evt image files are 450w x 150h pixels.
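Getting a raw 512x512 render down to that size means scaling and then cropping. A quick sketch with Pillow (file names are placeholders; you'd still convert the result to DDS for the mod itself):

```python
from PIL import Image

TARGET_W, TARGET_H = 450, 150  # native Stellaris evt art dimensions

img = Image.open("sd_render.png")  # a raw 512x512 render

# Scale to the target width first, preserving aspect ratio (512x512 -> 450x450).
scale = TARGET_W / img.width
img = img.resize((TARGET_W, round(img.height * scale)), Image.LANCZOS)

# Then crop out a 150px-tall band; adjust `top` to frame the part of the scene you want.
top = (img.height - TARGET_H) // 2
img.crop((0, top, TARGET_W, top + TARGET_H)).save("evt_art.png")
```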
Below are a few raw (i.e. not downscaled/cropped) examples from my fiddling about.
Prompts used (without the quotes - 'burning' is swapped out for things like colony, farm, etc. to alter what is in the scene):
"{{horizon city}} {burning} {Sci-fi} Visual Novel, digital art, digital painting, 5D, 4k, concept art"
"{{horizon city}} {agriculture} {Sci-fi} Visual Novel, digital art, digital painting, 5D, 4k, concept art"
"{{space ship armada}} {Sci-fi} Visual Novel, digital art, digital painting, 5D, 4k, concept art"
"{{crashed colony ship}} {Sci-fi} Visual Novel, digital art, digital painting, 5D, 4k, concept art"
etc
Other info:
Seed: random,
Sampler: plms,
Inference Steps: 69 (I input 69, but PLMS is a dynamic sampler, so it can go higher or lower based on how "done" it thinks the image is; fewer steps means faster renders, and most samplers hit diminishing returns past 60-70 steps),
Guidance Scale: 8.0,
Model: sd-v1-4 (see the sketch below for how these settings map onto a Python/diffusers run)
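For anyone going the Python route instead of the UI, those settings map onto diffusers arguments roughly like this (a sketch reusing the `pipe` from the earlier snippet; the PNDM scheduler that ships with the v1-4 weights is, as far as I know, diffusers' counterpart to the PLMS sampler):

```python
import torch

# A fixed seed reproduces an image exactly; omit `generator` for the
# "Seed: random" behaviour described above.
generator = torch.Generator("cuda").manual_seed(1234)  # 1234 is an arbitrary example

image = pipe(
    "{{horizon city}} {burning} {Sci-fi} Visual Novel, digital art, "
    "digital painting, 5D, 4k, concept art",
    num_inference_steps=69,  # "Inference Steps" above
    guidance_scale=8.0,      # "Guidance Scale" above
    generator=generator,
).images[0]
image.save("horizon_city_burning.png")

# Note: the {}/{{}} emphasis syntax is a feature of some UIs; plain diffusers
# treats the braces as literal prompt text.
```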
Note: for every one image shown below, I might discard a dozen or more, or spend dozens of minutes tweaking the prompt to get a better rendition of the scene I want or to alter the colours [saving time on tints in Photoshop] - that goes beyond the example prompts pasted above, though they are the rough format I have found to semi-mimic PDX event art. For me, it's still far faster than trying to paint these scenes out in PS or equivalent.
I have found the model is pretty good at creating landscapes (cities, infrastructure, some orbital shots, satellites) with minimal effort; crowds and spaceships take more effort, and aliens can be quite difficult to make without being hyper-specific in your prompts - the model is trained on places, animals and people, not aliens, after all. For the two alien pictures at the end of the spoiler gallery I used phrases like "turtle-human hybrid" alongside "alien", rather than just saying "alien" - you've got to describe it in terms the model will have been trained on, giving its net a hook or two to generate the imagery.
So mods looking to make use of this would probably get the most value out of it if their events focus on the planetary scale and avoid directly focussing on characters in the artwork (e.g. writing events about bombings on a planet after stability drops too low, about economic/infrastructure shifts, about the impacts of espionage events [like showing poisoned water, damaged equipment, etc.], or other colony/archaeology stories).
Anyway, this isn't a comprehensive guide, more of a "this thing exists, go try it", as I'll continue to refine my prompts to make more interesting stuff. I just wanted to post this for those out there who, like me, want their mods to feel "vanilla++" and blend into the core aesthetic as much as possible while adding additional/new content.