
Pancakelord

I want to add a preface to this, as there are controversies surrounding these neural-net-generated "AI" artworks: in my opinion, these are not going to directly replace artwork that is the product of intentional, intelligent composition and planning - i.e. art made by a human artist who intends to convey a specific meaning or emotion.

That said, these AI generators are a tool that could give modders like myself, who might find themselves artistically challenged at times, a way to enhance their mods with 2D art that is similar to, or at least "good enough" to match, the style of event art in the vanilla game (depending on how much work you want to put in).

There are three big generators at the time of writing (this might all be woefully out of date in a few months): Midjourney, DALL-E 2 and Stable Diffusion ("SD").

Of the three, SD can be run locally on your own PC. On a weak system it can take up to 5 minutes to render an image; with a fast GPU (I have an RTX 3060 Ti) a 512x512 image can take as little as 10 seconds at times. SD is the one to focus on here, as it's free; all three can produce very good results, though SD certainly takes a little more precision than the others.

There are various ways to run it, but I went with the smooth-brain installer for Windows off GitHub [ https://github.com/cmdr2/stable-diffusion-ui ]. Follow the steps, read carefully, and you'll be up and running in no time. The installer itself is tiny, but don't be fooled: it downloads the 4-4.5 GB (at time of writing) SD 1.4 neural network as part of the install process [edit: I double-checked my install directory; fully uncompressed, it amounts to roughly a 22 GB install].

Once you're up and running, you can use a variety of prompts to create your image. Sites like this [ https://promptomania.com/stable-diffusion-prompt-builder/ ] can help show what certain key terms will do to an image, altering the art in different ways, and explain concepts like {} in the prompt (curly braces add emphasis).

Native Stellaris event image files are 450w x 150h pixels.
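If you'd rather not eyeball that crop in an image editor every time, a small script can handle the downscale and crop. This is just a sketch of my own (the Pillow library, the file names and the 180 px crop offset are all assumptions - adjust per image):

```python
# Minimal sketch: crop a 512x512 Stable Diffusion render down to the
# 450x150 Stellaris event picture size. File names are hypothetical.
from PIL import Image

def to_event_picture(src_path: str, dst_path: str, v_offset: int = 180) -> None:
    """Scale a square render to 450 px wide, then cut a 150 px tall band out of it."""
    img = Image.open(src_path).convert("RGB")
    img = img.resize((450, 450), Image.LANCZOS)           # 512x512 -> 450x450
    band = img.crop((0, v_offset, 450, v_offset + 150))   # pick the most interesting strip
    band.save(dst_path)

to_event_picture("raw_render.png", "my_event_picture.png")
```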

Below are a few raw (i.e. not downscaled/cropped) examples from my fiddling about.

Prompts used (without quotes -- 'burning' is swapped out for things like colony, farm, etc. to alter what is in the scene):
"{{horizon city}} {burning} {Sci-fi} Visual Novel, digital art, digital painting, 5D, 4k, concept art"
"{{horizon city}} {agriculture} {Sci-fi} Visual Novel, digital art, digital painting, 5D, 4k, concept art"
"{{space ship armada}} {Sci-fi} Visual Novel, digital art, digital painting, 5D, 4k, concept art"
"{{crashed colony ship}} {Sci-fi} Visual Novel, digital art, digital painting, 5D, 4k, concept art"
etc
Other info:
  • Seed: random
  • Sampler: PLMS
  • Inference steps: 69 (I input 69, but PLMS is a dynamic sampler, so it can go higher or lower based on how "done" it thinks the image is; fewer steps means faster renders, and most samplers have diminishing returns past 60-70 steps)
  • Guidance scale: 8.0
  • Model: sd-v1-4
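For anyone who prefers scripting over the installer UI, roughly the same settings can be reproduced with Hugging Face's diffusers library. This is an approximation rather than the tool I actually used - PNDMScheduler is diffusers' PLMS-style sampler, and the {curly brace} emphasis is a UI feature, so plain diffusers just treats the braces as literal text:

```python
# Rough diffusers equivalent of the settings above (assumes a CUDA GPU and the
# Hugging Face "CompVis/stable-diffusion-v1-4" weights; not the cmdr2 UI itself).
import torch
from diffusers import StableDiffusionPipeline, PNDMScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = PNDMScheduler.from_config(pipe.scheduler.config)  # PLMS-style sampler

prompt = "{{horizon city}} {burning} {Sci-fi} Visual Novel, digital art, digital painting, 5D, 4k, concept art"
image = pipe(
    prompt,
    num_inference_steps=69,   # inference steps
    guidance_scale=8.0,       # CFG / guidance scale
    height=512,
    width=512,
).images[0]
image.save("horizon_city_burning.png")
```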

Note: for every image shown below, I might discard a dozen or more, or spend dozens of minutes tweaking the prompt to try and get a better rendition of the scene I want, or to alter the colours [to save time doing tints in Photoshop] - beyond the example prompts pasted above, though that is the rough format I have found semi-mimics PDX event art. For me it's still infinitely faster than trying to paint out these scenes in PS or equivalent, though.

[Image gallery: raw 512x512 renders - horizon cities (pristine, burning, under bombardment, alien invasion), space armadas and ships in orbit, crashed colony ships, and two turtle-human hybrid "aliens".]
I have found the model is pretty good at creating landscapes (cities, infrastructure, some orbital shots, satellites) with minimal effort; crowds and spaceships take more effort, and aliens can be quite difficult to make without being hyper-specific with your prompts - the model is trained on places, animals and people, not aliens, after all. For the two alien pictures at the end of the gallery I used statements like "turtle-human hybrid" and "alien", rather than just saying "alien"; you've got to describe it in terms the model will have been trained on, giving its net a hook or two to generate imagery from.

So mods looking to make use of this would probably get the most value out of it if their events focus on the planetary scale and avoid directly focusing on characters in the artwork (e.g. writing events about bombings on a planet after stability dropped too low, some situation about economic/infrastructure shifts, the impacts of espionage events [showing poisoned water, damaged equipment, etc.], or other colony/archaeology stories).

Anyway, this isn't a comprehensive guide, more of a "this thing exists, go try it", as I'll continue to refine my prompts to make more interesting stuff. I just wanted to post this for those out there who, like me, want their mods to feel "vanilla++" and blend into the core aesthetic as much as possible whilst adding additional/new content.
 
Thanks for the info! I was having a tough time trying to figure out which settings work best, and yours work marvellously. This is such a helpful post! Here is what my environment looks like, in case anyone is curious or confused.

[Screenshot of my Stable Diffusion settings]


Some tips for Stable Diffusion
  • Don't change from the 512x512 size. It will just generate junk, since the AI is trained on 512x512 images. Crop or resize afterwards instead, or use an upscaler like Real-ESRGAN to increase resolution.
  • "Batch Size" changes the number of simultaneous images generated and will overload your GPU's memory if you go above 1. Instead, right-click "Generate" and select "Generate Forever" if you want to generate many batches (a scripted version of the same idea is sketched after this list). Right-click and select "Cancel" to interrupt. Changing the settings will make the next batch generate with the new settings.
  • You can find every image you have ever generated in the outputs folder.
  • AI is like every other tool: it takes practice and time to learn. Don't expect to generate masterpieces right away. I'm still a huge newb, too.
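If you script your generations instead of using the UI, the "Generate Forever" behaviour is just a loop with a fresh random seed each time. A hedged sketch (assumes the diffusers library and the SD 1.4 weights, which may not match your setup; the prompt is a placeholder):

```python
# Hypothetical "generate forever" loop: random seed per image, one image at a
# time (batch size 1), everything saved to an outputs/ folder. Ctrl+C to stop.
import pathlib
import random
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

out_dir = pathlib.Path("outputs")
out_dir.mkdir(exist_ok=True)

prompt = "a sci fi futuristic city skyline, digital painting, concept art"
try:
    while True:
        seed = random.randrange(2**32)
        gen = torch.Generator("cuda").manual_seed(seed)
        img = pipe(prompt, generator=gen, num_inference_steps=60, guidance_scale=8.0).images[0]
        img.save(out_dir / f"{seed}.png")   # seed in the filename so a result can be reproduced
except KeyboardInterrupt:
    print("stopped")
```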
 
Some tips for Stable Diffusion
I wanted to add a little experiment I've been trying to your list:

As of writing I'm busy with my ModJam22 project, but I've been looking into making special event images for some other side projects too.

One of these is archaeological dig-site stuff. It's possible to use seed-locking (the seed is what varies the image) to create vistas and then alter their properties: by keeping the seed the same, you can create more subtle variations of the same(ish!) scene.

For example, see below: I've got a cityscape spanning four eras, which could be used to tell the story of a random planet and how it was once a great place but ended up as a barren world, or a relic world, and so on. The visual consistency (some photoshopping may be needed -- the images below are raws out of SD 2.1) helps with conveying the story across dig-site phases.

  • An aquatic "golden age"/ecumenopolis
  • A bombed-out hellscape "tomb world"
  • The deserts reclaim the ruins: a "relic world"
  • Nature reclaims the deserts; with time, the ecosystem recovers
  • A slightly more overgrown variation of the above (I haven't managed to get dinosaurs or techno-primitives to appear, sadly)
  • Edit: I didn't get my techno-primitives, but society looks to have recovered in some post-technology manner - should work, along with event-text prompts
  • Or maybe a return to high technology: a blend of the old world and the lessons learned from the collapse... [I've attached this one as a Real-ESRGAN-upscaled 4k x 2k wallpaper for anyone that wants it]
Could be a cool long-term situation, to work with some primitives (rather than annexing them) and be rewarded with a more-powerful planet as a result.


This is achieved by locking down the seed, then adding statements to the prompt as you go -- in this case I've not been using the same style prompt as in my OP, as Stable Diffusion 2.1 (and 2.0) is a separate, retrained model, so I thought it best to start from scratch and see what works best here.
[Screenshots illustrating the seed-locked prompt setup]

I.e. I first generated the scene without "destroyed buildings, desert, overgrown lush jungle" (I can't seem to get little tribal villages to appear in the green area in front - I was hoping to create some techno-primitives), then for each stage I added an extra phrase to convey the scene changing over time.
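Scripted, the same seed-locking trick is just a fixed generator seed plus a progressively longer prompt. A rough sketch of the idea (the diffusers library, the SD 2.1-base weights and the base prompt below are assumptions, not my exact setup):

```python
# Seed-locked "dig site" variations: same seed every time, progressively longer prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")

base = "a vast futuristic coastal ecumenopolis, matte painting, concept art"  # placeholder prompt
stages = [
    "",                                                       # golden age
    ", destroyed buildings, ash, fire",                       # tomb world
    ", destroyed buildings, desert",                          # relic world
    ", destroyed buildings, desert, overgrown lush jungle",   # nature recovers
]

for i, extra in enumerate(stages):
    gen = torch.Generator("cuda").manual_seed(1234567)        # identical seed for every stage
    img = pipe(base + extra, generator=gen, num_inference_steps=50, guidance_scale=7.0).images[0]
    img.save(f"dig_site_stage_{i}.png")
```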
 

Attachments
  • 00000.jpg (655.1 KB)
  • 00000.png (20.8 MB)
I noticed you increased the horizontal resolution. I tried generating at 512 x 512 and 1536 x 512 comparatively, and it feels like the 1536 produces less varied content. The AI model is trained on 512x512, which would explain why it produces weaker content at other resolutions. I would suggest sticking with 512 x 512.
  • a sci fi futuristic city skyline, clear sky in background, farmland in the foreground, ruined buildings
  • 59 sampling steps
  • PLMS
  • CFG Scale 5.5
512 x 512
[comparison grid of renders]


1536 x 512
[comparison grid of renders]
 
Yeah, the horizontal was my test to see if I could get a 3:1 aspect ratio to make fitting in Photoshop easier, as it's similar to the Stellaris event image ratio. My findings were similar to your results, too.
I have found that a 1.5:1 aspect ratio (i.e. 768w x 512h) works OK -- if you also use a fairly low CFG and quite specific prompts (usually I can get my images to have most of their content in the bottom half to two-thirds, but it's hit and miss).
[Two example 768x512 renders]


(P)LMS also seems to produce more repetitive-looking images (on 2.1) than euler a [or any ancestral sampler] for non-square images. I think this is because it generally adds more image detail than euler a (which is pretty much what I'm sticking to for things on 2.1 now).

e.g.:
[Example asteroid space station renders comparing samplers]
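For reference, the sampler swap and the non-square canvas look roughly like this in diffusers terms (an approximation of what the UI does, with an assumed prompt and step count; not my exact settings):

```python
# Sketch: "euler a" sampler with a 768x512 (1.5:1) canvas and a low-ish CFG,
# assuming the SD 2.1 base weights rather than my exact UI setup.
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)  # "euler a"

img = pipe(
    "a scifi futuristic asteroid space station, industrial, mining, concept art",
    width=768, height=512,        # 1.5:1, closer to the event picture ratio
    guidance_scale=5.5,           # fairly low CFG
    num_inference_steps=40,
).images[0]
img.save("asteroid_station_768x512.png")
```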


I have also tried forcing it to add dead space (i.e. telling it to keep the picture in the top or bottom of the frame, or adding terms that leave dead space in the image, like a "desert foreground") with... mixed success.

The other option is to try out-painting on a 512x512 image with one of the derivative models, but I've not tried that yet.
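For what it's worth, the out-painting idea boils down to pasting the square render onto a wider canvas and letting an inpainting model fill the blank strips. An untested sketch (assumes the diffusers inpainting pipeline, the runwayml inpainting weights and placeholder file names):

```python
# Untested sketch of the out-painting idea: centre a 512x512 render on a 768x512
# canvas, mask the empty side strips, and let an inpainting model fill them in.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

square = Image.open("raw_render.png").convert("RGB")          # 512x512 source render
canvas = Image.new("RGB", (768, 512), "black")
canvas.paste(square, (128, 0))                                # centre the original

mask = Image.new("L", (768, 512), 255)                        # white = area to repaint
mask.paste(Image.new("L", (512, 512), 0), (128, 0))           # black = keep the original

wide = pipe(
    prompt="a sci fi futuristic city skyline, concept art",
    image=canvas, mask_image=mask, width=768, height=512,
).images[0]
wide.save("outpainted_768x512.png")
```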
 
Due to it not being fun in my own playtesting - and real life cutting my modding time too short of late to finish the project - I didn't end up submitting my collectivism MJ22 entry. But the art from that project can live on (it plays on a few different themes - riots, protests, military parades, etc.).

All of it is done in a Stellaris-ish style, and most of it should be reusable by other mods. It was all generated in Stable Diffusion 2.x (see earlier posts on how it was done). I link the zip below with permission for anyone to use it in their mods - provided it's not for profit (not for sale, Patreon, etc.), and I don't need to be personally credited if you don't want to.
Sample:
[Sample board of the generated event pictures]
 

Attachments
  • MJ2022_collectivism_image_assets_.rar (11.2 MB)
"Hey that Blorg looks uncanny!"

"That's how you know it's real!"


... but seriously this is some amazing stuff.
 
SDXL + Paragons test
Small update. I've been testing the latest version of Stable Diffusion, SDXL 1.0 (without the add-on refiner, as it's a dirty RAM hog - and because cartoonish/painted graphics work better without it, IMO).

My test? Reskin a paragon using img2img. (Note: as of this post's date, ControlNet is not available for SDXL, so I can't as easily sketch out my own paragons and use that as the basis - that will come in a few weeks, I guess.)
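For those who don't want to use ComfyUI, the core img2img step translates roughly to the following in diffusers (an approximation of my Comfy workflow rather than a copy of it; the source image name and prompt are placeholders, and strength controls how far the result drifts from the vanilla portrait):

```python
# Rough diffusers equivalent of the Comfy img2img step: feed the vanilla paragon
# portrait in as the init image and let SDXL repaint it (no refiner).
import torch
from PIL import Image
from diffusers import StableDiffusionXLImg2ImgPipeline

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Hypothetical PNG export of the vanilla DDS texture.
init = Image.open("paragon_portrait_source.png").convert("RGB")

out = pipe(
    prompt="female sci-fi admiral, ornate uniform, painted portrait, concept art",
    image=init,
    strength=0.55,          # lower = closer to the original silhouette and pose
    guidance_scale=7.0,
).images[0]
out.save("paragon_reskin.png")
```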

[Example SDXL img2img output]
Very handy reddit thread with the model URLs and the ComfyUI link: https://old.reddit.com/r/StableDiffusion/comments/15af9ct/sdxl_10_model_weights_released_now/

I used ComfyUI; an example of the workflow is below. (If you use ComfyUI, you can actually just drag one of my output images onto the window to clone the workspace - this screenshot is just illustrative for those who aren't using it.)
[Screenshot of the ComfyUI workflow]



[Further img2img outputs]

Some masking and touching up was needed (I probably could have removed her cape too, which hides the gap below her foreground arm and torso on the OG texture).
Kai-sha 2.0 in-game:
[In-game screenshot of the reskinned Kai-sha]

And yes, she is animated: her eyes blink, her face morphs. For some reason I can't get the Microsoft game recording thing to work, but I have attached the DDS file [as a zip] if anyone wants to drop it into their game to test:
  • Stellaris\gfx\models\portraits\paragon\paragon_01_renown_portrait_14.dds
  • Console command: event paragon.41300 to force her recruit screen
  • (this will require the Paragons DLC)
It would be pretty easy to take this model (tidy it up a bit more) and re-insert it into the game as a new/standalone paragon (though the mod will likely need the Paragons DLC as a requirement, because it's using Kai-sha's 3D mesh).

Bonus: raw armoured alien/robot images. (Not tested, but likely to be glitchy if inheriting Kai-sha's face morph/mouth animations; there might be a way to keep just the head bob whilst skipping other anims, but I've never really modded leaders before, so this is new to me...)
[Raw armoured alien/robot renders]
It would not be inconceivable to create a renowned-paragon pack for Gestalts using this approach - right now I've only experimented with robot forms, so Machine Intelligences are definitely doable, but maybe some fleshy monstrosity for Hives (using the fat fish governor's mesh from Paragons?) is possible too.

[More robot/alien renders]
 

Attachments
  • paragon_01_renown_portrait_14.zip (287.9 KB)
Testing robots out, this time using Jynn's [paragon.40700] template (as it's also a super simple mesh - easy to get the AI to work with it, then just rip the head off and align/warp the eyes in PS; eye animations still work fine in this case). Trying to make a "friendly neighbourhood rogue servitor"-themed leader.

[Rogue-servitor-themed leader renders and in-game screenshots]



And a DA/cyborg-themed one, using the same anim/rig:
[Cyborg-themed leader render]
 
OK wait, are you telling me you've actually found a workflow for effectively making AI-generated animated leaders here???
Yes, more or less. They only do the simple head bob and blink animation that comes with Jynn, though.

It's basically a small amount of mesh modding to fix glitches, then taking the silhouette of the vanilla character as an img2img base, prompting until happy, ripping the head off and tidying it up in Photoshop, and you're good to go. It's still quite time-consuming getting just the right look, though. You can comment out the eye animations for characters that don't have them, like robots etc.