It lags for me on a medium galaxy, though that will differ for each player; I consider 12 seconds per month slow.

My rig has 8-year-old tech in it [owned about 7 years]. With 1000 stars, 1x habitability, 2 guaranteed planets, max AIs, etc., it runs as well as can be expected in year 2400 with 32K galactic pops: about 20 seconds per month.

I'm still playing on 400 stars, 0.25x habitability, 0 guaranteed planets because of perceived "micro" issues as opposed to performance issues. I simply don't want to play "SIM Planet" when the number of planets to manage grows to more than ~10. I'd prefer a game closer to "SIM Empire" if you see my meaning.

Devil's Advocate: For an 8-year-old rig to be "kinda-playable-depending-on-taste" at relatively high game settings isn't bad.
 
9900K @ 5.0 GHz, 32 GB CL14 3200 MHz RAM, 2080 Ti here. Around 2450 I'm getting some serious stuttering. By the end of each year I need to wait about 8 seconds. I play on normal speed because the faster settings are unplayable. 800 stars in the galaxy, max fallen empires.
 
Some of you might remember I'm of the opinion that the game's main thread manages both the game logic and the draw calls, and that this is a cause of the performance hit. I just ran a test with my graphically modded Stellaris install, and the results are quite interesting: my GPU usage halves and my CPU usage doubles when unpausing the game. I can spin up my GPU fans by pausing the game and spin them down by unpausing it; it's audible when the galaxy is zoomed in. It isn't audible unmodded, but the usage relationship stays the same. If my CPU were less powerful or less well cooled, I could switch which fans I hear just by pausing/unpausing.

It matches previous tests we ran with the ticks_per_turn command, which can double the game tick speed by sacrificing nearly all of the display. I take it as evidence that drawing and game logic are competing for main-thread time: executing game logic forces the main thread to issue less work to the GPU, and drawing forces the main thread to slow the game tick as the workload increases. Roughly half of the one maxed-out core is wasted by not issuing the drawing work to another thread.
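A toy model of that competition (my own illustrative sketch, not Paradox's code): if tick and draw work run on one thread, each iteration pays for both, so skipping the draw work, roughly what ticks_per_turn does, doubles the tick rate when the two costs are equal.

```python
def ticks_per_second(logic_cost, draw_cost):
    """Toy single-threaded game loop model: every iteration must pay
    for one game tick AND that frame's draw calls on the same core,
    so the two workloads compete for main-thread time.
    Costs are in seconds of main-thread work per iteration."""
    return 1.0 / (logic_cost + draw_cost)

# With tick and draw equally expensive (10 ms each), dropping the
# draw work doubles the achievable tick rate:
with_drawing = ticks_per_second(0.010, 0.010)   # ~50 ticks/s
logic_only   = ticks_per_second(0.010, 0.000)   # ~100 ticks/s
```

The numbers are made up; the point is only the shape of the trade-off, which is what the ticks_per_turn observation suggests.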

No stuttering so far, test done in ~2325.
Paused : ~15% CPU / ~40% GPU.
Unpaused : ~30% CPU / ~20% GPU.
Most used core : ~60 - 80%.

Theoretically it should start to stutter when the main thread reaches 100% of the capacity of the core executing it, which is strongly suggested by the warbeast of a PC in the post above. Testing the 2500+ save @exi123 provided earlier:

Clear stuttering observed.
Paused : ~15% CPU / ~40% GPU
Unpaused : ~30% CPU / ~20% GPU
Most used core : ~70 - 90%.

The only real difference is that in the second case there are activity spikes immediately followed by huge drops. All CPU cores spike regularly before dropping to 0-10%, the main core dropping to ~60% in the process, before the numbers return to the above. The GPU follows in short order, spiking to 80% and immediately dropping to 0%, clean and clear. One, then the other.

This is a bit akin to thermal-throttling behavior, though my thermals are 40°C for the CPU and 45°C for the GPU. The main thread controlling who does what is bloated and throttles everything. I now think no amount of code refactoring, optimization, or caching at the game level can solve that, short of reworking the main thread or drastically axing features. If anything, their work has made the behavior more visible; I never saw those spike/drop chains so orderly before. One thing is certain though: if the game logic were bottlenecking the CPU at the core usage observed, and drawing tasks were managed by their own thread, the game would freeze but camera movement would stay smooth. Right now the camera micro-stutters while unpaused, and the big spikes fully freeze the camera.
 
I dearly hope, both for performance and for a better game, that we'll be rid of this ridiculous pop system left over from when 1 planet tile = 1 pop. Even though one of the main "features" of this expansion, at least for me, was performance... it's still not even close to good, and I doubt it ever will be with the current systems the way they are.
 
Some of you might remember I'm of the opinion that the main thread of the game is managing the game logic as well as drawing calls and that it is a cause of performance hit. [...] I take it as proof drawing and game logic are competing for main thread time.
It's largely consistent with what was happening in 2.5. One thing to keep in mind is that there can be significant differences between computers here. The drawing involves not just Paradox code, but also other libraries (DirectX/OpenGL) and video drivers, which can all have their own bottlenecks in various operations. So by the time drawing instructions reach the GPU, significant time can have been spent on the CPU (and in Paradox games this tends to be noticeable). Windows users could also try switching between the DirectX and OpenGL renderers to see if one performs better than the other.
 
@Dëzaël : I think those are similar to the findings from our tests on the 2.2.x and 2.3.x version(s) of the game. I would like to see them decouple as much of the graphics & logic as possible. Not having seen their engine, though, I'm not sure how many "shared variables" would need to be synchronized between separate graphics & logic threads. If the code is spaghettified it could make it pretty darned hard to extract the graphics piece properly.

EDIT: The graphics may have its own thread, BUT if the logic thread controls when the graphics thread can and can't paint, then that winds up being largely [almost] the same thing.
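That point can be sketched with a hypothetical two-thread setup (my own illustration, not the actual engine): even with a dedicated render thread, if each paint must wait for the logic thread's go-ahead, rendering throughput is still dictated by logic.

```python
import threading

class GatedRenderer:
    """Sketch of the pattern described above: a render thread exists,
    but it may only paint once the logic thread signals a frame is
    ready. If logic hogs its core, rendering still stalls --
    effectively single-threaded behavior despite the two threads."""

    def __init__(self):
        self.frame_ready = threading.Event()
        self.frames_drawn = 0

    def logic_tick(self):
        # ... run the simulation step ...
        self.frame_ready.set()        # only now may the renderer paint

    def render_loop(self, n_frames):
        for _ in range(n_frames):
            self.frame_ready.wait()   # blocked until logic permits it
            self.frame_ready.clear()
            self.frames_drawn += 1    # ... issue draw calls ...
```

If logic_tick is never called, render_loop blocks forever; that coupling is exactly why a nominally separate graphics thread can behave like no separation at all.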
 
Yeah, I actually wasn't pointing at a specific version. It just seems more visible to me now, but maybe I simply wasn't paying attention before. Thing is, this specific behavior hits my modding experience hard. I've been modding my games for 15 years now, mostly big compilations of other people's work, and more often than not I push games into a stuttering state, then scale down a little by removing a thing or two so the game is tailored to my PC. Depending on the game, I hit this state via a CPU, GPU, or RAM bottleneck, and I may well end up hitting a CPU bottleneck in GPU-heavy games or a GPU bottleneck in CPU-heavy games; RAM is rare.

In my experience, GPU usage is not supposed to drop when the workload increases unless you're already in a stuttering state, and Stellaris has done that since day 1 with no stuttering. The thing is, I don't much care exactly what it is: the variety of hardware among the people complaining, and what other companies achieve with the same drivers and such, make it unlikely the problem is not on Paradox's side. If it's the drivers, I don't see how it would not be their handling of the drivers.

My point being, it's a simple case of gluttony. Modding a game and developing new features are much the same thing; the only difference when you sell is that you develop for a range of hardware. When you hit a stuttering state, there are only two ways out: either you scale your means up to your ambitions, or you scale your ambitions down to your means. The game ran quite fine through 1.9. I'm OK with their engine being old, or anything really, but they seem to need to get back to reality. I don't believe in the driver-side issue; NVidia's code is used by the entire world. Stellaris triggering driver issues to this point while having toaster-level graphics would be a singularity, a statistical artifact.

The most probable thing, and by a fair margin, is that they bloated the engine and don't want to backpedal. Moah once stated that proposing to remove features was not taken very well at work, but that is the sane approach; I do it all the time when modding. Sometimes there is no choice.


If the code is spaghettified it could make it pretty darned hard to extract the graphics piece properly.

I fear that might very well be the case.

The graphics may have its own thread, BUT if the logic thread controls when the graphics thread can and can't paint, then that winds up being largely [almost] the same thing.

In fact, taking a closer look, there is a core alongside the most used one that shows noticeably higher activity than the others. Could be anything; could be they managed to offload more drawing work there and disconnect it from the main thread. Out of curiosity, I monitored Civ6 when hitting next turn in the late game, so simulation plus camera panning.

It shows the same signature: one core maxed out, another with higher activity, slight stuttering. However, the other cores climb much higher and the GPU is maxed out. There are activity drops when stutters occur, but the GPU stands still and the two most-used cores don't blink. In Stellaris, of the two cores, the less-used one falls with the others after a delay, right before the GPU spikes and falls as well.
 
Surprise! The performance is still bad.

At this point I think that if it were possible to fix the performance, it would have been done by now. Are we supposed to just wait another year? Two years? At what point do you cut your losses and accept it's not going to be made any better?
 
The game lags so much in the endgame that every day takes seconds on the fastest time setting, and pausing takes several tries. My PC meets the recommended requirements, so what is going on?

What game settings have you chosen [1000 stars, 5x habitability, ...]? In what year are you playing? What is the galactic population? What's your exact game version/build?

What are your PC stats [CPU, RAM, HDD type, etc.]?


The reason I'm asking is more about understanding your expectations. For example, my gaming PC, aside from the SSD, barely meets the recommended requirements for the game and runs on 8-year-old technology for CPU, graphics, and RAM. As such, I would expect the recommended [DEFAULT] game settings to be fairly playable until the default end-game date.

For a system as old as mine I wouldn't expect ANY GAME [let alone Stellaris] to run silky-smooth on MAX settings when it barely meets the recommended specs.

Devil's Advocate: If 1K stars, 5x habitability, etc. is an allowed setting, I WOULD want extreme gaming systems [$3000+] to actually run it fairly well, OR at worst only have to drop back a notch or two to run fairly well. That is where I have a bone to pick with performance: the current systems in place do not scale well with high-end gaming hardware.
 
What game settings have you chosen [1000 stars, 5x habitability, ...]? In what year are you playing? What is the galactic population? [...] What are your PC stats [CPU, RAM, HDD type, etc.]?


I run on an i5 laptop with 8 GB RAM and Intel 620 graphics. I use the lowest graphics settings I can and the default galaxy config. I run the latest version of Stellaris with no mods.
 
I run on an i5 laptop with 8 GB RAM and Intel 620 graphics. I use the lowest graphics settings I can and the default galaxy config. I run the latest version of Stellaris with no mods.

I think I see the gotcha. I believe the listed PC requirements assume a desktop system rather than a laptop equivalent. So the odds are that you have a "mobile" CPU that performs worse than a comparable desktop CPU in single-threaded workloads, graphics at less than half the speed of the minimum-requirement [desktop] part, graphics that may be stealing system RAM, a system that may be throttling [slowing down] to manage power consumption and/or heat, and so on.

If you like Stellaris and want to keep playing, I'd either fall back to 2.1 [1.9?], OR you could try a TINY galaxy [200 stars] with 0.25x habitability and no guaranteed planets on 2.6.2. Theoretically this might cut the CPU load on your system by 2/3 compared to the default. If you're running at 1080p, you could drop to 720p, which may cut your GPU load roughly in half. Manually lowering the in-game quality settings may help as well.
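The back-of-envelope behind those numbers (my own rough assumptions: simulation load roughly linear in star count against a 600-star medium default, GPU load roughly linear in rendered pixels — not measured Stellaris figures):

```python
def star_ratio(stars, base_stars=600):
    """Crude assumption: CPU simulation load scales roughly linearly
    with star count (more systems -> more pops, fleets, jobs),
    relative to a 600-star medium galaxy. Illustrative only."""
    return stars / base_stars

def pixel_ratio(w_new, h_new, w_old=1920, h_old=1080):
    """GPU fill work scales with rendered pixel count,
    relative to a 1080p baseline."""
    return (w_new * h_new) / (w_old * h_old)

cpu_left = star_ratio(200)         # 1/3 of default -> "cut by 2/3"
gpu_left = pixel_ratio(1280, 720)  # 4/9 of 1080p -> roughly "cut by 1/2"
```

These are first-order approximations at best; habitability and guaranteed planets would shrink the CPU figure further by reducing pop counts.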

If the above works you could try bumping up the star and/or habitability count and see if that works. Eventually you'll hopefully find a setting that is your personal best compromise between galaxy size [planets] and performance.

FYI: I play on 400 stars ; 0.25x habitability ; no guaranteed planets and performance with this patch isn't a problem for me -- YMMV.
 
I've just fired up Stellaris to check out Federations, and it runs far, far slower than it did pre-Federations. Has it slowed down with the new expansion, or is it more likely something else? I mean super-slowed down: the current game-start speed feels worse than 2400+ speed used to. It's literally at the "get out a book while you play, look up every half-page or so, and take an action" stage, and this is with a small galaxy, all visual settings turned down to minimum, all that. There are no mods active either.

The system isn't cutting-edge or anything like that (an AMD Ryzen 5 1600, 8 GB of RAM, and a 3 GB Nvidia 1060 for graphics), but it can play every other PDS game at enjoyable speeds (with high-ish graphics settings), usually all the way through, so it feels strange that Stellaris is so slow as to be a hard sell vs. other games, where there's a lot more playing and a lot less sitting around waiting. I've enjoyed Stellaris a lot in the past, but if this is the way it is now, it feels like I'd need a new (and fairly expensive!) PC just to play small galaxies on minimal settings. Of course, I don't think the devs would release a game that required that much heft just to be playable, so I suspect some kind of technical gremlin is causing trouble.