And presumably OpenGL for CKII and on, given I can play them natively on Linux. ;)

Though that makes me wonder, have you guys thought about using Vulkan for future versions of Clausewitz?

Vulkan? I think I heard about that. Isn't that like the non-proprietary version of DX12?
 
I assume that PI does use those idle loops given how you get 150+ fps when the game's paused.
Not really; it's not some maniacal scheme by Paradox to trick people into thinking there's better performance, it's just handled by your GPU. By default every GPU will just render as fast as it can. You can stop this behavior by enabling vsync, which syncs the rendering speed of your GPU with the refresh rate of your monitor. This is designed to eliminate screen tearing, but has the secondary benefit of limiting your framerate.
Some games set vsync to on by default, which is evidently what gluck3d considers to be the defining characteristic of a "professional game studio."
 
Thank you for answering me. I'm not completely clueless: I do monitor my cores, and one seems to max out while the others sit at nearly zero percent usage. Is there a reason the engine does this? Because 13-17 fps is miserable.
 
Vulkan? I think I heard about that. Isn't that like the non-proprietary version of DX12?
Kind of; it's an open, royalty-free low-level graphics API intended to serve as an industry-wide, closer-to-the-metal standard than OpenGL. Ideally, with good enough driver support, it should give performance similar to DirectX 12 without being tied to a single OS (so in theory, a game could be made to run with Vulkan and work on Windows, Mac, and Linux, rather than having a DirectX version for Windows and an OpenGL version for Mac and Linux). It's still new enough that there aren't many games to test it on just yet, though.

For Mac and Linux, yes, obviously.
Sorry if that came out more snarky than it sounded in my head, it's been a long week and my sense of humor is doubtlessly a bit wonky by now. :oops:
 
Some games set vsync to on by default, which is evidently what gluck3d considers to be the defining characteristic of a "professional game studio."
Not exactly.
I was saying that no credible studio would create a CPU idle loop just to pump up FPS numbers.
That's not to say the GPU won't redraw the screen as fast as it can.

For example, if you run an old game like Quake 3, you may easily get thousands of fps. That doesn't mean even one CPU core would be at 100% usage just to "fill up the idle time".

All of which means that when PDX games lag while maxing out one or two logical cores out of the modern standard of 4-8, it has nothing to do with the screen redraw cycle.
 
There is nothing there that should be unknown to either the MEIOU team or to us :) The quoted part was common knowledge even back when I modded EU3.
The effect of optimizing script will be much larger in general on MEIOU than in vanilla though as the entire performance difference between the two can be attributed to script more or less. Triggered modifiers are another major resource hog script side btw (but I believe the MEIOU team is aware of this as well).
We have noticed that disabling draw water seems to have a gigantic effect on performance, some of our beta testers have noticed a 25% speed up when doing that. Testing is required to see if that is also the case for vanilla.
An idle loop that redraws the screen when nothing has changed, just to inflate FPS numbers, is bad coding that radiates extra power into the atmosphere. I am pretty sure this is not the case; it is beneath the level of a professional game studio.

I was also describing the constant lag in late-game Stellaris. That has nothing to do with UI redraw and is not an indication of the graphics engine approach you described before (it just makes no sense to trigger additional redraws if you can't finish drawing the previous one in time).

Also, my GPU subsystem is a pair of GTX 1080s in SLI, so I doubt that's the choke point, ROFL.

Not sure why you are rejecting a simple idea: if the only thing OS monitoring shows near its limit is a single CPU core out of 4 physical ones, the cause is nothing other than a lack of single-core calculation speed in a mostly single-threaded game design, which could be effectively improved by utilizing parallelism.

Using Occam's razor, that is the simplest explanation for the lag in Stellaris' case, while you can of course push other possible explanations, up to and including a Putin-hired hacker playing on my computer while I play the game.
If you think that's bad you should look at MS Visual Studio; that is one horridly CPU-intensive program, due to refreshing the GUI way too much.

Amdahl's law is extremely relevant here: there are certain things that can be sped up, and certain ones that can't be sped up by throwing more cores at them. In EU4, for example, where I have the most experience, a major bottleneck is the event handler, and I don't anticipate that being multithreaded any time in the near future due to concurrency issues.

Will the engine be moved over from DX11 to DX12? That API is specifically designed to address poor load balancing across cores, and not just CPU cores but GPU cores as well.

I would love to know why Paradox hasn't jumped at the chance to add this, as it's a big boost for games with complex calculations; Ashes of the Singularity built their game around DX12 to allow more units on screen.

This could address the big slowdown that happens late game in Stellaris.
It's funny that people think the games even use DX11.
Not really; it's not some maniacal scheme by Paradox to trick people into thinking there's better performance, it's just handled by your GPU. By default every GPU will just render as fast as it can. You can stop this behavior by enabling vsync, which syncs the rendering speed of your GPU with the refresh rate of your monitor. This is designed to eliminate screen tearing, but has the secondary benefit of limiting your framerate.
Some games set vsync to on by default, which is evidently what gluck3d considers to be the defining characteristic of a "professional game studio."
Note: the GPU relies on the CPU to feed it data to process, so if the program chokes then, depending on how it is coded, it very much can reduce the frame rate.
 
If you think that's bad you should look at MS Visual Studio; that is one horridly CPU-intensive program, due to refreshing the GUI way too much.

Amdahl's law is extremely relevant here: there are certain things that can be sped up, and certain ones that can't be sped up by throwing more cores at them. In EU4, for example, where I have the most experience, a major bottleneck is the event handler, and I don't anticipate that being multithreaded any time in the near future due to concurrency issues.
Nobody has ever said that MS is a good software company :) While I've never tried to analyze MS VS resource usage, it works ok for me.

Regarding the event handler and concurrency: yes, life is hard. To effectively utilize multithreading, you have to develop code that handles the concurrency issues.
I don't think processing events simultaneously is impossible; why can't different events that affect different entities be run in parallel?
Take locks on characters, countries, etc. and go ahead.
Of course there may be race conditions here, but since events are all random-based anyway, we do not need determinism here.

Also, introducing a massive array of locks would increase memory usage and could slow the system down more than the benefit gained, but I don't believe there is nothing left to improve there.
And yep, rewriting such a significant part of an engine from a single-threaded to a multithreaded design may be as expensive as developing the whole game...

P.S. And developers capable of doing this are also very expensive :) I know this, because we have to hire them...
 
Nobody has ever said that MS is a good software company :) While I've never tried to analyze MS VS resource usage, it works ok for me.

Regarding the event handler and concurrency: yes, life is hard. To effectively utilize multithreading, you have to develop code that handles the concurrency issues.
I don't think processing events simultaneously is impossible; why can't different events that affect different entities be run in parallel?
Take locks on characters, countries, etc. and go ahead.
Of course there may be race conditions here, but since events are all random-based anyway, we do not need determinism here.

Also, introducing a massive array of locks would increase memory usage and could slow the system down more than the benefit gained, but I don't believe there is nothing left to improve there.
And yep, rewriting such a significant part of an engine from a single-threaded to a multithreaded design may be as expensive as developing the whole game...

P.S. And developers capable of doing this are also very expensive :) I know this, because we have to hire them...
While you are doing nothing it will still manage to suck up 12% CPU load for some people solely from drawing the GUI.

Games like EU4 already run so fast on modern systems that it really isn't worth rewriting the engine like that. Doing that to the event system drastically increases the odds of game freezes, and players aren't going to like those teething pains in a new game; sure, they can handle slowness, but system hangs? That's much more unpleasant.

It isn't that it's impossible; it's that it would be quite expensive for the returns, and there is a good chance they'd look for easier optimizations before massively rewriting the engine.
 
Games like EU4 already run so fast on modern systems that it really isn't worth rewriting the engine like that. Doing that to the event system drastically increases the odds of game freezes, and players aren't going to like those teething pains in a new game; sure, they can handle slowness, but system hangs? That's much more unpleasant.

It isn't that it's impossible; it's that it would be quite expensive for the returns, and there is a good chance they'd look for easier optimizations before massively rewriting the engine.
I don't have any issues with CK2, HOI4 or EU4, but I have to abandon late-game Stellaris playthroughs due to lag. And I have a top-tier PC.
So PDX has at least one game that requires massive performance improvements which you can't achieve by minor fixes (unless this is all due to some bug that just hasn't been found yet).
But I agree with your last sentence: it is very expensive, and they will probably do this as late as possible :)
 
I don't have any issues with CK2, HOI4 or EU4, but I have to abandon late-game Stellaris playthroughs due to lag. And I have a top-tier PC.
So PDX has at least one game that requires massive performance improvements which you can't achieve by minor fixes (unless this is all due to some bug that just hasn't been found yet).
But I agree with your last sentence: it is very expensive, and they will probably do this as late as possible :)
Stellaris is a new game, not an iteration of a series with an improved engine. It does things the engine hadn't been made to do before so there are a lot of things that aren't coded that well (done is better than perfect when you have time and budget constraints). Expect Stellaris to become faster over time.
 
Though that makes me wonder, have you guys thought about using Vulkan for future versions of Clausewitz?
@Guraan had this to say about it in an interview with "GamingOnLinux" in June 2016:
GOL interview with Gustav aka Guraan said:
GOL: What are your thoughts on Vulkan? Have you considered updating Clausewitz in the future to support this new API?

Gustav: Vulkan is what the major gfx API should have been 10 years ago… We love it, but we do not need it as it is right now.

Clausewitz, as it stands right now, has an unlimited opportunity for graphical backends. Right now as primary we require DX9 or OGL 2.1-ish when it comes to gfx, but what is more important for PDS games is a fast CPU and as much RAM as is tolerated.

Our games do a lot of simulation in the background and not so much gfx work.
(source)
 
Fun (multi)thread!
As many of you guys already have said:
Multithreading and concurrent systems are not easy...
You cannot just put mutexes everywhere in existing code and expect it to be fast just because you were able to spawn 8 different threads to do the work.
Most of the time you will just introduce deadlocks, or code that actually runs slower due to the overhead of locking and thread spawning and/or because it doesn't actually allow any concurrency.
Systems need to be designed from a multithreading (mt) perspective, and even when they are, they might be designed from a perspective that is bad for different stages of the game (i.e. not really scalable).