Inferis said:
My library currently includes support for provinces too...
It's all in C#, and in separate assemblies, so it could be used as a lib.
Yes, your dev environment is closer to the Pegasus project than mine :)
 
Update. I'm working on the downscaling method to make the zoomed maps. Using the algorithm mentioned a couple of pages ago it works fast, but the text on each province comes out very light.

Now I've been reading up on bicubic interpolation and it seems I should use something like this instead of just taking mean values. However, interpolation like this will take some time to calculate, so I will provide two functions, one for just saving the map quickly and one for when the user is ready to release their map and wants higher quality.
 
WiSK said:
Now I've been reading up on bicubic interpolation and it seems I should use something like this instead of just taking mean values.

What are you taking these mean values for? And what is "bicubic interpolation"?

I've understood it up till now, but you've lost me now.
 
tombom said:
What are you taking these mean values for? And what is "bicubic interpolation"?

I've understood it up till now, but you've lost me now.
In order to create lightmap3 by scaling lightmap1, my idea was to just reduce the size of each quadtree leaf (divide the area of each leaf by 16) and combine the smallest leaves. It's very fast because you can just concatenate the 16 quadtrees, reorder the province descriptor indices, and then iterate through the leaves to combine all the ones smaller than 4x4.
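The fast path looks roughly like this in C# (the Leaf struct and the flat leaf lists here are just my stand-ins, not the actual library types):
Code:
using System.Collections.Generic;

// Hypothetical stand-ins for the real quadtree types.
struct Leaf { public int Size; public byte Shade; public short Province; }

static List<Leaf> ConcatAndShrink(List<Leaf>[] sixteenTrees, short[] indexMap)
{
    var result = new List<Leaf>();
    foreach (var tree in sixteenTrees)               // 1) concatenate the 16 quadtrees
    {
        foreach (var l in tree)
        {
            Leaf leaf = l;
            leaf.Province = indexMap[leaf.Province]; // 2) reorder descriptor indices
            leaf.Size /= 4;                          // 3) side / 4 == area / 16
            result.Add(leaf);
        }
    }
    // 4) leaves that were smaller than 4x4 now have Size 0 (sub-pixel)
    //    and must be combined with their neighbours -- the tricky step
    //    described below.
    return result;
}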

However, this last step proved not so simple. I had guessed that the best way to calculate the shading was to take the mean shading value of all 16 pixels, and to determine the province descriptor by taking the modal value of all 16 pixels. For the province descriptors this is mostly fine, but you do lose some accuracy on rivers and such at higher zoom levels.
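The per-block arithmetic I mean is roughly this (a sketch, assuming the 16 source pixels come in as two parallel arrays):
Code:
using System.Collections.Generic;

// Reduce one 4x4 source block to a single target pixel:
// mean of the 16 shading values, mode of the 16 province descriptors.
static void ReduceBlock(byte[] shades, short[] provinces,
                        out byte shade, out short province)
{
    int sum = 0;
    foreach (byte s in shades) sum += s;
    shade = (byte)(sum / shades.Length);          // mean shading value

    var counts = new Dictionary<short, int>();
    province = provinces[0];
    foreach (short p in provinces)                // modal province descriptor
    {
        int n;
        counts.TryGetValue(p, out n);
        counts[p] = n + 1;
        if (n + 1 > counts[province]) province = p;
    }
}
A single river pixel out of 16 never wins the vote, which is why rivers suffer at higher zoom levels.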

For the shading there are several inaccuracies. This is mostly down to the fact that anything bigger than a 1x1 leaf is smoothed into its bottom and right neighbours (as you'll know from the many discussions in this thread). Since I am reducing a 4x4 leaf into a 1x1 leaf without accounting for the smoothing, there is already a big error there. Secondly, when combining quadtree leaves of 2x2 I was hoping that the error wouldn't matter (it's only half a pixel...), but for things like province names and borders this is crucial. Worst of all, since the province names and borders are rather thin, you get the following effect (value 10 is the province colour, higher values represent a dark border):
Code:
original four 4x4 areas
10 10 10 10   10 10 10 [B]50   30[/B] 10 10 10   10 10 10 10
10 10 10 10   10 10 10 [B]40   40[/B] 10 10 10   10 10 10 10
10 10 10 10   10 10 10 [B]50   30[/B] 10 10 10   10 10 10 10
10 10 10 10   10 10 10 [B]60   20[/B] 10 10 10   10 10 10 10

resultant four 1x1 scaled 'areas'
10   20   15   10
As you can see (well it would be more obvious if I posted a picture here but I'm at work right now), what in the original four areas would display a strong border does not transform very well to a smaller resolution. The border has in effect been smoothed away.
 
The reduced maps I posted before were done with mean interpolation. I didn't think it was important enough to do more research on it... It's all about priorities, baby. :)

That said: GDI+ includes bicubic interpolation. That does mean I'd have to generate an image for each 64x64 block, resize it, and then recompress it, which is rather costly and time-consuming.
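For reference, per block it would be something like this (sketch, untested):
Code:
using System.Drawing;
using System.Drawing.Drawing2D;

// Shrink one decompressed 64x64 block by 'factor' using GDI+'s
// built-in bicubic filter; the result then has to be recompressed.
static Bitmap ShrinkBlock(Bitmap block, int factor)
{
    var result = new Bitmap(block.Width / factor, block.Height / factor);
    using (Graphics g = Graphics.FromImage(result))
    {
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        g.DrawImage(block, new Rectangle(0, 0, result.Width, result.Height));
    }
    return result;
}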

Wisk, can you point me at some good bicubic algorithm links? I might include stuff like this later on...
 
Inferis said:
Wisk, can you point me at some good bicubic algorithm links? I might include stuff like this later on...
There is a load of stuff all over the net that I've found, but no definite conclusions yet. Searching for it is tricky, because Google returns far more hits about how to use Photoshop than explanations of the algorithms themselves.

What I've discovered is that you use a function called a filter which resamples each pixel based on its neighbours. The filter seems to be a matrix which determines how to weight the source pixels. If I understand it right, you basically take the sum of the source pixels multiplied by the filter matrix, and that gives you the final value for the resultant pixel. The general algorithm in computer graphics for this is called convolution. What is interesting is that convolution is used for all kinds of things which I previously considered unrelated. For example, blurring and sharpening are both convolution operations, just with a different filter. Emboss and posterize are similarly just a different filter.
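In code, the whole thing is just a weighted sum per pixel. Something like this (my own toy version, greyscale only):
Code:
using System;

// Convolve a greyscale image with a filter matrix: each output pixel
// is the weighted sum of the source pixels under the kernel.
static byte[,] Convolve(byte[,] src, float[,] kernel)
{
    int h = src.GetLength(0), w = src.GetLength(1);
    int kh = kernel.GetLength(0), kw = kernel.GetLength(1);
    var dst = new byte[h, w];
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            float sum = 0;
            for (int ky = 0; ky < kh; ky++)
                for (int kx = 0; kx < kw; kx++)
                {
                    // clamp at the image edges so every tap is valid
                    int sy = Math.Min(h - 1, Math.Max(0, y + ky - kh / 2));
                    int sx = Math.Min(w - 1, Math.Max(0, x + kx - kw / 2));
                    sum += src[sy, sx] * kernel[ky, kx];
                }
            dst[y, x] = (byte)Math.Min(255f, Math.Max(0f, sum));
        }
    return dst;
}

// Blur and sharpen are the same loop with different weights:
// blur:    { {1/9f,1/9f,1/9f}, {1/9f,1/9f,1/9f}, {1/9f,1/9f,1/9f} }
// sharpen: { {0,-1,0}, {-1,5,-1}, {0,-1,0} }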

When reducing an image, what's important to the eye is that the image stays relatively sharp. On the other hand, you don't want jagged edges or 'ringing' (ghost effects around edges or differently coloured areas). Since we are reducing images quite considerably, we are very likely to lose much of the detail. So what I've been searching for is a filter which works well for image reduction as opposed to enlargement, retains changes in contrast, but doesn't introduce many artefacts. At the moment I'm looking at the Lanczos filter, but it is slow (two calls of sine per pixel -- I think), and there seem to be several variations, and I don't yet understand what they are all for.
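The kernel itself is short, at least. This is the standard form; 'a' is the lobe count, which seems to be what the variations change:
Code:
using System;

// Lanczos kernel: sinc(x) * sinc(x/a), zero outside |x| < a.
// Note the two sine calls per evaluation -- hence the slowness.
static double Lanczos(double x, int a)
{
    if (x == 0.0) return 1.0;
    if (Math.Abs(x) >= a) return 0.0;
    double px = Math.PI * x;
    return a * Math.Sin(px) * Math.Sin(px / a) / (px * px);
}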

So when I've been able to compare a couple of the filters and look at the results, I'll post more about my research.
 
Here's an article which covers the basics of convolution, http://www.gamedev.net/reference/programming/features/imagefil/

Incidentally, I've read somewhere that GDI+ includes some interpolation filters. So maybe you can use those instead.

EDIT: Oh, and techniques similar to what we need are commonly used for 'mipmapping' in 3D games, for rendering faraway textures.
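Each mip level is the previous one at half size; the usual quick-and-dirty generator is a plain 2x2 box average, i.e. the same mean reduction as above:
Code:
// Produce the next mip level by averaging each 2x2 block of the
// previous level (a box filter, the simplest mipmap generator).
static byte[,] NextMipLevel(byte[,] src)
{
    int h = src.GetLength(0) / 2, w = src.GetLength(1) / 2;
    var dst = new byte[h, w];
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            dst[y, x] = (byte)((src[2 * y, 2 * x] + src[2 * y, 2 * x + 1] +
                                src[2 * y + 1, 2 * x] + src[2 * y + 1, 2 * x + 1]) / 4);
    return dst;
}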
 
Update. I finally had some time to try out my downscaling idea. I've written a whole bunch of code for it. Not surprisingly, it didn't work first time, so nothing to show. I'll debug it tomorrow and post a screenie of whatever comes out.
 
Johan said:
I do not have any fancy interpolation code :)

Photoshop works fine for me..
And which function do you use when downscaling? Bilinear, bicubic, Lanczos, or just the "default" resizing option - whatever that is?

Photoshop will probably be better for users who want to maintain a full map (i.e. 3 separate copies of the lightmap, etc.). However, I'm offering resampling of lightmap1 for users who just want to change a province name or move a border. Some people might not have a graphics editor capable of doing 'fancy' downscaling.
 
I've been experimenting a bit too...
Shrinking the maps using GDI works, but is much slower than my "native" method (which makes sense).
Also, I shrink 2x2 blocks to one block (effectively halving the map size), but the shrinking leaves artifacts at the borders of the block. Not really a problem when resizing a whole image, but extremely annoying when doing parts of an image.
Resizing the map as a whole is not an option, as an 18944x7296 bitmap is not a very good treat for your system if you load it into GDI+ ;)

Back to square one, in other words. There are other options (for example, resizing overlapping blocks to cancel out the artifacts), but I'm not going to occupy myself with this for now. Can do that laterz.
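Roughly what I mean by the overlap idea (a sketch only; 'pad' is a guess, a few source pixels should cover a bicubic kernel's reach):
Code:
using System.Drawing;
using System.Drawing.Drawing2D;

// Resize a block together with a margin of neighbouring pixels, then
// crop the margin away, so the filter never sees an artificial edge.
// paddedBlock = the 64x64 block plus 'pad' extra source pixels on each
// side; pad should be a multiple of 'factor'.
static Bitmap ShrinkWithOverlap(Bitmap paddedBlock, int factor, int pad)
{
    var scaled = new Bitmap(paddedBlock.Width / factor, paddedBlock.Height / factor);
    using (Graphics g = Graphics.FromImage(scaled))
    {
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        g.DrawImage(paddedBlock, new Rectangle(0, 0, scaled.Width, scaled.Height));
    }
    int m = pad / factor;                  // margin size after scaling
    var crop = new Rectangle(m, m, scaled.Width - 2 * m, scaled.Height - 2 * m);
    Bitmap result = scaled.Clone(crop, scaled.PixelFormat);
    scaled.Dispose();
    return result;
}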

Back to writing photoshop files, it is.