Note that utilizing multiple cores covers more than just how many of an application's threads can run in parallel. A system running a game consists of more than just the game itself (unlike in the good old days of MS-DOS). The OS runs multiple threads of its own, as do all the services it has active. Also, some API calls an application makes can actually run in background threads, spawned by the OS, unbeknownst to the application. All of these benefit from having multiple cores, even if the application itself is single-threaded.
By offloading all this background work to other cores, even a single-threaded application benefits: it no longer has to share its core with those background processes.
Writing to a file is one such task. When an application writes data to a file, it doesn't need to wait until all the writing is completed and the data is actually present in the file on disk. The OS intervenes, copies the to-be-written data into internal buffers, and then uses a background thread (possibly running on a different core) to handle the actual writing of that buffered data to disk, while the application's thread has already moved on to something else, in parallel.
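To make that concrete, here's a minimal sketch (assuming POSIX; the file name is made up) showing that a plain write() returns as soon as the kernel has buffered the data, and that you have to explicitly ask, via fsync(), to wait for it to actually hit the disk:

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <cstring>

int main() {
    // "save.dat" is just a placeholder name for this example
    int fd = open("save.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) return 1;

    const char* data = "game state...";

    // Returns once the kernel has copied the bytes into its own buffers;
    // the application continues while the OS flushes to disk in the
    // background, possibly on another core.
    write(fd, data, std::strlen(data));

    // Only if we explicitly request it do we block until the data is
    // physically on disk:
    fsync(fd);

    close(fd);
    return 0;
}
```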
Having said that, there is of course low-hanging fruit that applications can take advantage of, such as handing the playing of (background) music off to a separate thread, which can then run on a separate core. And I'm pretty sure the Clausewitz engine does that. Other tasks that don't rely on sharing memory with the game engine are also obvious candidates for background threads running on other cores.
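As an illustration of that structure, here's a minimal sketch; playMusicLoop() is a hypothetical stand-in for whatever the real audio code does, the point is just the thread handoff:

```cpp
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> running{true};

// Hypothetical audio loop: shares no memory with the game state,
// so it needs no locking against the main thread.
void playMusicLoop() {
    while (running.load()) {
        // decode and queue the next chunk of audio here...
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

int main() {
    std::thread musicThread(playMusicLoop);  // may run on another core

    // ... the main game loop runs here, unaffected by the audio work ...
    std::this_thread::sleep_for(std::chrono::seconds(1));

    running.store(false);  // signal the music thread to shut down
    musicThread.join();
    return 0;
}
```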
But as soon as multiple threads need access to the same pool of shared data, you need to regulate that access or risk corrupting the data (as AndrewT pointed out). This means such access needs to be serialized, which, as the name suggests, allows only one thread access at any given time. All the others have to wait. And if this happens a lot (which you can assume in a game engine), you end up in a situation where pretty much only one of the multiple parallel threads is actually executing at any given moment. If that's the case, it's more efficient to run that code in one single thread and eliminate the overhead of the serialization.
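Here's a minimal sketch of that serialization, using a mutex around a shared counter; if the threads spend most of their time inside the locked section, they effectively run one at a time:

```cpp
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

std::mutex poolMutex;
long sharedPool = 0;  // stand-in for the game's shared state

void worker() {
    for (int i = 0; i < 100000; ++i) {
        // Only one thread can hold the lock at a time; the others wait.
        std::lock_guard<std::mutex> lock(poolMutex);
        ++sharedPool;  // without the lock, this would be a data race
    }
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) threads.emplace_back(worker);
    for (auto& t : threads) t.join();
    // Correct result (400000), but the work was largely serialized:
    std::cout << sharedPool << '\n';
    return 0;
}
```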
There is no magic bullet here. No golden solution. Each case has to be reviewed, analyzed and profiled to see what the best solution is. And realize that splitting your code up over multiple parallel threads introduces its own additional overhead, which isn't needed if you keep an algorithm single-threaded.
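As a rough illustration of that overhead (actual numbers vary wildly by platform, this is not a serious benchmark), compare doing a tiny task in-line versus spawning a thread for it:

```cpp
#include <chrono>
#include <iostream>
#include <thread>

// A deliberately tiny amount of work.
void tinyTask() {
    volatile int x = 0;
    for (int i = 0; i < 100; ++i) x = x + i;
}

int main() {
    using namespace std::chrono;

    auto t0 = steady_clock::now();
    tinyTask();                        // in-line: just the work
    auto t1 = steady_clock::now();

    std::thread t(tinyTask);           // threaded: work + create/join cost
    t.join();
    auto t2 = steady_clock::now();

    std::cout << "inline:   "
              << duration_cast<nanoseconds>(t1 - t0).count() << " ns\n"
              << "threaded: "
              << duration_cast<nanoseconds>(t2 - t1).count() << " ns\n";
    return 0;
}
```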
An example where multithreading works great is a web server. Each user connected to the server is serviced by its own thread, running in parallel with the threads of the other connected users. Same thing if the web server hosts multiple web sites: each web site is serviced by its own listener thread, running in parallel with all the others. And this works because neither the various web sites nor the various users connecting to them make use of (much) shared data, which means each of these threads can run unhindered, without (much) need for serialization.
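For the curious, here's a minimal sketch of that thread-per-connection model (assuming POSIX sockets; error handling mostly omitted); since the connection threads share no data, no locking is needed:

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <thread>

// Each connected user gets this function on its own thread.
// It touches only its own socket and buffer: no shared state, no locks.
void serveClient(int clientFd) {
    char buf[1024];
    ssize_t n;
    while ((n = read(clientFd, buf, sizeof buf)) > 0)
        write(clientFd, buf, n);  // echo the data back
    close(clientFd);
}

int main() {
    int listenFd = socket(AF_INET, SOCK_STREAM, 0);

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);  // arbitrary example port
    bind(listenFd, reinterpret_cast<sockaddr*>(&addr), sizeof addr);
    listen(listenFd, 16);

    for (;;) {
        int clientFd = accept(listenFd, nullptr, nullptr);
        if (clientFd < 0) continue;
        // One thread per connected user, running in parallel with the rest.
        std::thread(serveClient, clientFd).detach();
    }
}
```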