Which means the OS wastes its time doing cross-core context switches... Not the best way to squeeze the highest performance out of your CPU... Under Linux, whatever the viewer, it always uses 100% of one CPU core for the main loop and renderer, and a few percent of the other cores (which may still climb up to 100% of a second core during texture decoding after a TP to a new sim, for example) for background threads (texture caching and decoding, threaded network requests and the corresponding non-blocking DNS requests, etc).
Which is normal, really...
Really?... I don't see any difference here, under Linux... Oh yes, there is one *big* difference: when clearing part of the texture cache to make room in it, instead of "hanging" for one or two full seconds like LL's viewers (v1/2/3) do, the Cool VL Viewer uses time-sliced cache purging, which only results in several tiny "pauses" of less than 0.1s each, since it clears the textures in small batches. If you don't like it, you can disable it in Advanced -> Caches -> "Time-Sliced Texture Cache Purges".
Also, in Advanced -> Network, you can enable "Multi-Threaded Curl", which is enabled by default in the newest v3 viewers and disabled by default in the latest Cool VL Viewer releases (it caused crashes on some Windows systems, but should now work just fine in both v1.26.1.8 and v1.26.0.20). This option has been reported to make the viewer run more smoothly on some Windows systems.
There's no such option... It's not the user application that decides whether it will use multiple cores, or how many: it's the OS. The application can only be written in a threaded way, so that it's easier for the OS to spread its code over several cores. I guess you meant "RunMultipleThreads" instead.
Do *not* do that!... It will most probably cause bad things to happen. The video driver should let applications decide whether they want to and, most importantly, can use threads or not: to be thread-safe and prevent crashes, the application code must follow precise rules, and forcing threading, even within the video driver code, causes much havoc when the application is not designed to be thread-safe. In the viewer, only some parts of the code (the ones designed to run as threads) are thread-safe, and certainly not the main loop, during which rendering happens.