Cool VL Viewer forum




Observations on framerate 

Joined: 2020-07-08 23:18:37
Posts: 34
First of all, I run Ubuntu with a Gnome desktop environment. I have an AMD RX 580 8GB graphics card (Mesa 20.1.3). I try every Linux viewer available, even test versions when I can find them. I like to use Ultra settings with some adjustments. I was quite interested in the new EEP system and even logged into Windows a bit to use the LL EEP test viewer, and Black Dragon once EEP was added to it.

In my time with EEP, my initial excitement has been dulled by the horrible optimization and major issues with specularity. I have been using the official Kokua and test versions of Firestorm and the LL viewer compiled for Linux, and so far they fail to make sun and moon light properly "shine" on my avatar. I put a lot of time into custom materials, even using the alpha channels on normal and specular textures to get variations in 'roughness' and environment intensity. Perhaps the code used in these viewers does not yet include the fixes. Cool VL Viewer, on the other hand, DOES show my avatar materials as I think they should be rendered!

Now, of course, I also keep an eye on the framerates I get. EEP reduces framerate by up to 50% no matter what viewer I use, compared to the Windlight renderer. I can turn off shadows and sometimes double the framerate. However, what I'm also concerned with is the 'feel' of the rendering. This is where I may have an issue with Cool VL Viewer, and it could very well be specific to my particular hardware and Linux desktop environment. For example, running Singularity (WL) or Kokua (EEP) feels reasonably smooth at, say, 20 frames per second. Running Cool VL Viewer at 20 frames per second feels a bit 'choppy', and I have noticed this with both the WL and EEP renderers. I cannot figure out why. My initial guess was Vsync, but Cool VL Viewer has the Vsync setting off by default. Gnome's Mutter compositor always gives me a tear-free experience and works very well without fuss compared to older systems, though possibly improvements have been made that were not backported to Ubuntu 18.04. The Mesa drivers for my GPU support adaptive sync. Right now, I'm just thinking out loud... Any ideas?


2020-07-11 14:02:29

Joined: 2009-03-17 18:42:51
Posts: 5523
Please be aware that whatever viewer you use under Linux, they are all X11 applications, not Wayland ones (and AFAIK Gnome 3+ is fond of Wayland)... Also be wary of compositors, which are known to ruin OpenGL performance (it is best not to use one at all !).

All viewers will likely benefit from proprietary drivers (Mesa sucks for NVIDIA and is not much better for ATI/AMD !). If you use anything other than an Intel iGPU, you should look for a proper proprietary Linux driver for your GPU.

That said, I see no "choppy" frame rates here (on Linux as well, whether on Q6600 + GTX460, 2500K + GTX970 or 9700K + GTX1070Ti systems).

The Cool VL Viewer benefits from NVIDIA's multi-threaded drivers (available in the proprietary drivers), something many viewers cannot use (because they fail to initialize Xlib in a thread-safe state). That aside, there is no difference in how it renders things (except for an optimized main loop, which does give higher frame rates but won't cause "choppiness").

Note that SL viewers are sadly purely mono-threaded where rendering is concerned, so rendering performance is largely tied to the single-core performance of your CPU, and things could get "choppy" if the CPU core the OS assigns to the viewer gets interrupted by other software or by blocking OS calls (a slow hard disk comes to mind: putting the viewer cache on a RAM disk would solve such an issue).
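For instance, a RAM disk for the viewer cache can be as simple as a tmpfs mount (the mount point and size below are just examples; point the viewer's cache location at it in the preferences afterwards):
Code:
# Example only: create a 2 GB RAM disk usable as the viewer cache directory
sudo mkdir -p /mnt/viewer_cache
sudo mount -t tmpfs -o size=2G,mode=1777 tmpfs /mnt/viewer_cache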

Finally, when comparing viewers, be sure to do so with the same settings applied... For example, the Cool VL Viewer will by default boost attachment textures, mesh LODs and oblong sculpty LODs, all of which can cause higher network traffic (which could make things choppy if the network is of poor quality) and a higher rendering load. Also disable any viewer-specific feature (Lua comes to mind, since its callbacks could, if scripted too heavily, cause "choppiness" when they get called, e.g. on avatar rezzing for an on-rez callback). Benchmarking is an art.


2020-07-11 14:25:40

Joined: 2020-07-08 23:18:37
Posts: 34
Henri Beauchamp wrote:
Please be aware that whatever viewer you use under Linux, they are all X11 applications, not Wayland ones (and AFAIK Gnome 3+ is fond of Wayland)... Also be wary of compositors, which are known to ruin OpenGL performance (it is best not to use one at all !).

I logged into a Wayland session on Ubuntu (Gnome). It was a different experience. I may need to experiment more with it, but I doubt it would be a viable desktop environment for my use with lots of games, viewers, and Blender. I then had a bit of an adventure in a 'live' Slackware session, since I had installed the ISO to a USB stick for testing. Even though everything was run from RAM, the experience was barely any different under Slackware (KDE) than it had been under Ubuntu (Gnome). My system has 16 GB DDR4 3200 (2x8GB) RAM, set in the BIOS to actually run at 3200 MHz with XMP.

I installed XFCE as an alternate desktop environment. I turned off XFCE's vertical sync (rather ineffective, in my experience) just to be sure it would not affect my results. I got screen tearing with Vsync "off" in Singularity. In Cool VL Viewer, setting 'DisableVerticalSync' to TRUE or FALSE had no discernible effect: there was no screen tearing either way. Is Vsync ever really off in Cool VL Viewer? I restarted the viewer after each change just to be sure. Edited to add that running Cool VL Viewer in XFCE with the compositor's vsync disabled felt smoother.

Henri Beauchamp wrote:
All viewers will likely benefit from proprietary drivers (Mesa sucks for NVIDIA and is not much better for ATI/AMD !). If you use anything other than an Intel iGPU, you should look for a proper proprietary Linux driver for your GPU.

Nvidia's Nouveau driver performs poorly compared to the proprietary drivers. However, Mesa for AMD and Intel has greatly improved in recent years. If Phoronix tests are to be believed, the open-source drivers are on average better than the AMDGPU-PRO drivers for Linux. I play many AAA Windows games on Steam using DXVK at far higher framerates than the native Linux versions that use OpenGL. Certainly, Vulkan is where AMD shines, and one would hope LL eventually moves to the Vulkan API.

I have a second computer with an Intel i3 6100 @ 3.7 GHz that only uses the integrated graphics. While the iGPU is not very powerful, the Mesa driver works well enough to get usable framerates with Singularity. Singularity allows me to set 1GB texture memory manually. I can even turn on the advanced lighting model. All other viewers have rendering artifacts no matter what settings are used, if they even allow graphics adjustments at all.

Henri Beauchamp wrote:
That said, I see no "choppy" frame rates here (on Linux as well, whether on Q6600 + GTX460, 2500K + GTX970 or 9700K + GTX1070Ti systems).

The Cool VL Viewer benefits from NVIDIA's multi-threaded drivers (available in the proprietary drivers), something many viewers cannot use (because they fail to initialize Xlib in a thread-safe state). That aside, there is no difference in how it renders things (except for an optimized main loop, which does give higher frame rates but won't cause "choppiness").

Based on my tests, it does seem to be something related to compositing and vertical sync. When I used a GTX 960 for a couple of years, I could and did use 'force full composition pipeline' to rid myself of annoying screen tearing with the proprietary drivers, especially when using Compton as a compositor. It was always a battle between screen tearing and micro-stutter, neither of which I found acceptable. With the current kernel and Mesa drivers having mature Polaris [RX 4xx-5xx] chipset support, I get adaptive sync on my 144 Hz monitor with no tearing.

Henri Beauchamp wrote:
Note that SL viewers are sadly purely mono-threaded where rendering is concerned, so rendering performance is largely tied to the single-core performance of your CPU, and things could get "choppy" if the CPU core the OS assigns to the viewer gets interrupted by other software or by blocking OS calls (a slow hard disk comes to mind: putting the viewer cache on a RAM disk would solve such an issue).

My /home is a partition on an SSD. Tests running viewers in a 'live' (RAM-based) environment show little gain. I'm sure you know SL viewer code better than most!

Henri Beauchamp wrote:
Finally, when comparing viewers, be sure to do so with the same settings applied... For example, the Cool VL Viewer will by default boost attachment textures, mesh LODs and oblong sculpty LODs, all of which can cause higher network traffic (which could make things choppy if the network is of poor quality) and a higher rendering load. Also disable any viewer-specific feature (Lua comes to mind, since its callbacks could, if scripted too heavily, cause "choppiness" when they get called, e.g. on avatar rezzing for an on-rez callback). Benchmarking is an art.

Benchmarking is indeed an art! I use all the current SL viewers for Linux because each one does something I really like, and each one is unique in a good way. Also, as someone who is constantly creating content, I need to know what's new, what's changed, and what's coming.


2020-07-12 14:44:10

Joined: 2009-03-17 18:42:51
Posts: 5523
KJ_Eno wrote:
I installed XFCE as an alternate desktop environment. I turned off XFCE's vertical sync (rather ineffective, in my experience) just to be sure it would not affect my results. I got screen tearing with Vsync "off" in Singularity. In Cool VL Viewer, setting 'DisableVerticalSync' to TRUE or FALSE had no discernible effect: there was no screen tearing either way. Is Vsync ever really off in Cool VL Viewer? I restarted the viewer after each change just to be sure.
VSync is turned off in the viewer: you do not (and will never) need it, since double-buffering is used. I have never seen any tearing here (tearing might be seen when using a compositor, but I never use one). Be sure NOT to try and force VSync via driver or compositor configuration !
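For reference, the driver-level VSync knobs are these environment variables; make sure nothing in your setup forces them on:
Code:
# Do NOT force VSync at the driver level either:
export __GL_SYNC_TO_VBLANK=0   # NVIDIA proprietary driver
export vblank_mode=0           # Mesa (AMD/Intel)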

Quote:
However, Mesa for AMD and Intel has greatly improved in recent years.
Try AMD's proprietary driver, and come back once you have benchmarked both Mesa and AMD's OpenGL drivers... Maybe things have improved over the years, but my last ATI GPU (Radeon 9700) never saw any improvement in Mesa and, sadly, the last Catalyst driver available for it (which doubles the frame rate compared to even modern Mesa) is incompatible with contemporary Xorg and kernel versions. This, plus the fact that ATI/AMD never replied to or even acknowledged my bug reports (the Radeon 9700 has a nasty crash bug with the "hardware" mouse pointer that is worked around in Mesa but never was in Catalyst), while NVIDIA always acknowledges promptly and fixes the bugs I report (and supports their GPUs for 10+ years with updated drivers, whereas AMD stops offering updates around two years after they stop selling a GPU), made me give up entirely on ATI/AMD for graphics cards. I never regretted that choice !

Quote:
I have a second computer with an Intel i3 6100 @ 3.7 GHz that only uses the integrated graphics. While the iGPU is not very powerful, the Mesa driver works well enough to get usable framerates with Singularity. Singularity allows me to set 1GB texture memory manually.
You can override the VRAM auto-detection in the Cool VL Viewer as well, when your card's VRAM amount is not properly detected. See the LL_VRAM_MB environment variable in the cool_vl_viewer wrapper script.
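For example, to force the viewer to assume 1 GB of VRAM (the value here is just an illustration; use your card's actual amount, in megabytes), uncomment and adjust the corresponding line in the wrapper script:
Code:
## Only needed when VRAM auto-detection fails:
export LL_VRAM_MB=1024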


2020-07-12 15:02:54

Joined: 2020-07-08 23:18:37
Posts: 34
On a whim, really, I downloaded the ISO for Open Mandriva Lx 4.1 KDE (Zen). I now know this distro has a long history, but I had not considered trying it until reading a Phoronix article. I was curious to know how an operating system specifically compiled for Ryzen processors would perform. It's one thing to look at some tables of artificial benchmarks on Phoronix and quite another to install and run it. What works for servers (where the money is, apparently) isn't necessarily the same as what a typical desktop user wants.

I ran the live ISO. It booted up amazingly quickly. I wish I had timed how quickly Slackware 14.2 booted so I could have seen which of the two was faster. I ran my own optimized build of the Cool VL Viewer from the live session. It ran at least 20% faster than under Ubuntu 20.04, so I went ahead with the installation. Not only do I get better framerates, I also see no "choppiness" now. I cannot say whether that is due to having a complete operating system compiled for my CPU or to KDE simply being better than Gnome at compositing.


2020-07-23 14:15:10

Joined: 2009-03-17 18:42:51
Posts: 5523
KJ_Eno wrote:
Not only do I get better framerates, I also see no "choppiness" now. I cannot say whether that is due to having a complete operating system compiled for my CPU or to KDE simply being better than Gnome at compositing.

A compositor is actually totally unneeded and a real nuisance... It might bring "pretty" effects, but who *needs* semi-transparent windows or GPU-driven 3D effects for a desktop ???... I personally prefer zero effects, with menus that pull down instantly and windows that appear without delay (I've got no time to waste "admiring" windows, menus, icons and whatnot that take one, two or more seconds to become usable just because they play an animation !).

I'm personally using MATE v1.14 (v1.14 because it's the last GTK2 version, and I hate GTK3 and its flat themes), without any compositor, and with Sawfish as the window manager (better than any other WM out there, but MATE's default one would do just as well if you don't need a fully configurable/scriptable WM).


2020-07-23 15:36:05

Joined: 2020-07-08 23:18:37
Posts: 34
Henri Beauchamp wrote:
KJ_Eno wrote:
Not only do I get better framerates, I also see no "choppiness" now. I cannot say whether that is due to having a complete operating system compiled for my CPU or to KDE simply being better than Gnome at compositing.

A compositor is actually totally unneeded and a real nuisance... It might bring "pretty" effects, but who *needs* semi-transparent windows or GPU-driven 3D effects for a desktop ???... I personally prefer zero effects, with menus that pull down instantly and windows that appear without delay (I've got no time to waste "admiring" windows, menus, icons and whatnot that take one, two or more seconds to become usable just because they play an animation !).

I'm personally using MATE v1.14 (v1.14 because it's the last GTK2 version, and I hate GTK3 and its flat themes), without any compositor, and with Sawfish as the window manager (better than any other WM out there, but MATE's default one would do just as well if you don't need a fully configurable/scriptable WM).

I really do like MATE. Ubuntu MATE 20.04 was indeed faster than the default Ubuntu 18.04 with Gnome 3. There are often too many variables to pin down a specific reason for better performance, but I realize current MATE uses GTK3. The first distro I used full time was Ubuntu 10.04. I absolutely hated Unity and Gnome 3 when they first came out. For quite some time, XFCE was my desktop of choice. That said, Plasma ain't all that bad these days! 8-)

Edit: I looked up the official Sawfish webpage. Some of the themes made for it look really good. I've used similar WMs in the past, most notably Openbox on LXDE. I had a look at what is currently available for OpenMandriva. They maintain IceWM, so I'm trying it out. They have one theme that looks like glossy black paint on the window title bars and panel. I can live with that until I have a look at other themes I can download. Speaking of window title bars, I absolutely hate what Gnome 3 has done by sticking icons in the title bars, making them 'fat' and causing inconsistencies between the menu systems of different applications.


2020-07-23 16:09:37

Joined: 2020-07-08 23:18:37
Posts: 34
Tests with and without a compositor enabled show negligible differences on my system. Tests between IceWM and KWin also show no discernible difference in frame rate. I'm already impressed by how well OpenMandriva runs compared to Ubuntu, and I'm not about to go full Gentoo. :lol:

I had a look at the cool_vl_viewer wrapper script and made a few changes:
Code:
#!/bin/bash

## Here are some configuration options for Linux.

## If the viewer fails to properly detect the amount of VRAM on your graphics
## card, you may specify it (in megabytes) via this variable.
#export LL_VRAM_MB=512

## NVIDIA-specific optimizations. Check your driver documentation for ATI or
## Intel GPUs...

## NEVER sync to V-blank !!!  This slows down the whole viewer TREMENDOUSLY !
## To fight tearing, ALWAYS prefer triple-buffering (which you can use when you
## have 256Mb or more VRAM) to V-blank syncing.
#export __GL_SYNC_TO_VBLANK=0
#export vblank_mode=0

## When on (=1), allows to use multi-threaded rendering at the driver level
## with the newest drivers (v310+), but it also increases a lot the CPU usage
## (typically using one more processor core at 100%). The FPS gain will vary
## from 10 to 25%, with higher gains in more rendering intensive scenes, so it
## is a significant gain... for people with a quad-core CPU (dual-core CPUs
## will see a barely lower gain, but the viewer will use 100% of the CPU).
## With a single-core CPU, it's definitely best to turn this off (set to 0).
## With "auto", this script will automatically adjust to 0 (for mono-core CPUs)
## or 1 (for multi-core ones).
#export __GL_THREADED_OPTIMIZATIONS=auto
export __mesa_glthread=true

## Brings a slight speed increase with NVIDIA GPUs, at the cost of a slightly
## higher CPU usage...
#export __GL_YIELD=NOTHING

## For faster logins with NVIDIA cards
#export __GL_SHADER_DISK_CACHE=1

## Some Mobility ATI Radeon users with fglrx driver report random X server
## crashes when the mouse cursor changes shape and the CPU is 100% loaded. The
## default behaviour (when the variable below is set to "auto") is to disable
## mouse cursor shape changes when the computer the Cool VL Viewer is running
## on got a Mobility Radeon fitted and fglrx is in use (and provided the
## 'lspci' utility is available on the system). If you wish to force-enable
## this work around, then set the variable below to anything but "auto" (e.g.,
## set it to "x"). If you do not want to have the work around enabled at all
## (i.e. your particular Mobility radeon graphics card is not affected by that
## bug), just comment out the statement below.
#export LL_ATI_MOUSE_CURSOR_BUG=auto

## Everything below this line is just for advanced troubleshooters.
##-------------------------------------------------------------------

I commented out the NVIDIA optimizations. A quick test with "export vblank_mode=0" made no difference, so I commented that out too. "__GL_THREADED_OPTIMIZATIONS=auto" has an equivalent in Mesa called "__mesa_glthread=true". The result appears to be a slightly more stable frame rate. I didn't notice any loss in frame rate, but the gain was no more than 1 or 2 fps at best.

The best way to get higher frame rates is GameMode by Feral Interactive. My CPU governor is normally set to a balanced profile. GameMode switches the CPU to the performance profile (along with any other configured changes) while an application is running. For example, in the .desktop file I changed the Exec line to "Exec=gamemoderun /opt/CoolVLViewer-1.28/cool_vl_viewer" and also set the Terminal option to "true". When I exit the program, the computer automatically returns to the balanced CPU profile. I have used GameMode for a while to run Steam games by adding "gamemoderun %command%" to the launch options. It really makes a difference! :ugeek:
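For clarity, here is a sketch of what the relevant lines of such a modified .desktop launcher would look like (the install path is the one quoted above; adjust it to your own setup):
Code:
[Desktop Entry]
Type=Application
Name=Cool VL Viewer (GameMode)
Exec=gamemoderun /opt/CoolVLViewer-1.28/cool_vl_viewer
Terminal=true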


2020-07-26 17:35:19

Joined: 2009-03-17 18:42:51
Posts: 5523
KJ_Eno wrote:
The best way to get higher frame rates is GameMode by Feral Interactive. My CPU governor is normally set to a balanced profile. GameMode switches the CPU to the performance profile (along with any other configured changes) while an application is running. For example, in the .desktop file I changed the Exec line to "Exec=gamemoderun /opt/CoolVLViewer-1.28/cool_vl_viewer" and also set the Terminal option to "true". When I exit the program, the computer automatically returns to the balanced CPU profile. I have used GameMode for a while to run Steam games by adding "gamemoderun %command%" to the launch options. It really makes a difference! :ugeek:

Err... It is pretty obvious that if you do not run your CPU at its maximum frequency, performance will suck (big time) !... SL viewer performance is almost entirely bound by the CPU (with the exception of super-old systems with a weak GPU but a decent CPU), and the faster the CPU, the better the frame rates (with pretty much a 1:1 ratio between CPU frequency increases and frame rate increases).

Note that transitions from one C-state to another (past C1) are also extremely detrimental and will incur "hiccups" in the frame rate (because above C1, some functional blocks in the CPU get deactivated, such as caches, which take a long time to re-populate when returning to C0, i.e. to full activity).

Under Linux, there are several ways to set the working frequency of the CPU, even on motherboards whose BIOS/EFI does not normally allow overclocking. With modern CPUs, your best bet is to:
  • Set "energy_performance_preference" to "performance" (echo "performance" >/sys/devices/system/cpu/cpuN/cpufreq/energy_performance_preference, with N going from 0 to the number of cores minus one).
  • Set "scaling_governor" to "performance" (echo "performance" >/sys/devices/system/cpu/cpuN/cpufreq/scaling_governor).
  • Limit the usable C-states to C0 and C1 by disabling C2 (echo 1 >/sys/devices/system/cpu/cpuN/cpuidle/state2/disable). This can also be done in some BIOS/EFI setups (also disable all P-states there, if possible). A combined sketch of these commands is given below.
There are quite a few other knobs in /sys that you can adjust, depending on your CPU brand/model and how the kernel was compiled (you could, for example, disable SMT on a per-core basis thanks to the "CPU hotplug" feature of Linux, when it is compiled into your kernel).
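Here is a minimal sketch of the above as a loop over all cores, to be run as root (which files actually exist depends on your CPU's frequency driver and kernel configuration, hence the checks):
Code:
#!/bin/bash
# Apply "performance" settings and disable C-states beyond C1 on every core.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    epp="$cpu/cpufreq/energy_performance_preference"
    gov="$cpu/cpufreq/scaling_governor"
    c2="$cpu/cpuidle/state2/disable"
    [ -w "$epp" ] && echo performance > "$epp"
    [ -w "$gov" ] && echo performance > "$gov"
    [ -w "$c2" ]  && echo 1 > "$c2"
done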


2020-07-26 18:28:49

Joined: 2009-03-17 18:42:51
Posts: 5523
KJ_Eno wrote:
"__GL_THREADED_OPTIMIZATIONS=auto" has an equivalent function in Mesa called "__mesa_glthread=true". The result appears to cause the frame rate to be more stable. I didn't notice any loss in frame rate, but it wasn't more than a 1 or 2 fps improvement at best.
I found no reference whatsoever to a "__mesa_glthread" environment variable for Mesa, but I found multiple references on the Web to "mesa_glthread"... But only for Wine games !... Be careful about placebo effects (which would be the case with __mesa_glthread) ! :lol:

See the Mesa environment variables chapter of its official documentation.
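For the record, the form documented by Mesa has no leading underscores, so a test worth (re)trying would be the following (whether it actually benefits the viewer is another matter):
Code:
# Mesa's documented variable (note: no leading underscores):
mesa_glthread=true /opt/CoolVLViewer-1.28/cool_vl_viewer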


2020-07-27 00:09:23