I recently got a Logitech G15 keyboard, and when I'm running a game that doesn't have a program to communicate between the game and the keyboard, I usually run the Logitech Performance Monitor. While playing Crysis recently, I noticed that both CPU cores were maxing out around 95-96%, while memory usage was relatively stable at around 73%. That's when it struck me that I've never seen a similar performance meter for a GPU, from any graphics card. Not that it's an essential tool; I only run the Logitech Performance Monitor on my G15 out of curiosity anyway. Why do you think we haven't seen a GPU meter, for example in benchmarking software such as SiSoft Sandra? It's odd that these programs will measure even hard drive performance, yet there's no similar measurement of the GPU.
I found this while digging.
I'm no graphics programmer, but it seems to me that generally the GPU is going to be at 100% (by that metric) since it's such a specialized processor.
The most meaningful measure of GPU performance is frames per second.
Agreed. In that case, what the OP most likely wants is FRAPS, which has built-in support for outputting its statistics to the G15 display.
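For anyone curious what a FRAPS-style counter is doing under the hood, it's just counting frames against wall-clock time. Here's a minimal sketch in Python; render_frame() is a hypothetical stand-in for whatever actually draws a frame.

```python
import time

def render_frame():
    # Hypothetical stand-in for the real rendering work; here we
    # just burn ~16 ms to simulate a frame at roughly 60 FPS.
    time.sleep(0.016)

def run_with_fps_counter(duration_s=5.0):
    frames = 0
    window_start = time.perf_counter()
    end_time = window_start + duration_s
    while time.perf_counter() < end_time:
        render_frame()
        frames += 1
        now = time.perf_counter()
        # Once per second, report frames rendered in that window.
        if now - window_start >= 1.0:
            print(f"{frames / (now - window_start):.1f} FPS")
            frames = 0
            window_start = now

if __name__ == "__main__":
    run_with_fps_counter()
```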
Temperature doesn't correlate tightly enough with utilization. There are basically three or four temperature thresholds on GPUs: roughly idle, light 2D load, full 3D load, and (if things go wrong) overheating.
Trying to divide it any more finely than that will disappear into the noise floor of the cheap thermal diodes and A/D converters used to measure temperatures in commodity computing devices. Even if you had an extremely high-resolution temperature reading, you still wouldn't get much more resolution than that, since the processor's load changes much faster than the chip can shed or gain heat.
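To make that concrete, here's a rough sketch of the most you could honestly report from a temperature reading: bucket it into those few coarse states. It assumes an NVIDIA card with nvidia-smi on the PATH, and the cutoff values are made-up illustrations, not real spec points.

```python
import subprocess

# Hypothetical threshold cutoffs in degrees C; real values vary by card.
BUCKETS = [
    (45, "idle"),
    (60, "light 2D load"),
    (85, "full 3D load"),
]

def read_gpu_temp_c():
    # Query the GPU core temperature via nvidia-smi (NVIDIA cards only).
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader"]
    )
    return int(out.decode().strip().splitlines()[0])

def coarse_state(temp_c):
    for cutoff, label in BUCKETS:
        if temp_c < cutoff:
            return label
    return "overheating"

if __name__ == "__main__":
    t = read_gpu_temp_c()
    print(f"{t} C -> {coarse_state(t)}")
```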
Pipelines != cores (CPUs have pipelines too), so your metaphor fails (although it really fails on account of not actually being a metaphor, the use of automobiles notwithstanding).
However, you are correct that GPU utilization would really need to be tracked at the pipeline level, and the chips just don't support it, mostly because it would cost a lot of die space that's needed to actually do the things people buy video cards for, all to support a relatively useless feature that reports how busy the card is.
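For what it's worth, the closest thing vendors do expose is a single coarse, whole-chip utilization figure, nothing per-pipeline. As a sketch of what a G15-style GPU meter could poll, here's a loop using NVIDIA's NVML through the pynvml package (this assumes an NVIDIA card with pynvml installed; other vendors need different APIs).

```python
import time
import pynvml

def poll_gpu_utilization(interval_s=1.0, samples=10):
    # NVML reports one aggregate utilization percentage for the whole
    # chip, plus one for the memory controller -- coarse, not per-pipeline.
    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
        for _ in range(samples):
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)
            print(f"GPU {util.gpu}%  memory controller {util.memory}%")
            time.sleep(interval_s)
    finally:
        pynvml.nvmlShutdown()

if __name__ == "__main__":
    poll_gpu_utilization()
```

That single percentage is exactly the kind of "how busy is it" readout the OP was asking about, and its coarseness is the trade-off described above: fine-grained accounting would cost die space the card needs for actual rendering.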