Gaming Mode does 'extra stuff' to ProBalance, please elaborate.

Started by Marctraider, February 13, 2017, 10:58:17 AM


Marctraider

OK, so I have an Intel 3770K with C3/C6 and C1E disabled in the BIOS, which means only SpeedStep is enabled, so I can control my clock speed solely through Windows power plans.

I have an AutoHotkey script running for my game which detects whether it is focused, and based on that it dynamically adjusts power plans. It would make no sense to run at full clock when the game is not focused or is minimized, right?

(Sadly, this is a function not implemented in Process Lasso, which only detects whether a game is running or not.)

Anyway, I read that Gaming Mode, besides changing power plans, also does some extra 'tweaking' to ProBalance. I quote:

Thus, Gaming Mode will induce this new highest performance power plan, and also make a few tweaks to the behavior of ProBalance - which will keep background processes from interfering with your game play.

So, since Process Lasso's Gaming Mode is based solely on whether a game is running, and is inferior to my script, which changes the plan based on the active/inactive window... my question is: what else does Gaming Mode do to ProBalance?

And is it possible to manually induce this behavior, or how can I replicate it?

Obviously this post is also a plea to make Gaming Mode window active/inactive based. ;D

Jeremy Collake

That could certainly be accomplished (as you have!), so I'm thinking maybe I should consider adding this in v9 while I'm working on it.

The request is basically: go out of Gaming Mode if the game is not in the foreground, then re-enter it when the game goes back into the foreground. Right?

As for what it does NOW, it excludes the process from ProBalance action and adjusts some variables in ProBalance's background activity.

Thanks
Software Engineer. Bitsum LLC.

Tarnak

I thought it was being renamed to 'Performance Mode'... since some people do not game. :)

edkiefer

Yes, in v9.x.x.xx it is, or at least it runs the Bitsum Highest Performance (BHP) power plan.
Bitsum QA Engineer

Marctraider

Quote from: Jeremy Collake on February 14, 2017, 12:06:36 PM
That could certainly be accomplished (as you have!), so I'm thinking maybe I should consider adding this in v9 while I'm working on it.

The request is basically: go out of Gaming Mode if the game is not in the foreground, then re-enter it when the game goes back into the foreground. Right?

As for what it does NOW, it excludes the process from ProBalance action and adjusts some variables in ProBalance's background activity.

Thanks

That's basically it. :-)

Based on whether the game is in the foreground or background, apply a power profile (switch from Balanced to High Performance/Bitsum Highest Performance) and back. Preferably you could switch to whatever power plan you want, but keep it simple for ease of use.

I mean, it makes no sense to run a power-hungry power plan when the game is minimized, right? It only costs unused watts. :) It's still a ~25 watt difference, methinks.

This is the very basic script I use in AHK at the moment, just making use of 'powercfg':
http://pastebin.com/cMrfrYWQ
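The logic boils down to something like this (sketched here in Python rather than AHK, in case the pastebin link goes away; the exe name is a placeholder, the GUIDs are the stock Windows High performance and Balanced schemes, and it needs the psutil package):

```python
# Rough sketch of a focus-based power plan switcher (placeholder exe name).
import ctypes
import subprocess
import time

import psutil  # pip install psutil

GAME_EXE = "mygame.exe"  # placeholder for whatever game you run
HIGH_PERF = "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"  # stock "High performance" scheme
BALANCED = "381b4222-f694-41f0-9685-ff5bb260df2e"   # stock "Balanced" scheme

def foreground_exe() -> str:
    """Name of the process that owns the foreground window."""
    hwnd = ctypes.windll.user32.GetForegroundWindow()
    pid = ctypes.c_ulong()
    ctypes.windll.user32.GetWindowThreadProcessId(hwnd, ctypes.byref(pid))
    try:
        return psutil.Process(pid.value).name().lower()
    except psutil.Error:
        return ""

active = None
while True:
    wanted = HIGH_PERF if foreground_exe() == GAME_EXE else BALANCED
    if wanted != active:  # only call powercfg when the plan actually changes
        subprocess.run(["powercfg", "/setactive", wanted], check=False)
        active = wanted
    time.sleep(2)  # poll every couple of seconds
```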

edkiefer

While I do run a modified Balanced power plan for normal use and jump to HP for heavier usage, there's no way you can save 25 W going from Balanced to HP; "maybe" a few watts max, at least for normal 4-8 core Intel CPUs, and probably for AMD stuff too in most cases.
Bitsum QA Engineer

Marctraider

No? I don't know, it was a guesstimate.

It's ~1600 MHz vs 4500 MHz here, 1.08 V vs 1.33 V.


I'd say it's more than just a few watts...

Roughly, temperatures from Balanced to High Performance increase by 10 °C, and that is with minimal load running at that 4500 MHz.

Core Temp shows some 20 watts more usage from idle to full clock without load, which is probably also an estimate based on Vcore/clock speed etc., but still.
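For what it's worth, a back-of-the-envelope scaling check (just the usual dynamic-power ≈ f·V² rule of thumb; it ignores leakage, uncore power, and C-state residency, so treat it as a very rough estimate):

```python
# Very rough scaling estimate: dynamic CPU power ~ f * V^2 (ignores leakage and
# package/uncore power, and assumes the same light load in both cases).
low_f, low_v = 1600, 1.08    # MHz and volts on Balanced (idle clock)
high_f, high_v = 4500, 1.33  # MHz and volts with the clock locked high

scale = (high_f / low_f) * (high_v / low_v) ** 2
print(f"~{scale:.1f}x the dynamic power at the same load")  # prints ~4.3x
```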

Are you sure your power profile even adjusts and locks the core clock? Because it doesn't on 99% of PCs, since the power states in the BIOS are usually set wrongly in the first place (C1E, C3/C6, SpeedStep, etc.).

Even going from auto (enabled) to manually enabled or disabled can behave differently on a buggy BIOS. ;-)

edkiefer

Quote from: Marctraider on February 15, 2017, 06:50:23 PM
No? I don't know, it was a guesstimate.

It's ~1600 MHz vs 4500 MHz here, 1.08 V vs 1.33 V.


I'd say it's more than just a few watts...

Roughly, temperatures from Balanced to High Performance increase by 10 °C, and that is with minimal load running at that 4500 MHz.

Core Temp shows some 20 watts more usage from idle to full clock without load, which is probably also an estimate based on Vcore/clock speed etc., but still.

Are you sure your power profile even adjusts and locks the core clock? Because it doesn't on 99% of PCs, since the power states in the BIOS are usually set wrongly in the first place (C1E, C3/C6, SpeedStep, etc.).

Even going from auto (enabled) to manually enabled or disabled can behave differently on a buggy BIOS. ;-)
I was going to ask that next. I keep all C-states and SpeedStep enabled in the BIOS (I have no trouble with voltage droop from idle to load).

So no, in both plans, Balanced and HP, my system idles at 1600 MHz, but with HP it goes from 1600 right to max clock, whereas in Balanced it steps up to max (1600 > 1800 > 2100, etc.).
Still, even with that, I bet you would only save 5-10 W; you can easily monitor it.

I don't want to get off-topic; I was just making a general statement. It does depend on BIOS settings and, of course, on what voltage ends up being the steady state.
Bitsum QA Engineer

Marctraider

No worries

I actually primarily disable C1E and C3/C6 to minimize DPC latency (10-40 µs vs 300-500 µs), but also because I can't keep my clock fixed at 4500 MHz with them enabled; they just seem to override SpeedStep / Windows CPU clock management.

The situation you describe, where the clock only jumps from 1600 to 4500 MHz and back without steps, is actually what happens when SpeedStep doesn't work at all (at least here!).

It just eventually jumps back to 1600 MHz on idle, ignoring any power profiles.

The only way here is to use SpeedStep exclusively. I'm sure different CPU/BIOS/mainboard combinations all have deviations in behavior, though...

I have no wattage meter so I can't check it, but either way it just seems way more efficient to use active window detection rather than a blunt rule that says 'either the program runs, or it does not'.

Process Lasso is a great program, and any addition, however slight, that makes it better is a welcome change, IMHO. :)

I would really appreciate this relatively simple addition. 8)

chris635

I'm on an AMD rig, overclocked to 4.96 GHz at 1.59 V. I do have all C-states enabled and Cool'n'Quiet (AMD's version of Intel's SpeedStep). In Windows, my power plans are set up like this:

Everyday use: Balanced, 2100 MHz (idle) at 1.16 V up to 4.96 GHz, with core parking enabled.

Media playing: High Performance, 4.96 GHz, with half core parking enabled.

Gaming: Bitsum Highest Performance, 4.96 GHz, core parking disabled.

I am using an ASUS motherboard and my voltage is in offset mode (voltage ramps up or down depending on load), with clocks going up and down depending on use.
Using manual mode, the voltage stays constant but the clocks still go up and down.

If I have C-states disabled, I use an extra 30 watts... but this is AMD... LOL!
Chris

edkiefer

Yeah, if you're overclocked and running a high voltage, it very well could add up.

I don't see it with stock clocks or with a mid OC (my voltage at 4.4 GHz is only 1.17 V; at an idle of 1600 MHz it's 0.935 V).

So it's going to vary a lot if you're OC'ed.
Bitsum QA Engineer

buddybd

Sorry for the necro but I didn't think my question warranted a new thread.

There's an application on Steam called CPUCores that makes several enhancements that improve the gaming experience. I believe what it does is automatically reassign various processes to a single core, temporarily disable some services, and assign High priority to the game. I was wondering if something like that could be incorporated into Process Lasso as well?
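To be concrete about what I mean (this is just my guess at the general idea, not CPUCores' actual internals), something like this sketch with Python's psutil, where the game exe is a placeholder:

```python
# Sketch of the "CPUCores-style" idea: herd everything else onto the first core
# and raise the game's priority. Not CPUCores' real implementation; uses
# Windows-only priority constants; requires psutil.
import psutil

GAME_EXE = "csgo.exe"  # placeholder

for p in psutil.process_iter(["name"]):
    try:
        name = (p.info["name"] or "").lower()
        if name == GAME_EXE:
            p.nice(psutil.HIGH_PRIORITY_CLASS)  # raise the game's priority
        else:
            p.cpu_affinity([0])  # pin everything else to the first core
    except psutil.Error:
        pass  # protected/system processes will refuse; skip them
```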

A friend of mine owns the application, and he says that while PL kept things fluid, it didn't really boost performance like CPUCores did (in CS:GO). He has a 2500K machine. I personally haven't tried it yet, but I don't really want to spend more money on process optimization than I already did with PL. :3

edkiefer

Quote from: buddybd on December 01, 2017, 05:36:19 PM
Sorry for the necro but I didn't think my question warranted a new thread.

There's an application on Steam called CPUCores that makes several enhancements that improve the gaming experience. I believe what it does is automatically reassign various processes to a single core, temporarily disable some services, and assign High priority to the game. I was wondering if something like that could be incorporated into Process Lasso as well?

A friend of mine owns the application, and he says that while PL kept things fluid, it didn't really boost performance like CPUCores did (in CS:GO). He has a 2500K machine. I personally haven't tried it yet, but I don't really want to spend more money on process optimization than I already did with PL. :3
Hi

You have to be careful with micro-managing process affinities; it is possible things could get worse.
For example, say you had some services and a few apps running in the background (a browser) and you set them all on core #1.
Then, if you're dealing with an i5 quad core and are playing a modern game which uses all of the cores, it might be worse.
I would much rather adjust priorities so the CPU time slice of background processes is smaller while still maintaining default affinities. BTW, PL does this by default, automatically.

On setting some processes to High priority, you also have to be careful; it's much better to lower the priorities of background processes than to raise a few focused ones.
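Roughly, that 'lower the background' approach looks like the sketch below (Python with psutil; the game exe is a placeholder, and ProBalance's real logic is more involved than this):

```python
# Sketch of "lower the background instead of raising the foreground".
# Not ProBalance's actual logic; Windows priority constants; requires psutil.
import psutil

GAME_EXE = "mygame.exe"  # placeholder for the process you care about

for p in psutil.process_iter(["name"]):
    try:
        name = (p.info["name"] or "").lower()
        # Only demote processes currently at Normal, so already-low or
        # deliberately elevated/system processes are left alone.
        if name != GAME_EXE and p.nice() == psutil.NORMAL_PRIORITY_CLASS:
            p.nice(psutil.BELOW_NORMAL_PRIORITY_CLASS)
    except psutil.Error:
        pass  # protected processes: leave them alone
```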

That said, having affinities and priorities change for some processes while a game or performance-critical process is running may be a positive thing; it just depends on usage and hardware. That would have to be implemented into PL.
Bitsum QA Engineer

buddybd

Quote from: edkiefer on December 01, 2017, 07:45:12 PM
Hi

You have to be careful with micro-managing process affinities; it is possible things could get worse.
For example, say you had some services and a few apps running in the background (a browser) and you set them all on core #1.
Then, if you're dealing with an i5 quad core and are playing a modern game which uses all of the cores, it might be worse.
I would much rather adjust priorities so the CPU time slice of background processes is smaller while still maintaining default affinities. BTW, PL does this by default, automatically.

On setting some processes to High priority, you also have to be careful; it's much better to lower the priorities of background processes than to raise a few focused ones.

That said, having affinities and priorities change for some processes while a game or performance-critical process is running may be a positive thing; it just depends on usage and hardware. That would have to be implemented into PL.

There don't seem to be any really big complaints about CPUCores, so I'd be willing to use such functions if they were implemented in PL as well.

I think the only priority that's problematic is Realtime; anything up to High should be fine. I used to do that before when I had a lower-spec PC and never ran into any issues.

edkiefer

Hi. On priorities, there are two mindsets: one is where you set whatever you want to run (whether it's a foreground or background process) to an elevated priority.
The other is to lower all the processes you're not focused on, so the ones in focus (foreground) get the most CPU time slice.

It's been my experience that, overall, the second works better; the reason is that you need to be very sparing with the number of processes set above Normal.
By default, Windows boosts the priority of focused (foreground) processes, so setting one of these to High "generally" doesn't do much, but there are always exceptions.

PL by default works by lowering background priorities so the foreground gets the most out of the CPU, but of course you can alter things if needed; the default should work best for 95% of systems.
When you get to servers or systems with a large number of cores and mostly background processes, things can change, but the defaults should still work well.

As I mentioned, I noted the feature down, so it will get evaluated and we will see (basically a watchdog rule: change process affinities/priority when process X runs).
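In rough form, that watchdog rule is just a polling loop, something like the sketch below (Python with psutil; the exe name is a placeholder, and a real implementation would whitelist system processes rather than demoting everything):

```python
# Sketch of the watchdog idea: when process X appears, demote everything else;
# when X exits, restore the original priorities. Placeholder names; requires psutil.
import time
import psutil

GAME_EXE = "mygame.exe"  # the "X" in the rule
saved = {}               # pid -> original priority class

while True:
    running = any((p.info["name"] or "").lower() == GAME_EXE
                  for p in psutil.process_iter(["name"]))
    if running and not saved:
        for p in psutil.process_iter(["name"]):
            try:
                if (p.info["name"] or "").lower() != GAME_EXE:
                    saved[p.pid] = p.nice()
                    p.nice(psutil.BELOW_NORMAL_PRIORITY_CLASS)
            except psutil.Error:
                pass  # skip protected processes
    elif not running and saved:
        for pid, old in saved.items():
            try:
                psutil.Process(pid).nice(old)  # put the old priority back
            except psutil.Error:
                pass  # process may have exited in the meantime
        saved.clear()
    time.sleep(5)  # poll every few seconds
```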
Bitsum QA Engineer