So I'll have lasted almost two years at the top of the team. I apparently took the lead early on December 23rd, 2013, and I should lose it about eight days short of the two-year mark.
I won't upgrade my gear before the 14nm-process GPUs become available. That's like six months away. Handruin will have time to build a substantial lead by then. I don't think I'll buy enough GPUs to reach 1 million PPD either.
So Mark lasted almost ten years; I've lasted almost two. We'll see how long Handruin's reign lasts.
In the past few days my main workstation has been having issues with my display adapters and Folding@home. When I get home in the evening and try using my system, the monitor won't wake up even though the machine is up and running, and I have to force a restart via the power button. When I check the Windows Event Viewer, I see several instances of the following message: "Display driver nvlddmkm stopped responding and has successfully recovered." None of my hardware is overclocked (CPU or GPUs). I haven't updated F@H in a while; the GPU drivers were updated a couple of weeks ago. I did remove the EVGA Precision X software because it was behaving erratically and replaced it with MSI Afterburner. I usually keep the fan-speed curve pretty aggressive so that the GPUs don't exceed 60C while F@H is running.
The other factor is that I'm running HandBrake day and night at the same time. Maybe it's consuming too much CPU time? The strange thing is that the system runs all through the night with F@H and HandBrake churning away; when I check on the machine in the morning, it's fine. What I do notice is that those error messages happen just after I finish using the system in the morning before heading to work. Almost as if turning off the monitor causes some issue?
edit: I also downloaded a recommended utility called memtestCL-1.00 and ran it against both GPUs to check for memory errors. No errors found on either GPU after 100 iterations of testing.
I had oddities with the F@H client on my main computer (Asus Strix 970 GPU). Any time I launched Picasa, the GPU client would start failing, and I'd have to reboot the machine after an eventual lockup. I tried reinstalling the drivers 'clean,' but it didn't help. I ended up moving the GPU into my Plex server, and it has been running trouble-free 24/7 for several weeks now (fresh install of Win10, but without Picasa).
I'm back to folding, after a fashion. The Intel 510 my desktop is pushing isn't recognized by the folding client (I wonder if we'll see GPUs.txt updated again to add support for it?), but the CPU gets me a cool 1k PPD. That's five times as much as I was getting back when I was folding with a P4. I'm not making nearly enough points for it to be worth the power costs on my end unless I can figure out how to get GPU folding going on the integrated graphics. Mostly, right now, I'm just doing it to stress-test the machine by running it at full tilt.
I have a cheapo Lenovo server at the office that goes to sleep for no apparent reason. Its power management is set to "high performance" in the BIOS, and hibernation is disabled in Windows Server. Yet, every few days, the machine falls asleep. Since this is our FTP server, it's quite annoying.
The sleep state takes longer to kick in when a session is left open on the server. I've started a FAH client on it to test whether it stops the hibernation from occurring. This is not a powerful system, so don't expect a high contribution from me out of this.
I'd be folding if I had something that would make it worth the energy cost. As it is, CPU-only folding isn't worth adding to the pool; it'll have to wait until if/when I build a desktop with a real GPU.
I've also been trying to keep utility bills in check; the rates have gone up a lot in my area. Once it gets a little colder, I can fold. My office has electric heat anyway, so I may as well offset some of it with my Folding@home heat.
Totally understandable with regards to the cost of power. I justify it to myself with the fact that I burn less propane in the current conditions. My logic fails in the summer, when I pay again to remove that heat with A/C.
Edited to add: at 12 cents per kWh and $1.59 per gallon of propane, it's still basically twice as expensive to heat with electricity via GPUs/CPUs as it is to burn propane in the forced-air furnace, in my case.
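For anyone curious how the "basically twice as expensive" figure works out, here's a rough back-of-envelope check. The electricity rate and propane price are from the post above; the propane energy content (~91,500 BTU/gallon) and the 90% furnace efficiency are my assumptions, so plug in your own numbers:

```shell
#!/bin/sh
# Back-of-envelope heating-cost comparison: electric resistance heat
# (GPUs/CPUs dump essentially all of their power draw as heat) vs. a
# propane forced-air furnace. Energy content and furnace efficiency
# are assumed values, not from the post.
awk 'BEGIN {
    btu_per_kwh = 3412            # physical constant: 1 kWh = 3412 BTU
    btu_per_gal = 91500           # typical propane energy content (assumed)
    furnace_eff = 0.90            # assumed furnace efficiency

    electric_rate = 0.12          # $/kWh, from the post
    propane_price = 1.59          # $/gallon, from the post

    # Cost to put one million BTU of heat into the house.
    cost_electric = electric_rate * 1e6 / btu_per_kwh
    cost_propane  = propane_price * 1e6 / (btu_per_gal * furnace_eff)

    printf "electric: $%.2f/MMBTU\n", cost_electric
    printf "propane:  $%.2f/MMBTU\n", cost_propane
    printf "ratio:    %.1fx\n", cost_electric / cost_propane
}'
```

With these assumed inputs the ratio lands around 1.8x, which lines up with the "basically twice as expensive" observation.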
I've been contributing with one machine for a while, except when I started having issues with Nvidia drivers.
When they sort out their driver issues, I'll start folding with my main machine while idle again.
With the migration to a machine with an actual GPU in it, I've taken to folding again. Estimated PPD is somewhere in the range of 1500-2000 due to the weak hardware, but something is better than nothing. I actually wrote my first simple initscript for OpenRC to do it -- nothing more than adding the command to start the FAHClient program, but it still had to be done, as an OpenRC script for it was nowhere to be found.
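For reference, a minimal OpenRC service script along those lines might look like the sketch below. The binary path, config location, and working directory are my assumptions -- adjust them to wherever FAHClient actually lives on your system:

```shell
#!/sbin/openrc-run
# Minimal OpenRC service for FAHClient (a sketch; paths are assumptions).

command="/usr/bin/FAHClient"
command_args="--config=/etc/fahclient/config.xml"
command_background="true"            # FAHClient doesn't daemonize itself
pidfile="/run/fahclient.pid"
directory="/var/lib/fahclient"       # working dir for work units and logs

depend() {
    need net
}
```

Dropped into /etc/init.d/fahclient and marked executable, it can be enabled at boot with `rc-update add fahclient default`.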
With an actual, real GTX-series GPU, I'm making points hand-over-fist compared to how I used to. It just goes to show...
With the Phenom II and the GT 730 I was averaging 30k PPD; with this machine I'm clearing 173k. I knew the 960 would absolutely wipe the floor with the 730 -- I'd have to be an idiot not to -- I just didn't realize the hardware was this much more powerful.