The Official PC building thread - 4th Edition
If you want to ask something like "What components should I pick for my new PC?", start a new topic in our PC building forum.
Senior Member (Mr-Movies)
12. July 2012 @ 19:15
Will you see any difference, TRULY, between 35fps and non-interlaced 60fps? I don't think so, and that is the true argument here. Also I don't think it is double either!
Member (Blazorthon)
12. July 2012 @ 19:56
Originally posted by Mr-Movies:
Will you see any difference, TRULY, between 35fps and non-interlaced 60fps? I don't think so, and that is the true argument here. Also I don't think it is double either!
Sandy Bridge has a roughly 50% lead in integer performance per clock, and floating point isn't favorable for AMD either, although it's a somewhat closer comparison. If FPU performance gets mixed in, then that lead can jump even higher. There are also other factors to consider. Yes, maybe 35 versus 60 might be a little too much, but it's not nearly double, it's only about 70% more. Considering both FPU- and integer-heavy work, that's actually reasonable for some of the more CPU-limited gaming workloads. For some, it might be more like 40 versus 60 or so.
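As a quick sanity check on that 70% figure (just arithmetic on the numbers above, nothing more):

    \[ \frac{60 - 35}{35} \approx 0.71 \]

i.e. 60fps is roughly 70% more than 35fps, well short of the 100% gap that "double" would imply.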

I can see the difference between 50FPS and 60FPS. 35FPS looks like crap in comparison. Heck, I even know a few people who complain about motion sickness with very low frame rates such as that when playing high-paced games. Heh, don't get me started on the difference between 60FPS and 90-120FPS on a 120Hz display.


Senior Member (Mr-Movies)
12. July 2012 @ 20:46
I've played with both and I'm not seeing it the way you are, sorry. I've had this discussion with Sam and he thinks experience isn't important but benchmarks are. What a joke as I'll take experience any day over BS.

We will have to disagree I think, no offense Blaze...
Member (Blazorthon)
12. July 2012 @ 20:47
Originally posted by Mr-Movies:
I've played with both and I'm not seeing it the way you are, sorry. I've had this discussion with Sam and he thinks experience isn't important but benchmarks are. What a joke as I'll take experience any day over BS.

We will have to disagree I think, no offense Blaze...

No offense taken. Agree to disagree.
ddp (Moderator)
12. July 2012 @ 21:27
good.
AfterDawn Addict (theonejrs)
12. July 2012 @ 22:58
Originally posted by sammorris:
Originally posted by Blazorthon:
Originally posted by theonejrs:

Blazorthon,

For me it's a "No Brainer", I buy an FX-8150 drop it in my socket AM3+ 990XA MB, and smile a whole hell of a lot! Total cost for me? $214.49 delivered, and I can do SLI or Crossfire, if I still want to play some more!

Best Regards,
Russ
Problem is that with fairly high end graphics, the 8150 can be a severe bottle-neck for CPU limited games.
Indeed, anything you find yourself needing crossfire for, you'll probably find you need a lot more CPU power than the FX-8150 can provide.

Sam,

I think SLI would be better than Crossfire, since I can use either. I seem to remember some discussions on the subject, way back. I guess I should have spent the extra $20 for the GA-990FXA, with 2x16 SLI, instead of the 2x8 SLI of my 990XA. I could buy a dual-GPU video card, but I think it will still only run 2x8 on the XA motherboard. I'm in no hurry! LOL!!

Best Regards,
Russ

GigaByte 990FXA-UD5 - AMD FX-8320 @4.0GHz @1.312v - Corsair H-60 liquid CPU Cooler - 4x4 GB GSkill RipJaws DDR3/1866 Cas8, 8-9-9-24 - Corsair 400-R Case - OCZ FATAL1TY 550 watt Modular PSU - Intel 330 120GB SATA III SSD - WD Black 500GB SATA III - WD Black 1TB SATA III - WD Black 500GB SATA II - 2 Asus DRW-24B1ST DVD-Burners - Sony 420W 5.1 PL-II Surround Sound - GigaByte GTX550/1GB 970MHz Video - Asus VE247H 23.6" HDMI 1080p Monitor


AfterDawn Addict (sammorris)
4 product reviews
13. July 2012 @ 02:57


I know I can see the difference between 50fps and 75fps, but then since this is a benchmark, perhaps you wouldn't?



Afterdawn Addict // Silent PC enthusiast // PC Build advisor // LANGamer Alias:Ratmanscoop
PC Specs page -- http://my.afterdawn.com/sammorris/blog_entry.cfm/11247
updated 10-Dec-13
AfterDawn Addict
7 product reviews
13. July 2012 @ 03:15
I can certainly see a difference between 35 and 60 FPS. Grand Theft Auto IV desperately needs a high framerate LOL! I tried playing it on my 8600GT. It didn't go over too well. My vision isn't what it used to be either. I hope to have LASIK eye surgery one day. I'm sure Blu-ray will look even more incredible than it already does ;) I play GRID at 80fps. Smooth as silk. I haven't tried 60 and lower, but I'm sure I'd see a difference. Though I'd be looking for it :p



To delete, or not to delete. THAT is the question!
AfterDawn Addict (sammorris)
4 product reviews
13. July 2012 @ 05:24
60 is fine, but the moment you dip a single fps below that level it becomes very obvious, simply due to monitor refresh rates. While you'd think the small difference between 59 and 60fps should be negligible, in actual fact because the monitor only refreshes at 60Hz in the vast majority of cases, you're actually seeing a missed frame, once every second or so - and for that missed frame, you've effectively got a very brief instance of 30fps, as it was two 60Hz frames before the image updated, rather than one. That IS noticeable.
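As a rough illustration of that effect, here's a small Python sketch (illustrative only - it assumes a fixed render rate and a plain 60Hz scan-out with no frame buffering tricks) counting how often a refresh has to repeat the previous frame:

    # Which rendered frame is on screen at each 60Hz refresh, assuming a
    # fixed render rate. Purely illustrative numbers, not measurements.
    def frames_shown(render_fps, refresh_hz=60.0, seconds=1.0):
        shown = []
        for i in range(int(refresh_hz * seconds)):
            t = i / refresh_hz                 # time of this refresh
            shown.append(int(t * render_fps))  # newest frame finished by then
        return shown

    for fps in (60, 59, 35):
        frames = frames_shown(fps)
        repeats = sum(1 for a, b in zip(frames, frames[1:]) if a == b)
        print(f"{fps} fps on a 60Hz display: {repeats} repeated refresh(es) per second")
    # 59fps repeats one frame per second (the brief "instance of 30fps" above);
    # 35fps repeats 25 of the 60 refreshes every second.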



Afterdawn Addict // Silent PC enthusiast // PC Build advisor // LANGamer Alias:Ratmanscoop
PC Specs page -- http://my.afterdawn.com/sammorris/blog_entry.cfm/11247
updated 10-Dec-13
AfterDawn Addict (theonejrs)
13. July 2012 @ 10:46
Someone here (oman7 I think) posted a test to see if Cinavia is already on my computer. Would they please repost it! Something strange is going on with the sound.

Thanks,
Russ

GigaByte 990FXA-UD5 - AMD FX-8320 @4.0GHz @1.312v - Corsair H-60 liquid CPU Cooler - 4x4 GB GSkill RipJaws DDR3/1866 Cas8, 8-9-9-24 - Corsair 400-R Case - OCZ FATAL1TY 550 watt Modular PSU - Intel 330 120GB SATA III SSD - WD Black 500GB SATA III - WD Black 1TB SATA III - WD Black 500GB SATA II - 2 Asus DRW-24B1ST DVD-Burners - Sony 420W 5.1 PL-II Surround Sound - GigaByte GTX550/1GB 970MHz Video - Asus VE247H 23.6" HDMI 1080p Monitor


AfterDawn Addict (sammorris)
4 product reviews
13. July 2012 @ 12:06
Originally posted by Mr-Movies:
Will you see any difference, TRULY, between 35fps and non-interlaced 60fps? I don't think so, and that is the true argument here. Also I don't think it is double either!
Well again, this is another matter of one man seeing it and another man not. You don't see the performance difference between AMDs and Core i7s and I do, and you don't see the difference between two relatively high frame rates when I do - it's all a question of tolerance. People who are very sensitive to certain fields are far less sensitive in others - I'm very sensitive to minor changes in graphics fidelity in games, desktop images and the like, but next to a professional audiophile/music engineer, there will be countless variations in the sound quality of different devices that I will struggle to notice even if I'm listening for them, and it's not as if my hearing is bad.

Not everybody perceives the world in exactly the same way.

Russ, SLI or Crossfire, whichever route you decide to take, I can assure you 16x vs 8x counts for basically nothing. I've run a pair of very high-end cards at 8x each, almost certainly higher performance cards than you will be running (no offense intended here, just using it for the sake of argument) in the form of the 4870X2 and 6970, and neither of them demonstrated any perceptible difference between 8x and 16x, so you certainly won't see it with lesser cards. A few benchmark studies have been carried out on this, and it really takes top-end gear, a select few particular titles, and 4x link speed to show any truly tangible (>10%) difference in performance using SLI or Crossfire.

There may be other reasons to choose a board with 16x slot support, but for games performance, it's almost irrelevant, at least for now.



Afterdawn Addict // Silent PC enthusiast // PC Build advisor // LANGamer Alias:Ratmanscoop
PC Specs page -- http://my.afterdawn.com/sammorris/blog_entry.cfm/11247
updated 10-Dec-13
AfterDawn Addict
7 product reviews
13. July 2012 @ 12:10
I'm unaware of a test for Cinavia on a computer. As far as I'm aware, there is no known infection detection (LOL) for PCs. Although I believe Intel is a strong advocate for Cinavia. One day, it will be at the CPU level. If I'm not mistaken...



To delete, or not to delete. THAT is the question!
AfterDawn Addict (Estuansis)
15 product reviews
13. July 2012 @ 14:58
The argument for high framerates can be proven with two words: motion blur. Our eyes create it, video games do not. They can emulate it, but they cannot create true optical blur. Our brain (or an optical camera) fills in between the series of images, creating the visual of smooth, seamless motion. This is why a film can be viewed quite smoothly at 24FPS while a game cannot. Games render everything in individual frames, then display them in a sequence. With proper optical motion blur, they would appear and play quite smoothly. But they do not have it, nor has any game ever had it. The closest games have ever come to real motion blur is Crysis' and Metro 2033's "per-object blur", which is still a poor quality emulation of the real thing.

I can readily tell the difference between 30, 60, and 90 FPS without much effort. That other people cannot, is definitely a matter of tolerance. Not facts, not experience, not hearsay; tolerance. The human eye is what creates this difference, and like every human, every human eye is different. More sensitive eyes will be more sensitive to changes in framerate. For many, these differences are readily apparent. For others, they are not. That does NOT make either party "wrong". Nor does it make anyone open to ridicule if they have an opinion on it.
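Putting plain numbers on those rates (just 1000/fps per frame; this is simple arithmetic, not a claim about perception):

    # Frame time in milliseconds for the rates mentioned above.
    for fps in (24, 30, 60, 90, 120):
        print(f"{fps:>3} fps -> {1000 / fps:.1f} ms per frame")
    # 24fps film gets away with ~41.7 ms per frame because the camera adds real
    # motion blur; a game shows a crisp still image for the whole interval.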

Like Sam, I am extremely sensitive to minute changes in image quality, both in dynamic and static imagery. Unlike Sam, I am also very sensitive to audio. I can easily point out things like minute distortion, vocal and acoustic reproduction, etc. That doesn't make me any more intelligent, better researched, or better learned. It means we have a difference in tolerances.
-----------------------------------------------------------

Also add another vote for PCIe link bandwidth not mattering. Seen and conducted several tests myself with no real difference. There are a small few titles and even smaller few circumstances within those titles that will show a difference, and that's when the link is reduced to 4x per card. 8x = Zero Difference, tested and proven by multiple sources :)



AMD Phenom II X6 1100T 4GHz(20 x 200) 1.5v 3000NB 2000HT, Corsair Hydro H110 w/ 4 x 140mm 1500RPM fans Push/Pull, Gigabyte GA-990FXA-UD5, 8GB(2 x 4GB) G.Skill RipJaws DDR3-1600 @ 1600MHz CL9 1.55v, Gigabyte GTX760 OC 4GB(1170/1700), Corsair 750HX
Detailed PC Specs: http://my.afterdawn.com/estuansis/blog_entry.cfm/11388


Member (Blazorthon)
13. July 2012 @ 17:11
With cards such as a 7970 or GTX 680, the PCIe x16 versus x8 actually can matter in some games if PCIe 2.0 is being used. Even with PCIe 3.0, the GTX 690 or the dual-Tahiti card (presumably called the Radeon 7990) might have noticeable differences between x16 and x8, at least in a few games. Also, a little off topic, but outside of gaming situations, it can matter a lot. For example, in GPGPU workloads, the 7970 can have a significant difference in performance between PCIe 3.0 x16 and PCIe 2.0 x16 (presumably also PCIe 3.0 x8).
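For reference, the raw link bandwidth behind those configurations (a back-of-the-envelope sketch using the standard per-lane figures of roughly 500 MB/s for PCIe 2.0 and 985 MB/s for PCIe 3.0 after encoding overhead; the calculation is a rough one, not taken from any benchmark):

    # Approximate one-way PCIe bandwidth per link, after encoding overhead.
    PER_LANE_MBPS = {"2.0": 500, "3.0": 985}

    def link_bandwidth_gbps(gen, lanes):
        return PER_LANE_MBPS[gen] * lanes / 1000.0

    for gen in ("2.0", "3.0"):
        for lanes in (4, 8, 16):
            print(f"PCIe {gen} x{lanes}: ~{link_bandwidth_gbps(gen, lanes):.1f} GB/s per direction")
    # Note that PCIe 2.0 x8 (~4 GB/s) matches PCIe 3.0 x4, which is part of why
    # halving the link width so rarely shows up in game benchmarks.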
AfterDawn Addict (sammorris)
4 product reviews
13. July 2012 @ 17:13
In GPGPU environments there can be some limitations. However, I still only recall losing about 5-6% performance when running the bitcoin miner last year versus someone with the same cards but 16x link speed each.



Afterdawn Addict // Silent PC enthusiast // PC Build advisor // LANGamer Alias:Ratmanscoop
PC Specs page -- http://my.afterdawn.com/sammorris/blog_entry.cfm/11247
updated 10-Dec-13
AfterDawn Addict (Estuansis)
15 product reviews
13. July 2012 @ 19:16
One of the weaknesses of GPGPU loads is that it's not really graphics work, which the cards were designed for. It's raw throughput computing, which can show bottlenecks in basically any architecture. Not to say it's not a valid gauge of performance and bandwidth limitations. But video cards were very simply NOT intended for raw throughput nor was the slot interface intended for anything like the processing power of a video card doing sheer numbers calculations.

So while I agree that the differences can be much more readily apparent in GPGPU environments, it's not the best comparison because of the mix of tech. Video cards simply do not face these particular limitations unless used for this purpose, which is very niche.

In actual gaming, where the differences are IMO more important to the end product, slot bandwidth has very little performance impact. When I went from my DFI DK-790FX-M2RS (full 16x PCIe 2.0) to my current board, a Gigabyte GA-890XA-UD3 (8x PCIe 2.0), the actual overall difference was about 2-3%, i.e. within margin of error.

When I followed this article:

http://www.techpowerup.com/reviews/AMD/...xpress_Scaling/

Taking MY cards to 4x each netted about 10% difference overall. A bit more than this article posted. They, of course, tested with a single card whereas I tested with a Crossfire configuration, so take that into account.

What's interesting is that I have yet to see a point where the percentage difference increases with the power of the card vs the bandwidth of the slot. 4870, 5850, 6850, even a 6970 all showing roughly the same percentages in my personal testing. Even more interesting to me is that some games, even very GPU limited ones, showed no difference at all. At the time I was using Battlefield Bad Company 2 for testing and it played happily at 100% performance right down to 4x bandwidth. Other games, like Crysis and Metro 2033, showed the difference quite plainly at 4x, but were completely unaffected at 8x.

I would like to see a more definitive article that tests a wider range of computational loads. I would also like to see it done with more modern, more powerful cards.

My conclusion, unless you have a ridiculously powerful video card, PCIe bandwidth will never matter. 8x is just fine.



AMD Phenom II X6 1100T 4GHz(20 x 200) 1.5v 3000NB 2000HT, Corsair Hydro H110 w/ 4 x 140mm 1500RPM fans Push/Pull, Gigabyte GA-990FXA-UD5, 8GB(2 x 4GB) G.Skill RipJaws DDR3-1600 @ 1600MHz CL9 1.55v, Gigabyte GTX760 OC 4GB(1170/1700), Corsair 750HX
Detailed PC Specs: http://my.afterdawn.com/estuansis/blog_entry.cfm/11388


Member (Blazorthon)
13. July 2012 @ 20:01
Originally posted by Estuansis:

http://www.techpowerup.com/reviews/AMD/...xpress_Scaling/

Taking MY cards to 4x each netted about 10% difference overall. A bit more than this article posted. They, of course, tested with a single card whereas I tested with a Crossfire configuration, so take that into account.

What's interesting is that I have yet to see a point where the percentage difference increases with the power of the card vs the bandwidth of the slot. 4870, 5850, 6850, even a 6970 all showing roughly the same percentages in my personal testing. Even more interesting to me is that some games, even very GPU limited ones, showed no difference at all. At the time I was using Battlefield Bad Company 2 for testing and it played happily at 100% performance right down to 4x bandwidth.

I would like to see a more definitive article that tests a wider range of computational loads. I would also like to see it done with more modern, more powerful cards.
I wasn't using the difference in some GPGPU loads as a gauge for PCIe bandwidth performance scaling as shown in FPS during games, only as a contextual example to say that the PCIe bandwidth can influence some things and I didn't intend to imply that it would be a good gauge for gaming performance differences based on the PCIe bandwidth.

As for the apparent lack of scaling, it's kinda logarithmic and can seem to not be there unless you look more closely or use somewhat extreme examples. Using a much faster card and increasing picture quality greatly in order to compensate in comparison with a much lower end card to a point where they get nearly identical FPS probably wouldn't show much of a difference in most games between different amounts of PCIe bandwidth. If I had to guess, I'd say that this is because although the greater quality needs more GPU performance, more GPU work on the same input data doesn't necessarily mean much more data needs to be shuffled back and forth through the PCIe lanes. With multiple GPUs on separate graphics cards, this is not quite true anymore because although they usually have a bridge, graphics cards still talk to each other through the PCIe lanes. However, they only need so much bandwidth for this and it probably isn't much of an overhead. Your post gives further evidence that the overhead is fairly minor in comparison to a single GPU, at least with the cards that you're using.

However, the amount of data that moves across the PCIe lanes during gaming does increase somewhat with some aspects of some games and as cards get faster, you get more data shuffling around, especially with multiple cards. Simply using faster cards can also make a greater difference if you don't increase the quality and simply go for higher FPS, especially with systems intended to make good use out of a 120Hz display. It's definitely something worth testing more extensively than what little I've seen first-hand about it. I'm not sure about how much it would affect performance, but I have no doubt that the effect will at least be noticeable in benchmarks if not in real-time play.

On that note, Nvidia cards such as the GTX 680 might not have it as bad as the 7970, due to their being able to get more bandwidth out of the same PCIe setup (I'd think the card simply has a more efficient PCIe interface). Well, that's according to tests done by Tom's.

Also, wasn't AMD's GCN architecture specifically designed for compute performance? I can understand that most other video cards and their GPUs aren't, but at least to an extent, several of Nvidia's architectures and AMD's GCN at least had compute in mind during their design, if not specifically designed for it.


Member (Blazorthon)
13. July 2012 @ 20:07
Originally posted by sammorris:
In GPGPU environments there can be some limitations. However, I still only recall losing about 5-6% performance when running the bitcoin miner last year versus someone with the same cards but 16x link speed each.

Remember, the greater PCIe bandwidth only makes much difference on cards that are fast enough to use it. You won't notice as much of a difference on a 6970 or a 5870 as you would on a 7970.


AfterDawn Addict (Estuansis)
15 product reviews
13. July 2012 @ 22:15
Quote:
I wasn't using the difference in some GPGPU loads as a gauge for PCIe bandwidth performance scaling as shown in FPS during games, only as a contextual example to say that the PCIe bandwidth can influence some things and I didn't intend to imply that it would be a good gauge for gaming performance differences based on the PCIe bandwidth.
I was simply trying to say that while it's an excellent example of some of the known bottlenecks, it's just not a fair argument when the large majority is more worried about gaming. Well aware that you didn't intend it to be a challenge :)

Quote:
With multiple GPUs on separate graphics cards, this is not quite true anymore because although they usually have a bridge, graphics cards still talk to each other through the PCIe lanes. However, they only need so much bandwidth for this and it probably isn't much of an overhead. Your post gives further evidence that the overhead is fairly minor in comparison to a single GPU, at least with the cards that you're using.
Having done some research and benchmarking I can pretty firmly say that my CPU would effectively use maybe a pair of 6970s max. Given that I have a fairly high resolution and a fondness for AA, I only need to worry about my CPU maintaining minimum/average framerates, which it can and does. Maximums are another story though and an OC'd i5/i7 could easily best my 955 in that respect.

After some experimenting and testing I've figured out that my current Crossfire scaling is basically at optimum efficiency. My CPU fully feeds these two cards to the maximum of their abilities. I drew my conclusion from two sources: 1) my CPU usage during times of low framerate in my "trouble spot" games, and 2) the fact that I get mathematically perfect gains from overclocking my video cards, and never hit a plateau as high as I go, which has been fairly high (from 775/1000 to 950/1200; 900/1150 daily usage, FurMark stable).
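For the sake of the numbers, "mathematically perfect" gains here means frame rates rising by roughly the core-clock ratio (a simple approximation that ignores memory bandwidth and CPU limits):

    \[ \frac{950}{775} \approx 1.23 \]

i.e. about a 23% higher core clock, so a fully GPU-bound scene would be expected to pick up roughly the same fraction in frame rate.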

Only mentioning this at all because many have been quick to point out my CPU as a bottleneck, and I previously had myself convinced of the same thing. My CPU is not a bottleneck at all, and is severely under-utilized. I dunno if that's saying a lot about the longevity of the Phenom II, or saying very little of the current state of games development. I'd rather like to think it's a combination of the two. Battlefield 3 being a strong example where my CPU hovers at ~50% usage while both cards are at 100% load and optimal scaling.

Quote:
Also, wasn't AMD's GCN architecture specifically designed for compute performance? I can understand that most other video cards and their GPUs aren't, but at least to an extent, several of Nvidia's architectures and AMD's GCN at least had compute in mind during their design, if not specifically designed for it.
Optimized for it yes, designed for it no. AFAIK the move towards scalar computing is because it is very flexible, and can excel at many different tasks, whether it be content acceleration, complex graphics rendering, etc. It can also be scaled much more readily allowing cheaper manufacturing costs and easier technology implementation. The good raw computing performance is part of a much broader spectrum of abilities.



AMD Phenom II X6 1100T 4GHz(20 x 200) 1.5v 3000NB 2000HT, Corsair Hydro H110 w/ 4 x 140mm 1500RPM fans Push/Pull, Gigabyte GA-990FXA-UD5, 8GB(2 x 4GB) G.Skill RipJaws DDR3-1600 @ 1600MHz CL9 1.55v, Gigabyte GTX760 OC 4GB(1170/1700), Corsair 750HX
Detailed PC Specs: http://my.afterdawn.com/estuansis/blog_entry.cfm/11388


harvardguy (Member)
14. July 2012 @ 00:38
Wow, you guys really know your hardware, Stevo, Blaze, Sam, Russ, Kevin, Jeff! That was a fascinating couple of pages on FX vs i7/i5 and disabling one core per module for reduced heat, higher overclocking, and jazzing up AMD performance - interesting!

Way beyond me. I want AMD to be successful, and I've just bought two of their cards, the 7950s, but I won't be picking up anything other than Intel on the computer side for a while.

Even then, with Blaze's encouragement, I'll have the challenge of removing the IHS from the Ivy Bridge, because it is not super complicated, just tedious to get rid of the "insulating" thermal paste that Intel put on to give them a chance to sell their higher end chips.

-------------------------------------------------

Speaking of insulating thermal paste, that reminds me of Arctic Silver 5 vs the stock Intel HSF cooler paste, from something this past weekend.

Hey - I have a good question for all of you geniuses:

I went up to visit Miles, the major game company modeler and animator. His i7 940 was running hot, and for the first time I heard a fan cycling on at 100% for about 30 seconds, every 90 seconds or so. Whoooosh! It was loud!

I stuck my head behind the case - the Lian Li is not a gaming case and has only one 120mm intake running at about 800 rpm from what I could tell. So the loud noise was from the back exhaust. SpeedFan said it was cycling from 1500 to 2600. I have a Kaze 3000 rpm - extra thick blades - and that thing is LOUD - actually I have two of them, one in a Kama Bay, another in a side intake. So the 2600 rpm 120mm in the back was LOUD! Not as loud as the Kaze, but imagine sitting there and having that blower come on every 90 seconds. Irritating.

I asked Miles if he had cleaned the intake filter. He said yes - and indeed it was clean. He said he looked inside the case, and saw a digital readout showing 95 degrees. I thought that was pretty hot, lol. We downloaded some tools. Core Temp showed 75, 71, 72, 74, and since my 9450 produces errors above 70, I thought that was super hot.

He gave me permission to clean out his case. I noticed a stock Intel cooler, with the top caked in dust. I took off the cooler, took it outside, blew it out almost to the point of fainting, and now the cooler was clean. The local RadioShack had Arctic Silver 5, the cleaning solutions, and a can of compressed air. I buttoned up the machine, the digital readout showed 84, down 11 degrees, corresponding to the CPU reading in the BIOS and in SpeedFan, and I forecast - "Watch your cores - they will be way down."

The new core temps were 99, 98, 100, 99. !!!!!!!!!!!!!!!!!

I looked around quickly for a fire extinguisher.

The computer no longer cycled the hot fan noise. It appeared to be running okay. I ran Prime95 all night and it didn't crash. When running Prime95 it speed-stepped down to 1.5GHz, from the rated 2.93GHz. With no Prime95 running, at idle, it showed 2.2GHz, then 2.5, then 1.7, jumping around per background tasks.

In all cases the core temps were through the roof. What happened?

He is now telling me that his computer is shutting off suddenly from time to time. I can imagine why.

His company will be sending him a new computer. He wants to know whether I should turn the i7 into his new server. I picked up a freezer pro cooler for him - a simple pop on to replace the stock intel cooler - for $32 from newegg.

Kama bay is no longer available - does anybody know a replacement product?

NOW, KEVIN WILL ASK - WHY ARE YOU SHOUTING RICH? I AM SHOUTING BECAUSE - WHAT THE HECK HAPPENED TO DRIVE THOSE CORE TEMPS THROUGH THE ROOF?

I have a wild theory. What do you guys think happened?

---------------------------------

SSDs
Originally posted by blazorthon:
I'd go for at least 100GB. Of current SSDs, the Vertex 4 is generally one of the better drives for all-around work and although I may have said this before, I'd get it. The 128GB model (one that I've used) should have a good amount of capacity and high performance, although the 256GB model would be better if you're willing to drop in enough money for it. Samsung's 830 is also good, although it has far inferior write performance. The 830 is better for laptops and such IMO because it uses almost zero power, so it saves a few minutes of battery time compared to other SSDs and even more compared to hard drives. I'd say that except for Intel's drives, avoid SandForce like the plague, and even with Intel's, well, non-SandForce drives don't rely on highly compressible data to get full performance and can be much more uniform and high performance with all data rather than just some data.
Well, Kevin took that information from Blaze and is checking things out - that 256GB is really down in price at $219. Hmmm. Would I really pick up better crysis performance?

Congratulations Kevin on treating yourself to the SSD, after a good month of earnings!

I am printing out (PaperPort) most of this discussion between you guys, Blaze, Sam, Kevin and Jeff on SSDs. I guess there is a bunch to know, to prevent a dead drive as Blaze puts it. Sam - yours has been running well for almost 2 years. I wonder if the 256GB size is needed. If I want it mainly for Crysis, with W7 and the game, and the other version - maybe all the Crysis games - would 110GB be big enough? It would, right? Would the SSD help my performance? I will turn off texture streaming, as Jeff says to do, to prevent stuttering. I guess I don't mind a pause in the action from time to time. What real benefit would I get from an SSD?

--------------------------

HD 7950
On some other news I have been testing these 7950s like mad. I think it is fortunate that I fit into the category that Jeff talked about, those people who are not sensitive to the difference between 30 and 60 fps, never mind 90. That is a good thing because I am getting about 30 fps on Max Payne 3, and yet it feels smooth to me. Max Payne is loading up the 3 gigs of vram, to about 1.75 at times. I have not yet done crossfire.

For now I came off the overclock to just standard clocks. I have been gaming for 30 hours at a stretch, several times.

SPEC OPS: THE LINE
I went through Spec Ops: The Line 3 times, completing fubar. It is a 3rd person shooter like Max Payne 3, but very challenging, with a great compelling story. I thought it was junk at first, then completely changed my mind. The game ends up almost haunting you, like Metro 2033. The main character, Captain Walker, is an actor - not your normal pretty boy. He plays many villains. One review said he was the perfect choice. He is the hero, maybe anti-hero is a better description, in this story. Again, highly recommended single player campaign. If you unlock fubar from suicide, you will be in for a challenge. Doable, but tough. Just don't forget how to sprint, like I did - you hit space while running. Or you won't be able to outrun the helicopter in the 20 second little scene that I had to do on easy to get through using the basic loping slow run style, without the sprint, which I had forgotten about, lol. Just for that fubar was not unlocked, and I played through on suicide one more time. Unlike Operation Flashpoint, this is more Max Payne - virtually non-stop action.


MAX PAYNE 3
I finished fubar, then went back to Payne where I had left him at the boat yard. Right away I noticed how detailed the Payne textures were. The docks were gorgeous! That's the 35 gigs install size at work. That game is a console port, but revamped for PC and it rocks! The favelas were very detailed - all the various parts of the slums highly textured with detailed artwork - very realistic. Awesome game.

The shootdodge Max Payne dive is not always the best move. Sometimes some good counter strike crouch and head shot before they fire, works better. I hit the finale with only two pain pills, and was in the middle of 30 attackers. No way could I make that work even with the two sets of pain pills nearby. You can't move while prone. After 15 fails, I gave up and restarted the entire level. I was playing on Hard. An hour or maybe two later, I had allowed almost no loss of life, and I accumulated the maximum 9 pain pills by the time I hit the finale.

So I started with a Max Payne dive on the floor behind my position, to clear some space between me and the baggage racks that were my defense, so I could get headshots on guys 20 or 30 feet behind, all while lying on my back. I watched my health, popped a pill, and then last man standing kicked in several times - in lieu of dying, it uses up a pill if you kill the guy who just killed you. All the guys dead, the final animation kicked in, and whew! I cleared that madhouse of a checkpoint - only maybe 3 attempts needed - a couple of times I didn't monitor my health closely enough.

Remember - when you are on that last main chapter, you will need lots of pain pills at the end. And keep an assault rifle that holds 30 rounds, not the 20 round rifles - you want to cut down on reload time.

Quad 9450 and 7950 Max Payne and Spec Ops performance
The Asus mobo lets me store OC profiles, so I can go back to the overclock in 20 seconds. I am running GPU fan speeds at 100%. The HIS IceQ that I first got was almost too big - it will fit in only slot 3, not slot 6, as it hits the bottom of the case - it is virtually 3 slots wide. In slot 3 it hits the hard drive cage, but fits. It is a work of art, and almost cannot get hot. I took that out after two weeks, and the PowerColor that arrived a few days ago, mounted in slot 6, seems to be running okay, but a couple of days ago it did freeze up after about 4 hours - so I now run the fans at 100%, about 3500 rpm for the two of them - it is the PowerColor dual-fan, slightly overclocked model. I am running it at around 900MHz, just below the 7970 clock of 925. It is not yet stressed. Similarly I dropped the memory clock way down, close to standard.

My CPU load shows on MSI Afterburner at about 70% average on all cores. I get CPU measurements by starting MSI Afterburner, then starting RivaTuner, and the CPU core temps etc. from RivaTuner display on Afterburner. I had to add core load on core 0, so I reloaded the 8800GTX, the Nvidia driver, fixed RivaTuner, took out the driver, and put the 7950 back. I could post the picture - but my OSD shows me the time of day, then it shows me GPU load, clock, temp, memory clock, VRAM usage, and for the CPU side, I put the CPU clock for all four cores, so I can look past that, currently 2660, and see the CPU load per core all stacked on top of each other, and as I say, all at around 70% for Max Payne 3.

Like I say, 70% cpu load, roughly the same on all cores, and 70% gpu load, running max payne, with 1.75 gigs of vram usage. FPS shows at about 30. Maybe that is incorrect, as the game is very smooth with no perceptible lag at any time.

I noticed yesterday that there was a bit of glistening, pixel flashing, not in-game, but in the main game menu. I hadn't noticed that with the HIS IceQ card, so I will reload that tomorrow and see if that card does the same. I hope so, because I don't want to return the PowerColor - taking a $60 restocking hit - but if I end up doing that, I will probably get the Sapphire. Some of the Sapphire reviews showed a DOA card, but the 5 PowerColor reviews were uniformly 5 eggs. If I keep the PowerColor, I will be ready for Crossfire in about 2 or 3 days. Then it's on to BF3. And then I'll probably turn the overclocking back on, for the CPU and the 7950s, depending on performance and load.

Rich
AfterDawn Addict (Estuansis)
15 product reviews
14. July 2012 @ 00:59
Haha right on Rich! Just remember to enable Crossfire in the graphics control panel and make sure the Crossfire bridge is on tight.

Also remember to OC one thing at a time. Take some time with your CPU first, and make sure your settings are rock solid stable and your temps are agreeable before OCing video cards. Again 3.4-ish would be a worthy goal that shouldn't be too hard to obtain or kill your temps.

Also keep an eye on video card temps. Single slot on an overclocked card, even dual fan, isn't entirely a good idea. It might be better in the long run to downclock the second card to match the stock one.

For the record I use Sapphire Trixx for video OCing. Least BS, no crapware, very resource light, very powerful. AMD cards can be buggy with certain programs and some aren't even supported. Sapphire Trixx is the only exception I've found that isn't poorly programmed.

Many have been praising MSI AfterBurner but it's woefully out of date and requires driver-level support, so hasn't worked with AMD cards for months and months. AfterBurner is also loaded with bloatware and requires jumping through hoops to actually OC. Trixx is hardware based.

Also, when OCing video cards I'd say raising voltage is a big nono unless you want dead video cards. The coolers on my 6850s are above average and my cards still get uncomfortably hot in FurMark when OC'd above their current settings as shown in my sig.

Likewise you should never really need 100% fan speeds unless playing a game. Your cards will downclock automatically when in 2D mode so you should be safe to leave the fan at a much lower speed. Maybe set up a more aggressive fan profile that still allows lower idle speeds? Trixx comes to mind here. I set mine up so the idle fan speeds are the same as stock but MUCH more aggressive with temps. My cards basically run as cool as they did stock, but with a very respectable OC.



AMD Phenom II X6 1100T 4GHz(20 x 200) 1.5v 3000NB 2000HT, Corsair Hydro H110 w/ 4 x 140mm 1500RPM fans Push/Pull, Gigabyte GA-990FXA-UD5, 8GB(2 x 4GB) G.Skill RipJaws DDR3-1600 @ 1600MHz CL9 1.55v, Gigabyte GTX760 OC 4GB(1170/1700), Corsair 750HX
Detailed PC Specs: http://my.afterdawn.com/estuansis/blog_entry.cfm/11388


harvardguy (Member)
14. July 2012 @ 01:18
Thanks Jeff,

I'm going to print that out and study it carefully. I only use that rig for gaming. So when you say don't run 100% fans unless gaming - I AM GAMING! LOL Otherwise I'm on this computer, which is a Dell 3GHz P4 with a RAID mirror.

You and Sam are the crossfire men - maybe Blaze too - so thanks for the Trixx tip, I will have to download it and check it out.

Originally posted by Jeff:
Also keep an eye on video card temps. Single slot on an overclocked card, even dual fan, isn't entirely a good idea. It might be better in the long run to downclock the second card to match the stock one.
When you say reduce clocks on the PowerColor - the card that doesn't have the full IceQ cooler, and which will be kind of squeezed in between the case bottom and the HIS IceQ above it - when you are in Crossfire, don't you have to run both cards at the same clocks?

Rich


AfterDawn Addict (Estuansis)
15 product reviews
14. July 2012 @ 02:25
Quote:
When you say reduce clocks on the PowerColor - the card that doesn't have the full IceQ cooler, and which will be kind of squeezed in between the case bottom and the HIS IceQ above it - when you are in Crossfire, don't you have to run both cards at the same clocks?
Sometimes having differently clocked cards can cause stability and crossfire scaling issues, as the clocks aren't always read correctly. It's supposed to default your performance to that of the lowest card but that doesn't always work properly. It's as simple as setting the highest clocked card to the same settings as the lowest clocked one or vice versa. Most OCing utilities will have the option to sync alike cards. Remember to keep an eye on temps because video cards tend to have a "snapping point" where the heat output will overpower the cooler and cause much higher temps from a very small change. Good example being my 6850s which run fine right up to 920MHz on the core, but skyrocket in heat at 940MHz. Have settled on 900MHz for both stability reasons and as the best mix of temps and OC percentage.

Generally you want your cards to stay synced. Again Trixx is good for this and is basically tailor-made for Crossfire users. Other programs can do the same things but normally have some sort of catch.

One of the biggest flaws in many OCing programs is that they force a constant clockspeed and do not allow basic Powerplay(clock throttling) to be used. This causes undue heat, stress and fan noise as video cards normally throttle down to an idle speed when in 2D mode. Trixx is currently the ONLY utility I've found that factors for this correctly, being tailor-made for AMD cards. Most other utilities have an option for setting 3D and 2D profiles, but simply do not allow you to drop your clocks low enough to match stock idle speeds, and do not allow the card to throttle voltage either so you still have extra heat being generated.

Some basic tips for OCing on AMD cards:

1)Disable ULPS
2)Disable ULPS
3)Disable ULPS

ULPS (Ultra Low Power State) generally interferes with video OCing. This is separate from PowerPlay and normally only kicks in during hibernate or sleep. If you do use hibernate or sleep, you might see a small jump in power consumption at the wall, but the overall effects on heat and longevity are largely negligible. Many power users and performance enthusiasts (like myself) do not allow hibernate or sleep modes at all for performance reasons. I personally disabled both features alongside ULPS as they use CPU cycles and hard drive space, and really do not play well with Gigabyte hardware. To each his own :P
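If you'd rather script that tweak than dig through the registry by hand, here's a rough sketch (not from this thread). It assumes the commonly cited EnableUlps DWORD under the display-adapter class key HKLM\SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}; Windows only, run as admin, and back up the registry before trying it:

    # Rough sketch: set EnableUlps = 0 on every display-adapter subkey that
    # already has the value. Key path and value name are the commonly cited
    # ones for AMD Crossfire setups - verify them on your own system first.
    import winreg

    CLASS_KEY = (r"SYSTEM\CurrentControlSet\Control\Class"
                 r"\{4d36e968-e325-11ce-bfc1-08002be10318}")

    def disable_ulps():
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, CLASS_KEY) as cls:
            for i in range(winreg.QueryInfoKey(cls)[0]):   # subkeys 0000, 0001, ...
                name = winreg.EnumKey(cls, i)
                try:
                    sub = winreg.OpenKey(cls, name, 0,
                                         winreg.KEY_READ | winreg.KEY_SET_VALUE)
                except OSError:
                    continue
                with sub:
                    try:
                        winreg.QueryValueEx(sub, "EnableUlps")  # only touch keys that have it
                    except OSError:
                        continue
                    winreg.SetValueEx(sub, "EnableUlps", 0, winreg.REG_DWORD, 0)
                    print(f"EnableUlps set to 0 under subkey {name}")

    if __name__ == "__main__":
        disable_ulps()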

Also, if planning to use OCing software, disable AMD Overdrive in the control panel BEFORE you even install the new software, and no matter what it says do not play with Overdrive at all as long as you intend to use said software. Overdrive tends to argue very badly with OCing software.



AMD Phenom II X6 1100T 4GHz(20 x 200) 1.5v 3000NB 2000HT, Corsair Hydro H110 w/ 4 x 140mm 1500RPM fans Push/Pull, Gigabyte GA-990FXA-UD5, 8GB(2 x 4GB) G.Skill RipJaws DDR3-1600 @ 1600MHz CL9 1.55v, Gigabyte GTX760 OC 4GB(1170/1700), Corsair 750HX
Detailed PC Specs: http://my.afterdawn.com/estuansis/blog_entry.cfm/11388


AfterDawn Addict (sammorris)
4 product reviews
14. July 2012 @ 06:13
Originally posted by Blazorthon:
On that note, Nvidia cards such as the GTX 680 might not have it as bad as the 7970, due to their being able to get more bandwidth out of the same PCIe setup (I'd think the card simply has a more efficient PCIe interface). Well, that's according to tests done by Tom's.

That may be true but Tom's is basically a wholly owned subsidiary of nvidia, so you cannot treat any comments like that as fact.

Techpowerup have released an article on the HD7970 vs GTX680, and even now, PCI express link is completely irrelevant for performance, right through from 1024x768 up to 3240x1920. This does not consider the additional implications of using crossfire/SLI on the interface of course, but if a card runs fine at 4x single, crossfire with 8x per card will no doubt be fine.



Afterdawn Addict // Silent PC enthusiast // PC Build advisor // LANGamer Alias:Ratmanscoop
PC Specs page -- http://my.afterdawn.com/sammorris/blog_entry.cfm/11247
updated 10-Dec-13
Member (Blazorthon)
14. July 2012 @ 09:29
Originally posted by sammorris:
Originally posted by Blazorthon:
On that note, Nvidia cards such as the GTX 680 might not have it as bad as the 7970, due to their being able to get more bandwidth out of the same PCIe setup (I'd think the card simply has a more efficient PCIe interface). Well, that's according to tests done by Tom's.

That may be true but Tom's is basically a wholly owned subsidiary of nvidia, so you cannot treat any comments like that as fact.

Techpowerup have released an article on the HD7970 vs GTX680, and even now, PCI express link is completely irrelevant for performance, right through from 1024x768 up to 3240x1920. This does not consider the additional implications of using crossfire/SLI on the interface of course, but if a card runs fine at 4x single, crossfire with 8x per card will no doubt be fine.
Tom's often gets more flamboyant about Nvidia cards when they win, but to be fair, most of their recommended graphics cards at almost all price points are AMD/ATI cards and have been for a while now. They also had an article showing how the Radeon 7970 GHz Edition with Catalyst 12.7 did beat the GTX 680 with its drivers of the time (and supposedly still does with Nvidia's most recent drivers, but not by much), even in their slightly Nvidia-favoring game selection.


 