|
The Official Graphics Card and PC gaming Thread
|
|
AfterDawn Addict
4 product reviews
|
20. August 2009 @ 20:36 |
|
Point A: Not so; look at the reviews. The i7's idle power draw is just as bad as its load draw.
Point B: Exactly why there are so many i7s in use in that environment.
|
AfterDawn Addict
7 product reviews
|
21. August 2009 @ 00:23 |
|
I was under the impression that the i7s were more electrically efficient? When I was comparing them (Phenom vs. i7), I meant that the i7 would save money on the electricity bill. As marginal as it probably is...
To delete, or not to delete. THAT is the question!
This message has been edited since posting. Last time this message was edited on 21. August 2009 @ 00:25
|
Member
|
21. August 2009 @ 07:11 |
|
I wasn't saying it was a bad voltage. I was saying it was concerning that my CPU voltage had been increased over the recommended amount without my doing anything.
I wonder how efficient my CPU is in comparison with my parents' 90nm P4...
I could put something funny here but I cant be arsed. Now GO AWAY!
|
AfterDawn Addict
4 product reviews
|
21. August 2009 @ 08:01 |
|
Ooh no no, not at all. The i7 uses much more power than a Phenom II, probably more than an original Phenom as well. The i5s, however, will be very efficient.
keith: Lol, most 90nm P4s were 90-95W TDP, so that's up to 95W for a single core; even the i7s are only around 38-40W per core! Intel's low-energy quads are only 16W per core.
|
Member
|
21. August 2009 @ 08:55 |
|
I'm forever indebted to Omega for mentioning XBMC a while back (I'm not sure it was this thread...). I'm currently working my way through five series (what's the plural of series?) of Peep Show in lovely upscaled HD.
I should get used to watching DVDs on the PC; it's evidently not doing any gaming for a while :-( It crashed on FUEL the other day...
This message has been edited since posting. Last time this message was edited on 21. August 2009 @ 08:56
|
AfterDawn Addict
4 product reviews
|
21. August 2009 @ 09:03 |
|
So rather than fixing the problem you're just not going to use it? :S
|
Member
|
21. August 2009 @ 09:04 |
|
I can't afford to fix it :'(
|
AfterDawn Addict
4 product reviews
|
21. August 2009 @ 09:05 |
|
What needs replacing, did you decide? (Apart from the obvious)
|
Member
|
21. August 2009 @ 09:17 |
|
Just my PSU; nothing else seems damaged. What I did was keep Everest open while playing Crysis in a window, and my 5V rail dropped even further just before the PC crashed.
So yeah, it's all good fun. Shouldn't be too long before I can afford a nice Corsair unit, mind. I've somehow, accidentally and without realising, built up a small IT call-out business. Mind you, if our lass* wants to go anywhere (which she's been hinting at) I may be further away than I think.
EDIT: *That's Yorkshire speak for girlfriend. I should probably stop typing in Yorkshire for the benefit of all.
This message has been edited since posting. Last time this message was edited on 21. August 2009 @ 09:20
|
AfterDawn Addict
4 product reviews
|
21. August 2009 @ 10:23 |
|
I know what a lass is :P - besides, I spend more than half the year in Yorkshire anyway. Since a system like that will run fine off a 400W CX, it's only £40 to fix the issue.
|
Member
|
21. August 2009 @ 11:50 |
|
The CX is what I was looking at. My little summaries of Yorkshire-ness are more for the benefit of the Americans.
|
AfterDawn Addict
7 product reviews
|
21. August 2009 @ 13:19 |
|
LOL! I wonder how I came to that conclusion. Thanks for clearing that up. I've run some searches, and it WOULD appear that the i7s boast some higher wattages.
Sorry about your PSU trouble keith. Hope she's up and running soon :(
|
Member
|
21. August 2009 @ 15:13 |
|
Originally posted by omegaman7: Sorry about your PSU trouble keith. Hope she's up and running soon :(
Don't be. I brought it upon myself. I had several opportunities to buy a proper one, and I had been warned on multiple occasions, both verbally and by the same unit blowing up for other people.
That sig's definitely the best so far Omega.
I couldn't resist...
This message has been edited since posting. Last time this message was edited on 21. August 2009 @ 15:15
|
AfterDawn Addict
7 product reviews
|
21. August 2009 @ 16:21 |
|
Thanks buddy :) I thought I would go with a different color; I'm partial to the teal side of the spectrum. The way the lighting works is awesome as well. More than likely, I'll run this one for a while :D
You should have seen it when it was 2500 x 1000 pixels ;)
This message has been edited since posting. Last time this message was edited on 21. August 2009 @ 16:27
|
harvrdguy
Senior Member
|
22. August 2009 @ 08:25 |
|
Sorry guys, I got carried away on this post again. Another freakin ESSAY, as Shaff likes to say - and the kind Omega used to love complaining about, lol. Just when I think I have the new build all planned out, Sam throws a monkey wrench into everything, talking about Raid 5 and hardware controllers, lol. Now I might have to discard my beloved DFI Lanparty H8 for the new build.
Anyway, I've been gone almost a week.
By the way, speaking about you, Sam, you were right. Again :P The DFI does not offer 16x/16x/16x - it's either 16x/16x, or 16x/8x/8x - my mistake.
Originally posted by jonathon: Talking of Rich, any reason you downloaded and re-uploaded my image to your Imageshack, what was wrong with copying the url? I ought to sue your ass
Hahahaha. Keith, I was thinking the same thing (that you might sue me - you fiend). Since I was plagiarizing, I decided to reprint your image at 50% size - that's why I downloaded and re-uploaded, lol. Then in court I could plead whatever Perez Hilton pleads to keep the paparazzi off his butt!
Originally posted by keith's sig: Dear God please grant me patience, I need it RIGHT NOW!
Hahaha - I just noticed your slogan - good one!!
And I see that Omega has a newer sig - with the teal colors. Looking good Kevin!
I'll tell you guys where I've been:
I am re-installing all my software after an intermittent IDE cable failure. Luckily I had bought a new round-type IDE cable for the new build and had it handy. Unluckily, I didn't try that first before concluding that I had a disk failure, duhhh. However, wiping the drive appears to have gotten rid of several XP corruptions - I now have IE7 with tabs (it would never install before), and I have instant LAN networking without the annoying 20-second recognition pauses when copying or moving files between my gaming computer and my everyday business machine.
Anyway, every time I reload a game, I start the game to see if it works, and also set my keyboard control settings, like Q for changing guns, etc. So I recently re-installed Battlefield 2, and stayed up all night playing it, lol. I haven't played it in a long time!
Originally posted by Rich discovers the engineer class on BF2: I was just playing single player, the map I know best, Gulf of Oman. On the hardest setting they stack the teams, 7 on my team, the marines, 9 on the Mec team. I was trying all the weapon classes, and happened to play the engineer. I discovered land mines, lol. Although I hate the shottie, I discovered that the pistol is not too bad, even at 100 yards, and occasionally the shottie will take out 2 mecs at the same time, which was fun.
But the land mines turned out to be critical if I wanted to win. Finally what I ended up doing is spreading out the 5 land mines, and then grabbing anybody's kit in the 20 seconds before the kits disappeared, Mec or Marine, in order to pick up a decent smg. Often coming back a second time after re-spawn, some of the landmines would still be there, and I could mine the main road in.
The land mines, spread out on the road down to the second flag, and if I had enough, spread out on the main road, kept the Mec tanks from crossing the railroad tracks. That left Mec infiltration confined mostly to foot soldiers. I think an engineer can disable a mine, and I saw a tank once shoot a mine to blow it up, but pretty much the tanks began to wise up and stay away from the mines. That turned the battle for us countless times. I'll have to remember that when I eventually go back online with BF2.
RAID, RAID CONTROLLERS, AND RAID 5 VERSUS RAID 0
Sam, I am ignorant about Raid for sure. I am trying to avoid buying too many high-priced hard drives for the new build. Help me understand the following:
Originally posted by Sam: Avoid RAID0 at all costs, it might seem a good idea at the time, but it has no effect on loading times, and is downright risky.
I know that Anandtech wrote a review about Raid 0 saying it did not help, but I read another review that said that Anandtech used questionable assumptions, and that Raid 0 DID help for gaming.
As I understand it, Raid 0 stripes the two drives. When you read your data, you take a slight access-time hit as the Raid controller figures out where the stripes are. But then both drives send data - one stripe from the first drive, the second 64KB stripe from the second drive, the third from the first drive, and so on. That's two heads and two disk controllers shoving data through the PCI Express bus, which I have read will significantly increase transfer rates. (But as I said, one guy said to make sure the Raid controller is not a PCI-E 1x controller, which will limit transfers to a max of 256MB/sec. Do you see any problem with that - or is 256MB/sec fast enough?)
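To picture the striping just described, here's a toy sketch - the 64KB stripe size is from the example above, and the function is mine for illustration, not any real controller's API:

```python
STRIPE = 64 * 1024  # 64 KiB stripes, as in the example above
DISKS = 2

def locate(offset):
    """Map a logical byte offset to (disk, byte offset on that disk) in RAID 0."""
    stripe_no = offset // STRIPE
    disk = stripe_no % DISKS          # stripes alternate between the disks
    disk_stripe = stripe_no // DISKS  # which stripe on that disk
    return disk, disk_stripe * STRIPE + offset % STRIPE

# Consecutive 64 KiB stripes land on alternating disks, so a long
# sequential read streams from both heads at once.
print(locate(0))            # (0, 0)
print(locate(64 * 1024))    # (1, 0)
print(locate(128 * 1024))   # (0, 65536)
```

That alternation is the whole trick: sequential transfers scale with the number of drives, while the access-time hit comes from coordinating the two of them.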
Here's another review I just read: Velociraptor in Raid 0 written last year.
His test system used an expensive Areca ARC-1231ML Raid controller costing around $600 - yikes!! Here are the results he posted:
..............
I have to admit that the results from the top part, gaming, surprised me - no real advantage to Raid 0. But the bottom results, application loading, show significant improvements - faster level loading, unless I am missing something.
But the AR controller this guy was using is pricey as hell, like I say, around $600. And apparently choice of controllers matters - the other controller with an Nvidia chipset, doesn't perform quite as well.
I wonder how the onboard DFI controller would perform? I looked up the specs on the DFI, and the onboard JMicron JMB363 controller is a PCI-E one-lane controller - therefore max transfer speed 256MB/sec - do you see any problem with that, Sam? However, being a chip on the motherboard - versus an actual controller card - I assume it steals cycles from the CPU. Where's that quote of yours?
Ah-hah!! I found your original quote:
Originally posted by Sam: The downside is that RAID is yet another demand on your system's CPU in software. The performance benefits overall are astronomical if you upgrade to a genuine hardware RAID controller, in effect a miniature PC on a PCI express card.
So the cheap on-board Jmicron controller on the DFI board steals cycles from the CPU - ah hah, I knew it was too good to be true! So I looked up raid controllers on newegg. In fact, they sell a really cheap one that is based on the Jmicron 363 controller, the Syba Sy-JM363 sata controller, for only $21.99.
But for a true hardware controller - I had to ratchet up to the $150 range on Newegg. That's a lot more, but still way less than $600!
RAID HARDWARE CONTROLLER THAT WILL FIT PCI-E 1X SLOT
Newegg has a "true hardware" pci-e 1 lane raid controller for $152.99, the Highpoint Rocket Raid 3120. It only supports two sata drives though, for Raid 0 or Raid 1. One reviewer said: Originally posted by reviewer of $152.99 true hardware Highpoint Rocket Raid 3120: Additional thoughts: this card combined with a pair of raptors made for an exquisitely fast system, load times were phenomenal, and I was seeing about 98% of the theoretical bandwidth from both drives in raid 0...
So my question is: why are you saying that Raid 0 will have no effect on loading time?
Please keep in mind that I care only about disk READ throughput for gaming. I don't care about redundancy (to protect myself from having to re-install my games again like I am doing now, I will clone the whole raid array to a large sata or ide from time to time.)
It looks like Raid 5 is very similar to Raid 0. Here is what Wikipedia says:
Quote: The read performance of RAID 5 is almost as good as RAID 0 for the same number of disks. Except for the parity blocks, the distribution of data over the drives follows the same pattern as RAID 0. The reason RAID 5 is slightly slower is that the disks must skip over the parity blocks.
So, I'm just interested in READs - it looks like Raid 0 is good. But if you absolutely hate Raid 0, then do you like Raid 5 in a 3-disk format?
Sam, on a 3-disk Raid 5, what data size does one end up with - two times the smallest size, is it? And are there certain disks you shouldn't use? One guy on newegg said "do not use desktop drives, like the seagate barracuda series - they weren't meant for Raid and you'll risk losing your data." Why would a drive not be meant for Raid? If I were to use three VelociRaptors at 300 gigs each in Raid 5, how much data storage would I have - 600 gigs?
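For what it's worth, the standard formula is that Raid 5 usable capacity is (number of disks - 1) times the smallest disk, which a one-liner makes concrete:

```python
def raid5_capacity(sizes_gb):
    """Usable RAID 5 space: one disk's worth of capacity goes to distributed parity."""
    return (len(sizes_gb) - 1) * min(sizes_gb)

print(raid5_capacity([300, 300, 300]))  # three 300GB VelociRaptors -> 600
print(raid5_capacity([1500] * 4))       # four 1.5TB drives -> 4500
```

So yes, three 300-gig VelociRaptors in Raid 5 give 600 gigs usable.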
I know you get redundancy with Raid 5. But if I get those pricey VelociRaptors I'd rather keep it down to only 3, that's why I like Raid 0 with only 2 - like the guy in the above said, "a pair of raptors."
But let me ask you this, Sam. If you do have 4 drives in Raid 5, let's say all of them 300 gigs, when you READ I assume you are reading from all 4 drives. So even if you are having to skip parity stripes, that's still 4 heads shoving data down the line - one would think you could get throughputs of 300 to 400MB/sec, would you agree?
Do 4 drives in a Raid 5, give you higher sequential read speeds of large amounts of data, like texture loads, or game level loads, than two of the same drives in a Raid 0?
If your answer is yes, can you link me somewhere to show me? If your answer is yes, then I'm beginning to understand why you like Raid 5. Do you still like Raid 5 with the minimum 3 drives?
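As a very rough model (real controllers, stripe sizes, and drives vary a lot), large sequential reads on Raid 0 or Raid 5 scale with the number of drives until the host slot caps them; the per-drive numbers here are purely illustrative:

```python
def seq_read_estimate(drives, per_drive_mb_s, slot_cap_mb_s=256):
    """Naive sequential-read estimate: every drive streams in parallel,
    capped by the host slot (e.g. a PCI-E 1x Raid card)."""
    return min(drives * per_drive_mb_s, slot_cap_mb_s)

print(seq_read_estimate(2, 100))  # two-drive Raid 0 of ~100MB/sec drives -> 200
print(seq_read_estimate(4, 100))  # four-drive array -> capped at 256 by the slot
```

Under this model a 4-drive array only beats a 2-drive one until the slot bandwidth becomes the bottleneck.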
So if three VelociRaptors at 300 gigs each, gives me 600 gigs of data and programs, plus redundancy - then I wouldn't have to worry about cloning from time to time - I would have the built-in redundancy and the operating system would tell me when one of my disks had died. All for only $720 worth of hard drives ($1000 for four of them) lol!
Also, from the same wikipedia article, I found:
Quote: Software RAID has advantages and disadvantages compared to hardware RAID. The software must run on a host server attached to storage, and server's processor must dedicate processing time to run the RAID software. This is negligible for RAID 0 and RAID 1.
That perhaps is another argument for using Raid 0 and forgetting about a hardware Raid controller.
One more thing about a separate Raid controller - hardware or software based - if I get one, I have to ditch my original idea of using a DFI board. In looking at the DFI board layout - with three graphics boards in slots 2, 4, and 6 - there are NO slots available for a Raid controller. (Note: this is also true of the Biostar Tpower X58 board.)
So with the DFI setup, I cannot install any kind of Raid controller. Perhaps the DFI offers onboard Raid 5 support - I read that an earlier version did. But even if it does, maybe an add-on board handles Raid 5 much better.
So all of a sudden I am thinking of moving away from the DFI, which I have fallen in love with, to a board with a hot slot at position 7, since my case has 8 expansion slots and can support a dual-slot graphics card in slot 7. In fact, the very reason I traded for this case, giving up the Antec 900 I had bought 30 days earlier from Microcenter on sale, was that 8th expansion slot, with the Asus Rampage Extreme motherboard in mind.
Now, because of your discussion about Raid controllers, Sam, I'm back to that Asus motherboard. The Asus x58 Rampage II Extreme, using slots 3,5 and 7 as the hot slots (offering 16x/8x/8x when three graphics cards are installed just like the DFI) has slot 2 available as a pci-e x1 slot. (Sam, does that steal 1 lane from the first graphics card, dropping it to 15x?)
That would allow me to use slot 2 on the Asus motherboard for a Raid controller card - but since it's a one-lane slot, I would be limited to 256MB/sec, the max bandwidth of a single PCI-E lane. You mentioned that Crysis needs 150MB/sec. Since there is a bottleneck at the 1x PCI-E slot of 256MB/sec, if I were to use high-performance Raid 5 drives like the VelociRaptors, I don't know that there would be a point in using more than three of them, which should max out the slot limit of 256MB/sec. Does that make sense to anybody? Any way you break it down, 256MB/sec is decent sequential read throughput for gaming, right?
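For reference, that per-lane ceiling comes straight from the PCI-E 1.x link math - 2.5 gigatransfers/sec per lane with 8b/10b encoding works out to 250MB/sec each direction (the 256 figure quoted around the web is the same ceiling in round binary numbers):

```python
# PCI-E 1.x lane: 2.5 GT/s, 8b/10b encoding (8 data bits per 10 bits on the wire)
transfers_per_sec = 2.5e9
data_fraction = 8 / 10
bytes_per_sec = transfers_per_sec * data_fraction / 8  # 8 bits per byte
print(bytes_per_sec / 1e6)  # -> 250.0 MB/sec per lane, each direction
```

So a 1x card really is capped at roughly the throughput of three fast drives, as estimated above.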
But here's something interesting - THERE ARE NO RAID 5 HARDWARE CONTROLLERS THAT WILL FIT A 1X SLOT. The only Raid controllers I have found with more than 2 ports that will fit a PCI-E 1x slot all lack their own onboard I/O processors. They are not hardware Raid controllers; they use CPU cycles. How much? One guy says 4% CPU overhead. Hmmmm. Maybe 4% can be lived with on a highly overclocked 4GHz 920 D0 stepping. After all, there are 4 cores hyperthreading up to 8 threads - there should be some headroom for parity calculations. (Just not when running GTA4.)
BEST SOFTWARE RAID 5 CONTROLLER FOR PCI-E 1X SLOT
For example, the Highpoint Rocket Raid 2640 Raid controller, with 4 ports, at newegg for $129.99, is in the 2000 family of controllers from Highpoint, which do not have their own onboard I/O controllers, unlike the 3000 family, which are true hardware Raid controllers. But here is what one newegg owner said about his software-based 2640 Raid controller:
Quote: Other Thoughts: Whoever said this card doesn't work on Vista x64 is wrong. It works fine. Also, for those complaining that certain drives don't work with RAID -- here's the problem...NOT ALL DRIVES ARE MEANT TO BE USED IN A RAID CONFIGURATION! If drives are labled as "Desktop" then they are not meant to be used in a RAID, and will most likely die quickly. A good example of this are the Seagate Barracuda SATA drives -- totally not meant for RAID. Even though they are more expensive, you are better off going with Enterprise level SATA or even SAS drives if you are going to use them with a RAID card. Do yourself a favor and do not use cheap desktop SATA drives like the Barracudas...chances are they will not perform well and become degraded after a few weeks. They are not meant to be used in that kind of configuration, and you will risk losing your data. This card was very easy to install -- I have 3 1TB Seagate SAS drives in a RAID5. I get around 220 MB/sec and that's good enough for me :).
Oh, that's the guy who was saying you have to have expensive drives if you use Raid - and the whole name of Raid means "Redundant Array of Inexpensive Disks". The bastard! LOL
Of course "inexpensive" is a relative term, and maybe he considers the VelociRaptors at $239 each fairly inexpensive compared to SAS drives of the same capacity at about $339. He is using the Raid 5 on three 1-terabyte SAS drives - I was thinking 10,000 rpm but I looked them up - the SAS seagates are still only 7200 rpm. So this guy is getting 220MB/sec with three 7200 rpm drives - damn good - close to the max of 256MB/sec.
So if that is true (Seagate SAS drives are 7200rpm from what I see - unless I'm confused again) then shouldn't I certainly be able to max out at 256MB/sec with three VelociRaptors? That would give me 600 gigs of data and programs, and no worry about cloning - I would have redundancy on top of it.
Or what if I just used four of the regular 750 gig sata Seagate Barracudas at the same 7200 rpm - I should also be able to max out that way, and I would have 2250 gigs of storage plus redundancy. Four times $80 is $320, $400 less than the cost of three VelociRaptors, at about $720.
He's saying "no no, you can't use those desktop 7200 rpm seagate barracuda drives" but is he right, or is that just his opinion? I looked up 1 terabyte sas drives, and as I mentioned, seagate has them for about $220, but they are still 7200 rpm, so what does the sas interface give you at the same rpm - nothing that I can see.
Are they built to be more heavy duty? They say you can run them 24/7 - but so what - if you run Raid 5 you can keep going when one breaks - your applications won't know, and your operating system says "Hey, replace such and such drive!" Is there a big MTBF difference between the regular sata seagates at $139, and the pricier sas seagates at $220 which newegg doesn't carry? Probably Newegg doesn't carry them, because they really don't represent that much added quality for the extra price.
Here's a guy who bought that same Highpoint Rocket Raid 2640 Raid controller, with 4 ports, at newegg for $129.99, and instead of three sas drives, he set up two Raid 0 arrays, with one pair of 15000 rpm sas drives, and one pair of 7200 rpm sata drives. Interesting concept!
Quote: Pros: This card went right in and worked perfectly. The drivers for Vista 64 bit work fine, and I was able to setup 2x Fujitsu 15k rpm drives in RAID 0 to boot from and hold the OS, as well as 2x 7200rpm SATA drives in a second RAID 0 for data. The management software loaded without issue in less than 5 minutes. HD Tach shows a sustained data read of over 164MB on the SAS drives, and 80-90Mb/sec on the SATA. Both use less than 4% cpu.
Cons: If I wanted to go to 3 or 4 SAS drives in RAID 0, then the 1x PCE-E slot would max out at its limit of 250Mb/sec. If you want to do that, there is a nearly identical 4x card.
As with any RAID controller, boot time is a bit longer due to drive/stripe detection, but 5-10 seconds in BIOS is more than made up for once the OS starts loading.
Other Thoughts: I only had a 1x slot free or I would have opted for the 4x card. For the combined price of the SAS drives and this controller you would be hard pressed to match this level of performance. Of course RAID 0 is not for critical apps, but as a gaming/video editing box that has regular backups this is a very solid, fast way to go.
This card is replacing the only other 1x SAS RAID controller I could find here. That card was very problematic when booting. It is SUCH a relief to have this card which works every time.
Note that for SATA drives in a 2 disk RAID 0 array you won't see a lot of improvement over most MB implementations. This card needs more SATA drives or faster drives (such as SAS) to really shine.
Hmmm. That is very interesting. Helluva fast 15000 rpm drives just for the operating system. Newegg has SAS 15,000 rpm 73gig Fujitsu drives for only $139. I have to ask again - if I have three operating systems, and I give each one a 25 gig partition, can I take two Fujitsus like that, and create a Raid 0? Will the partitions prevent the computer from building the Raid? Can you only Raid single-partition physical drives?
Or am I saying it backward? Do you start out and create the Raid, and then partition that volume that the operating system sees as one drive? That's the way you do it, right? In fact, I think that's what I did after I created my Raid 0 with the two big seagates that I have. I added the partitions later. The partitions will be striped across the two Raid 0 physical drives constituting one Raid volume. I will have no redundancy, but cloning the whole thing to a slightly larger single sata will preserve the whole array, or disk.
Is there a point to what he did? If you have a lot of memory - let's say at least 6 gigabytes - do you still get a lot of operating system disk access as you play a game?
Notice that he says that for just two drives in raid 0, you might as well just use the motherboard raid implementation. In that case, if that is true, one would not even need this controller card. The guy who had the $152 hardware based controller card for two VelociRaptors might have noticed the same 98% of total throughput, just using the motherboard Raid controller without forking out the additional $152.
Anyway, this guy is using 4 drives, and with his 4 drives, he put together two separate Raid 0 arrays, using this software-based raid controller, burdening the cpu only 4% according to his HD tach results. I know you don't like Raid 0 Sam, so you probably don't like what he did. Any thoughts?
Neither Highpoint nor anybody else that I can find makes a pci-e 1x hardware Raid controller that handles more than 2 ports, so this $129 controller card is probably the best bet for 3 or 4 drives. I read on another forum that a gamer's DFI motherboard had built-in support for Raid 5 - I wonder if that built-in controller, burdening the cpu, worked as well as this controller card.
I don't know if the new DFI motherboards support Raid 5. I'll have to check. If they do, maybe I'm back to DFI after all, since nobody's hardware controller supporting Raid 5 will fit in a 1x slot - and that's all I've seen available - and only on the Rampage Extreme board - if the i7 system is going to handle 3 graphics cards. Maybe there are some other mobos that I'm not aware of.
Also, if the DFI onboard controller supports Raid 5, what speed data throughput will it support - does it set aside one lane of the pci-e - or is the data coming in another way - what maximum data throughput will an on-board controller supporting Raid 5 allow? Is it a three-drive Raid 5, a four-drive Raid 5, or either?
I am not too impressed with his disk throughput on his data - where all his games are stored. You said, Sam, that Crysis really needs 150MB/sec data transfer, as I recall. He is only showing 80-90. But what size 7200 drives did he use?
300GB VELOCIRAPTOR VERSUS 1500GB SEAGATE
The reason I ask, what size drives, is that I read something a while back that said you can get some amazing throughput by using HUGE drives.
I remember reading an article about throughput that discounted buying high-rpm drives. The guy said in effect, "Put your data out on the fast edge of a huge 7200 rpm drive, for amazing throughput!"
Let's say in my case, my gaming data is really only about 300 gigs or so. Let's say that I did like the one guy, and had a couple of 15000 rpm very small drives for my operating systems. Then for my data, let's say I did like he did and used 7200 rpm drives, but not just regular sized ones - let's say I used GIANT ones.
Let's talk about only one disk for now:
Let's say that I used a 1.5 terabyte disk, and put my 300 gigs of games on a partition out on the edge of the disk. That means I am on the outer 1/5th of the disk. The average radius of the outer fifth of the disk must be at least 3 or 4 times the radius of the inner fifth, for example, meaning the circumference is also 3 or 4 times greater, so data is spinning by, and can be accessed, at least three times faster than data on the inside fifth of the disk.
From what I read, an inexpensive LARGE 7200 rpm disk, putting a small amount of data on an outside partition like this, could theoretically outperform a much more expensive 10000 rpm drive.
I can get a 1 terabyte 7200 rpm seagate with 32mb cache for $139 at newegg, and for the same $139 price, I just found out tonight from the seagate website, that I can get 1.5 terabytes at Costco. So if I have 300 gigs of data on a 1.5 terabyte disk, I am on the outer 1/5th of the disk drive in regard to surface area.
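A quick check of the geometry, ignoring the hub (platter radius is normalized, not a real measurement): splitting the surface into five equal-area bands, the outer band starts at sqrt(4/5) of the full radius, so data confined to the outer fifth really does sit near the platter's maximum linear speed:

```python
import math

R = 1.0  # platter radius, normalized
# Boundaries of five equal-area bands: r_k = R * sqrt(k/5)
bounds = [R * math.sqrt(k / 5) for k in range(6)]
inner_avg = (bounds[0] + bounds[1]) / 2   # midpoint of the innermost band
outer_avg = (bounds[4] + bounds[5]) / 2   # midpoint of the outermost band
print(round(outer_avg / inner_avg, 2))    # -> 4.24: outer-band data moves ~4x faster
```

That ratio backs up the earlier "3 or 4 times" estimate for outer versus inner data.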
Let's talk hypothetically. I can google this and read some more articles about it. Sam, help me on this.
Let's say the spindle on a disk drive is about an inch wide. Probably it's quite a bit smaller than that, but it's getting late, so let's just use that number for now.
If I have 300 gigs of gaming, that takes up a whole $229 VelociRaptor drive. My data will be spread from a radius of ½ inch to 2.5 inches, for an average radius of 1.5 inches. Because more data is packed on the outside, where more of the surface area is, the average radius of all my data is probably more like 2 inches - weighted toward the higher number. With the circumference equal to 2piR, that gives me 4pi, or 3.14 x 4, or about 12.5 inches, going at 10,000 rpm. So data is streaming past the head at 125,000 inches per minute (which equates to over 100 miles per hour, lol).
On the 3.5-inch 1.5 terabyte Seagate, my data would not take up the whole drive; my 300 gigs takes up only 1/5 of it. The data surface, using my assumption about spindle size, runs from ½ inch to 3.5 inches. But my data, taking up only 1/5th of the drive, probably has an average radius of 3.25 inches - let's say 3 inches for the fun of it. Circumference equals 2piR, in this case 6 x 3.14, or about 19 inches in round numbers. The disk is spinning at 7200 rpm. So data is spinning by at 136,000 inches per minute on the $139 Seagate, which is faster than the 125,000 inches per minute on the $229 VelociRaptor.
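The arithmetic above in one place (the 2-inch and 3-inch average radii are the rough estimates from the text, not measured values):

```python
import math

def linear_speed_in_per_min(avg_radius_in, rpm):
    """Linear speed of data passing the head at a given average track radius."""
    return 2 * math.pi * avg_radius_in * rpm

print(round(linear_speed_in_per_min(2.0, 10000)))  # VelociRaptor: ~125,664 in/min
print(round(linear_speed_in_per_min(3.0, 7200)))   # 1.5TB Seagate: ~135,717 in/min
```

Under those assumptions, the big 7200rpm drive's outer tracks really do edge out the 10,000rpm drive.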
So instead of the guy who was getting only 80-90 MB/sec on his 7200 rpm sata data Raid 0 array, maybe the idea would be to have two of those giant 1.5 terabyte seagates in a sata Raid 0 array, giving the throughput of two VelociRaptors, which should push the throughput near 150MB/sec.
If I had two of these in a Raid 0, once I created the array, I would make three 25-gig operating system partitions, and then a 525-gig program and data partition. I assume the 600 gigs of partitions would be spread across the two drives, starting on the outer edge of each - 300 gigs per drive. I could leave the rest as unpartitioned space. Then from time to time, I could hopefully clone that 600 gigs of partitions to a 640-gig regular sata disk - hopefully the clone software wouldn't bother itself with the unpartitioned 2.4 terabytes, hahaha.
Two of them would cost me $280, not much more than 1 VelociRaptor, for faster data transfer on the outer edge, pushing me toward 150MB/sec as I mentioned.
But why bother with the fast Fujitsu operating-system drives - just add a third giant drive, each performing like a VelociRaptor, because no matter what I do, I'm limited to the 256MB/sec throughput of the PCI-E 1x slot. Three of those would cost me $420, and should give me the throughput of three VelociRaptors, at $720.
Or I could buy 4 for a total of $560, and have a four drive Raid 5 - is that the Raid 5 you like, Sam? Would there be a benefit from the fourth drive? Better performance if one drive fails? Or would the fourth just be overkill, given that we are pci-e 1x bottlenecked?
So, forgetting about the Fujitsus, if I took four 1.5 terabyte disks at $139 each, in Raid 5, that would give me 4.5 terabytes of space. I would create three 25-gig partitions, for XP, Vista, and Windows 7, and then 525 gigs for the fourth data partition where my games would go. That would last me for years' worth of games. The 600 gigs of stuff would be spread out among the 4 drives - 150 gigs of data (about 200 gigs counting parity) on each 1500-gig drive - right out on the outer part of each platter, the very fastest-moving part. I could partition the remaining 3.9 terabytes for whatever - movies, backup for my business files, who knows. And I would have data redundancy - lose a disk and keep running; replace the disk and let the system work it in over a day or so.
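Counting parity, the per-drive footprint in that 4-drive example works out like this (a quick sanity check, using the capacity formula rather than any real layout):

```python
drives = 4
user_data_gb = 600
# RAID 5 stores user_data * n/(n-1) in total once parity is included,
# spread evenly across all n drives:
per_drive_gb = user_data_gb / (drives - 1)
print(per_drive_gb)  # -> 200.0 GB on each 1500GB drive, still only the outer ~13%
```

Parity nudges each drive's share up from 150 to 200 gigs, but the data still sits on the fast outer edge.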
I could do this and get the read throughput of the 10000 rpm VelociRaptors, at about $560, instead of $1,000.
If I found a motherboard with a slot for a hardware controller, with all my data out on the very edge of the platters, I could get those 400MB/sec transfer times you were talking about, right Sam? I would have to have a controller that supported 4 devices, not the $152 hardware Raid controller I mentioned before, and not the $129 4-port software Raid controller. To get the data into the CPU I would need a pci-e 4x controller - in my searching I have found those controllers - for about $325. So far I don't know what motherboard would make available a 4x pci-e slot, and still run 3 graphics cards.
But let's say I found the mobo. Let me ask you, if such a motherboard would run three graphics cards, let's say three 5870s, at 16x,8x, and 8x, and then allow adding a 4x Raid controller, would it take the 4 lanes away from the 16x, giving you 12x, 8x and 8x for the graphics cards? If such a motherboard existed, would the instruction manual get into those details? Again, how does the asus do it if I add a 1x raid controller? Does it drop down to 15x, 8x and 8x?
SEEK TIME
Anyway, what I haven't factored in, talking about those giant disks, is seek time - head movement - time to get out to that outer edge of the platter, the outer tenth where my data is. I assume the head would sit out on that outer edge all the time, but maybe it parks in between disk reads/writes. Interesting question. Where does the head go in between reads - does it coast above where it last read, or does it go to the park position?
And if the answer is that it parks, then ...... where is park? Is park out beyond the outer edge of the platter, or in at the spindle? If it comes all the way in to the middle, the spindle, to park, then my head movement is going to take much longer than for the VelociRaptor, since all my data is out at the edge of the platter. If it sits on the edge where it last read/wrote, then I'm in great shape, or if it parks on the outside, just beyond where my data is, I'm also in great shape, and my head movement is going to be very fast, probably faster than the VelociRaptor.
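Whatever the parking behavior turns out to be, the payoff of confining data to the outer edge can be sketched with a toy model. This is purely illustrative - real seek time is not linear in distance, and park behavior varies by drive:

```python
# Toy model of why "short-stroking" (keeping data on the outer edge)
# cuts average seek distance. Track positions are a fraction of the
# full stroke: 0.0 = outer edge, 1.0 = spindle. Purely illustrative;
# real seek time is not linear in distance traveled.
import random

random.seed(1)

def avg_seek_distance(region_width, samples=100_000):
    """Average distance between two random track positions
    confined to the outermost `region_width` of the stroke."""
    total = 0.0
    for _ in range(samples):
        a = random.uniform(0, region_width)
        b = random.uniform(0, region_width)
        total += abs(a - b)
    return total / samples

full = avg_seek_distance(1.0)    # data spread across the whole platter
short = avg_seek_distance(0.13)  # data confined to the outer ~13%

print(f"full-stroke avg distance:  {full:.3f}")   # ~0.333
print(f"short-stroke avg distance: {short:.3f}")  # ~0.043
```

The average gap between two uniform random points on a span of width w is w/3, so shrinking the data region shrinks the average head travel proportionally - which is the whole appeal of parking your partitions on the outer edge.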
Do you know those answers, Sam? Maybe I have to do a lot more reading about hard drive design.
Rich
This message has been edited since posting. Last time this message was edited on 22. August 2009 @ 14:54
|
AfterDawn Addict
|
22. August 2009 @ 10:20 |
Link to this message
|
i think you win the award for the longest post on teh intenets EVER
MGR (Micro Gaming Rig) .|. Intel Q6600 @ 3.45GHz .|. Asus P35 P5K-E/WiFi .|. 4GB 1066MHz Geil Black Dragon RAM .|. Samsung F60 SSD .|. Corsair H50-1 Cooler .|. Sapphire 4870 512MB .|. Lian Li PC-A70B .|. Be Queit P7 Dark Power Pro 850W PSU .|. 24" 1920x1200 DGM (MVA Panel) .|. 24" 1920x1080 Dell (TN Panel) .|.
|
harvrdguy
Senior Member
|
22. August 2009 @ 14:17 |
Link to this message
|
Yay, I won something! :P
|
Member
|
22. August 2009 @ 14:25 |
Link to this message
|
Sorry, a 4-word post on the same page cancels it out. You now owe your state $67.95 :P
-I set my 'slogan' after seeing Omega's. I'm surprised he hasn't said anything yet...
Personally I fail to see the point in any complex RAID speak these days with SSDs getting cheaper by the second (I'm quite probably immensely wrong for reasons that will soon become clear).
I could put something funny here but I cant be arsed. Now GO AWAY!
This message has been edited since posting. Last time this message was edited on 22. August 2009 @ 14:28
|
AfterDawn Addict
7 product reviews
|
22. August 2009 @ 14:41 |
Link to this message
|
LOL! You'd think I would look at people's sigs more often, given I change mine fairly frequently. The image however will remain for a while :D I will however continue to change my..."saying" every once in a while.
I believe Patience is within everyone. I simply have an overabundance of it. Rather unnatural actually LOL!
One day, I will try Raid 0 again. And next time, it will be hardware, NOT software - when Windows 7 is more mainstream, that is. I had trouble setting one up last time, because the Vista driver wouldn't work in 7.
To delete, or not to delete. THAT is the question!
|
harvrdguy
Senior Member
|
22. August 2009 @ 15:25 |
Link to this message
|
Originally posted by keith: (I'm quite probably immensely wrong for reasons that will soon become clear).
Hahahaha! That's the funniest thing I've heard all week. Keith you definitely have a way with words - same as your humorous twist on Kevin's sig.
Omega, according to wiki (what! you didn't read all 7.4 million words in that post!!!) the overhead for a Raid 0 is "negligible." Meaning - like not really worth worrying about, right? So you might as well do a software-based Raid - on the mobo - or use one of those $20 Raid controllers, lol.
But I think Sam is going to have strong evidence that Raid 5 beats the cr*p out of Raid 0. However, I must say that it appears to me that a lot of gamers use Raid 0 - I keep coming across it everywhere. So I am VERY interested in finding out what Sam has gathered from all his research on the subject.
|
AfterDawn Addict
4 product reviews
|
22. August 2009 @ 15:59 |
Link to this message
|
Holy crap, 5500 words Rich!
Your IDE cable woes come at an interesting time, as I've just had to replace a S-ATA cable for my own OS drive due to crashes and failed boots. Like you, I suspected the drive rather than the cable at first, and I had similar issues to yours before I noticed.
Anandtech do indeed use questionable assumptions, but in general they are right about RAID0. What RAID0 does is increase data throughput, and that's it. It weakens redundancy (now you lose even MORE data if a drive fails), introduces controller dependency (plug your drives into a different PC and all your data is gone), massively hikes up CPU demand so you get slower performance, and increases seek latency, so your drives actually perform worse outside MB/s environments (i.e. pretty much everything except HDTach and copying large files). It's just so many negatives for one very minor positive.
With all drives, the controller is everything. Basic SSDs are useless due to the controller, Samsung's HDD failures were all down to the controller on the drive, and yet another downside to nforce boards is the appallingly unreliable S-ATA and RAID controller.
Highpoint RocketRAID cards, though you might think they are, are not hardware RAID; they are a hardware card that uses the CPU to process. Better than onboard, as you can take the card with you into another PC and keep your data, but still not hardware RAID.
RAID0 has almost no effect on loading time simply because loading time is based off CPU performance and seek performance, something that RAID worsens. The point of RAID is not for performance, it is actually for redundancy.
The guy saying Barracudas aren't meant for RAID doesn't know what he's talking about. ALL HDDs can be RAIDed, whether 5400rpm GreenPowers, 10k Velociraptors or SSDs.
If you have four drives in RAID5 you will get better performance than a normal drive, miles better, whether it's better than two drives in RAID0 I'm not sure, I suspect it will be. (Remember 4 drives in RAID5 still gives you 3 drives worth of storage, not just 2)
As you spotted, most boards do not allow triple graphics with the use of a PCIe 1x slot as well, hence an argument for going with two X2s rather than three single cards. Also remember that with three cards bunched up like that, heat is going to be a big problem unless you want to run 100% fan speed continuously (not recommended). Also, the Rampage II board, while it may appear to support the use of a 1x slot, I believe will only work with 4x-length cards. Full-length cards with a 1x connector, like a RAID card, I don't think will fit due to the cooler.
The max bandwidth of a 1x slot on PCIe is 500MB/s, not 256MB/s, so it is ample for RAID.
Hardware RAID cards, as far as I can see, will fit a 1x slot, they're just designed for more, you can leave the rest of the slot hanging. This certainly works for other cards.
As for the quote from the guy, the Seagates degraded because newer Seagates are just crap drives. Simple as. High MTBF drives are a big con, nothing more to say about it. Buying from currently reliable brands like WD (well, only really WD at the moment) you really don't need them, the standard retail drives will do fine.
A 25GB partition will not be enough for Windows 7 or Vista; this will be XP-only, and you will need Vista or 7 if you want to use triple Crossfire, as XP does not support it.
I'll be honest, I got bored of reading your post at this point. What else did you want answered?
|
AfterDawn Addict
7 product reviews
|
22. August 2009 @ 16:26 |
Link to this message
|
Originally posted by harvrdguy: the overhead for a Raid 0 is "negligible." Meaning - like not really worth worrying about, right? So you might as well do a software-based Raid - on the mobo - or use one of those $20 Raid controllers, lol.
LOL! You're probably right! :P Windows 7 seemed to handle it well. I'll likely do it again, for my gaming drives. I didn't get to play with it much. The other twin drive went back to its rightful owner :( Which is probably fine and good. The two (5000AAKS) together were barely better than the VelociRaptor at read speeds. Come to think of it, I think they were damn close! Not sure what I want to do now, being on the eve of a new trend (SSD disks).
I suppose I could Raid my FALS drives together, and see how they perform. But I have A LOT of data to back up before I can do that!!! About 800GB worth.
To delete, or not to delete. THAT is the question!
This message has been edited since posting. Last time this message was edited on 22. August 2009 @ 16:28
|
harvrdguy
Senior Member
|
22. August 2009 @ 20:53 |
Link to this message
|
Wow, Kevin, you've already got some good Raid experience - I'm a complete and utter newbie. I'll have to hit you up also for some pointers.
That's interesting Sam, that we both had hard drive cable failures at the same time. One almost never suspects the cable.
Originally posted by Sam: ... so your drives actually perform worse outside MB/s environments (i.e. pretty much everything except HDTach and copying large files).
Hahahaha. So I needn't have bothered downloading HD Tach?
Originally posted by Sam: Highpoint RocketRAID cards, though you might think they are, are not hardware RAID
Wow, those big liars! Check out their listing in newegg:
I fell for it.
But now, finding the two REAL hardware controllers down below, I see that each explicitly describes the on-board I/O processor and memory - something the 3120 did not describe. What a real bunch of liars!!
Regarding Raid0 those were some very interesting observations.
Quote: It's just so many negatives for one very minor positive.
Hmmmm. Hadn't really thought about those things: decreased reliability, controller dependency, increased latency.
Now, for Raid5:
Originally posted by Sam: If you have four drives in RAID5 you will get better performance than a normal drive, miles better, whether it's better than two drives in RAID0 I'm not sure, I suspect it will be. (Remember 4 drives in RAID5 still gives you 3 drives worth of storage, not just 2)
Ok, so your preference clearly is a 4-drive Raid5 setup. Good. Was I correct in thinking that with a 4-drive Raid5, on a sequential read, with the right controller, you'd have four drives shoving data down the sata lines?
Quote: ALL HDDs can be RAIDed
That sounds more like it - I wondered what the guy could possibly have meant by saying "not all drives are suitable for a Raid array" - it's not like we're saying we need a special array that will work reliably on the next space shuttle launch. All hard drives are reasonably reliable - not perfectly reliable - but reasonably so, some better than others of course.
Ah hah, the way to make the Rampage Board work:
Quote: Hardware RAID cards, as far as I can see, will fit a 1x slot, they're just designed for more, you can leave the rest of the slot hanging. This certainly works for other cards.
Well now, if that is so, we're back in business. But what do you actually mean by "leave the rest of the slot hanging"? Are we saying the card will physically fit, but the pins won't exactly line up - and you're saying that will still work???
Quote: Also, the Rampage II board, while it may appear to support the use of a 1x slot, I believe it will only work for 4x length cards. Full length cards with a 1x connector like a RAID card I don't think will fit due to the cooler.
Ok, so if I am understanding correctly - we don't want a full-length card due to the cpu cooler. A 4x card is generally shorter and will work. But the 4x pin set is wider than the 1x pin set - so we leave the rest of the pins hanging. That is some very interesting stuff. I did not look at the physical dimensions of the cards. The 4x controllers I saw were in the $325 price bracket - for example the Areca line of controllers. I suppose one could always order and give it a try, as long as there is some type of 30 day warranty.
Here is a real hardware controller - running around $350 to $400, with a genuine processor and 128MB of memory onboard:
Originally posted by the Adaptec ASR-3405 RAID controller specs: Although a junior model in the series, the ASR-3405 is quite advanced in its characteristics. It has a PCI Express x4 interface, SAS and SATA (NCQ + 3Gbps) interfaces for the HDDs, an Intel 80333 processor clocked at 500MHz, 128 megabytes of DDR2 SDRAM, and an internal mini-SAS SFF-8087 connector. It allows installing a battery to power the cache memory chips and supports the following RAID types: 0, 1, 5, 6, 10, 50, and 60. So, it has everything you may want from a RAID controller. You'll see soon how this new controller compares with the others in terms of performance.
But that card looks really long. The dimensions are: 2.5" H x 6.6" L (64mm x 167 mm). Hmmm, 6.6 inches long. Will that fit, Sam?
Another Raid controller is the $260 3ware 9650SE, with 4 ports, pci-e. The physical dimensions are: 7.3" long x 2.73" high, or .7 inches longer than the adaptec, and .23 inches taller.
Originally posted by The 3ware 9650SE specs say:: On-board I/O RISC processor and RAID offload provides true hardware RAID.
The 3ware 9650SE PCI Express to SATA II RAID controller pushes LSI's established leadership in RAID controller performance to new heights. While 3ware controllers continue to lead the market in RAID 5 performance, the 3ware 9650SE outshines the competition as the new standard bearer for RAID 6 performance, delivering over 700MB/s RAID 6 reads and 600MB/s RAID 6 writes.
I am happy with Raid 5, but they talk here about Raid 6. There is a performance penalty on writes, but not reads. Additionally, total array size is equal to the sum of 2 drives, not three.
Originally posted by Rich on what 3ware said about Raid 6: As I understand it, Raid 6 involves computing two parity writes at the same time. The minimum Raid 6 array is 4 drives, and the size of storage is the combined total of two drives, not three. They say that since rebuilding an array can take 12 to 24 hours, this protects against a non-recoverable read error that prevents rebuilding a particular stripe (in addition to protecting against a complete failure of a second drive during the rebuild process.)
Raid 6 reads are not affected by the extra parity information, but writes usually take a 30% performance hit - however new algorithms are reducing that penalty. I don't care about writes - I pretty much only care about reads. If I put together 4 $80 750gig Seagate Barracudas for $320 total, and I can get 700MB/sec reads as they say, limited to the 500MB/sec that you say is the bandwidth of 1x pci-e, then that sounds like a pretty fast reading system. I would assume that if they can deliver 700MB/sec in Raid 6 reads, they would deliver the same 700MB/sec in Raid 5 reads, but I would have 3 x 750 = 2.25 terabytes of data. I don't care - if Raid 6 is worth it, I'm happy with 1.5 terabytes of data. I already have two of the drives - one is a 640 and the other is a 750, so I guess the array of 4 would give me 3 x 640 space, or 2 x 640 space = 1.3 terabytes - plenty for me - depending on Raid 5 or Raid 6. I only need three more to have a total of five drives, meaning 3 x $80 = $240.
If I use the GIANT disk situation, to put my data on the edge of 4 1.5 terabyte drives, then I'm looking at 4 x $140 = $560 in drive expense, plus one extra drive as a hot spare if ever needed = $700. That's a much higher price than $240 for sure, but I might cut my average read time in half on random small reads by doing that. I would also end up with giant storage, 3 terabytes on Raid 6, and 4.5 terabytes on Raid 5. I would be happy with Raid 5, or with Raid 6, since both deliver the same read performance as I understand - I guess both are pulling data from all four spindles.
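The capacity side of the Raid 5 vs Raid 6 comparison above reduces to how many drives' worth of space go to parity: one for Raid 5, two for Raid 6. A small sketch using the drive sizes from the thread:

```python
# Usable capacity for the arrays discussed above. Raid 5 spends one
# drive's worth of space on parity; Raid 6 spends two.

def usable_tb(n_drives, drive_tb, level):
    """Usable array capacity in TB for a given RAID level."""
    parity_drives = {"raid5": 1, "raid6": 2}[level]
    return (n_drives - parity_drives) * drive_tb

# Four 750 GB Barracudas ($80 each, $320 total):
print(usable_tb(4, 0.75, "raid5"))  # 2.25 TB usable
print(usable_tb(4, 0.75, "raid6"))  # 1.5 TB usable

# Four 1.5 TB drives ($140 each, $560 total):
print(usable_tb(4, 1.5, "raid5"))   # 4.5 TB usable
print(usable_tb(4, 1.5, "raid6"))   # 3.0 TB usable
```

Read performance is the same either way on most controllers, so the real trade is that second drive's worth of capacity against surviving a second failure mid-rebuild.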
So, Sam, any thoughts about Raid 6 or do you think that is taking it a little too far for a gaming system?
So to wrap it up, I guess I have three remaining comments or questions:
1. Taking into consideration the new info on maybe two controllers above that might work, the dimensions being about 7" long and 2.6 inches high, and both are pci-e 4x, do you think they'll actually go into the first 1x slot of the asus rampage extreme and really work and not interfere with the cpu cooler?
2. Secondly, if the new build is going to have three graphics cards, and the fans have to run at 100% all the time - that doesn't bother me because I don't game 24/7 - if I can ratchet it back to just Wednesdays for about 12 hours per week, that will be fine, and I can continue to wear headphones for the foreseeable future.
The main thing is performance to be able to play crysis at the full 2560x1600 with decent settings. So in your opinion, will the asus rampage extreme work with three graphics cards on i7, or should I think about something over in the amd family of motherboards - or is there one more I haven't considered?
Or should I just go back to the DFI and drop the idea of a hardware controller and just go for on-board Raid 0 with two 300 gig velociraptors, despite the dangers, solely to pump up the MB/sec, and clone my 600 gig array once in a while to one of the cheap satas so I don't have to go through all the game reloading I am doing now.
3. Third and final question: I took note of your advice that I need Vista or Windows 7 for triple crossfire - and you say 25 gigs is not big enough. How big should each of those partitions be?
Rich
|
AfterDawn Addict
|
22. August 2009 @ 20:57 |
Link to this message
|
pssht rich, to play crysis maxed, get an i7 with 2 GTX 295s or 2 4870X2s and clock the isht out of the i7 and/or GPUs!
MGR (Micro Gaming Rig) .|. Intel Q6600 @ 3.45GHz .|. Asus P35 P5K-E/WiFi .|. 4GB 1066MHz Geil Black Dragon RAM .|. Samsung F60 SSD .|. Corsair H50-1 Cooler .|. Sapphire 4870 512MB .|. Lian Li PC-A70B .|. Be Queit P7 Dark Power Pro 850W PSU .|. 24" 1920x1200 DGM (MVA Panel) .|. 24" 1920x1080 Dell (TN Panel) .|.
|
Advertisement
|
|
|
AfterDawn Addict
4 product reviews
|
22. August 2009 @ 20:59 |
Link to this message
|
I may be about to eat my words for that, but looking at it, I'm not convinced it is true hardware RAID, and notice only 3 of 5 eggs. That's poor for 25 reviews...
What you've got there below looks more like a hardware RAID controller. As for the PCIe slots, you can do it that way, but I think sometimes it requires modification of the PCIe slots, which is a little on the risky side, I think you'll agree.
RAID6 owns RAID5 most of the time from what I can see, double redundancy instead of single, and better performance. My friend who uses RAID has RAID6, off an Areca 8-port card, with 8 7200.11 1.5TB drives (almost as if to spite that other guy - lol)
I would never, however, consider RAID6, or even RAID5, for a gaming system - this is file server stuff. I have to admit, I wasn't entirely serious when I suggested RAID as the answer to laggy Crysis. I think you're much better off RAIDing a couple of SSDs.
I would recommend at least 40GB per partition if Vista, at least 30-32GB for Win7.
|
|