BOINC Blaster
Member
22. September 2009 @ 01:41
AfterDawn Addict
1 product review
22. September 2009 @ 02:14
Are you going to use this computer for anything other than BOINC? If not, much of it could be built a lot cheaper than it is...
You could get a micro-ATX board with onboard video, a 400W power supply, a DVD drive, and a 320GB hard drive...this would knock a lot off the price without decreasing BOINC performance at all. Also, it is a good idea to get a nice CPU cooler when running stuff like BOINC or SETI for months on end @ 100% CPU load.
Member
22. September 2009 @ 02:58
Originally posted by KillerBug: Are you going to use this computer for anything other than BOINC? If not, much of it could be built a lot cheaper than it is...
The system will certainly be used for other applications, but those apps could easily be handled by a simple dual-core P35-based system with on-board video and a bunch of mass storage. What I am trying to do is take a rather ho-hum system and, just for the fun of it, put another $500 into it to supercharge it for BOINC applications.
I am particularly intrigued by the SETI performance enhancement provided by a single GeForce 8600 operating in CUDA mode in one of my other systems. From a SETI point of view, that one small GPU blows the doors off a 3-GHz dual-core CPU. I really want to see what four cores and two medium-sized GPUs will do.
Originally posted by KillerBug: Also, it is a good idea to get a nice CPU cooler when running stuff like BOINC or SETI for months on end @ 100% CPU load.
Absolutely, and this is another aspect of this project I would like help on. I had already planned on adding a fourth 120 mm fan to the left side of the Cooler Master 690 case as well as an ARCTIC COOLING Freezer 7 Pro 92mm CPU Cooler for the CPU. Will these extra fans be enough, or should I be looking at auxiliary fans for each GPU, too?
This system will be running 24/7 with all processors at 100%. Is the PSU I picked large enough, will air flow be an issue, and are there any other easily-forgotten-by-new-builders issues that I haven't thought of or addressed properly?
Thanks for the help KillerBug. It is most appreciated.
Dick
Member
22. September 2009 @ 20:18
Originally posted by KillerBug: Are you going to use this computer for anything other BOINC?
Hi again, Mr. Bug (or can I call you Killer?) :)
All I'm looking for is some help with a reality check that the components I have chosen will play together without self-destructing. In other words, will the PSU support an Intel P45 mobo with a quad 9650 and a pair of GeForce 9800s? Will four 120 mm fans in a CM 690 case and a third-party fan on the CPU be sufficient for running at 100% 24/7 or am I asking for trouble?
This is only a $1200 project, but I would hate to waste that much money out of my own ignorance on what is supposed to be a fun project.
Thanks to you or anyone else who can tell me if I'm on the right track for my first build.
Dick
gera229
Member
23. September 2009 @ 00:55
An HD 4770 at the same price would outperform the 9800 GT.
Or the much cheaper HD 4650 or 4670 would be about the same in performance as the 9800 GT. Peace.
Member
23. September 2009 @ 01:32
Originally posted by gera229: An HD 4770 at the same price would outperform the 9800 GT.
Or the much cheaper HD 4650 or 4670 would be about the same in performance as the 9800 GT. Peace.
Thanks. That's the kind of information I'm looking for. I appreciate your taking the time to help.
Dick
AfterDawn Addict
4 product reviews
29. September 2009 @ 13:31
I'd strongly advise against an Intel own-brand motherboard; they're as primitive and overpriced as boards get.
If BOINC genuinely does use CUDA (I don't use it so I'm not sure) then you need to stick with nvidia really and avoid the HD4770. Rather than use two 9800GTs, which is a complex and potentially problematic way of doing it, buy something like a GTX260. They're not great value in the grand scheme of things (few nvidia cards are of late) but they're your only choice if you rely on CUDA.
The PSU is way overkill, get something like a 520W HX or 550W VX.
Member
29. September 2009 @ 13:56
I was first bitten by the CUDA bug with BOINC when I discovered that my current NVIDIA 8600GT just needed a firmware upgrade to play ball. It was soon running rings around my dual-core E6850, and my idea for a BOINC Blaster was born.
I was looking at the Intel board because my current P35 board has operated flawlessly for the three years I've had it. Which mobo is currently at the top of the Hit Parade for Intel Core 2 quad processors? There are so many to choose from that straying from something you know almost seems like a crap shoot.
Fortunately, nothing in my proposed system is so expensive it can't be swapped out for something else if it winds up at the bottom of a smoking hole. :)
So, for this application, which mobo would you recommend for a Q9650?
Dick
Member
3. October 2009 @ 23:33
Originally posted by k7vc: So far my Newegg shopping list consists of:
Intel BOXDP45SG LGA 775 Intel P45 ATX Intel Motherboard
Intel Core 2 Quad Q9650 3.0GHz 12MB L2 Cache LGA 775 95W Quad-Core Processor
CORSAIR DOMINATOR 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory Model CMD4GX3M2A1600C8
(2) EVGA 512-P3-N973-TR GeForce 9800 GT 512MB 256-bit GDDR3 PCI Express 2.0 x16 HDCP Ready SLI Supported Video Card
(2) Western Digital Caviar Black WD5001AALS 500GB 7200 RPM 32MB Cache SATA 3.0Gb/s 3.5" Internal Hard Drive
COOLER MASTER RC-690-KKN1-GP Black SECC/ ABS ATX Mid Tower Computer Case
CORSAIR CMPSU-750TX 750W ATX12V / EPS12V SLI Ready CrossFire Ready 80 PLUS Certified Active PFC Compatible with Core i7 Power Supply
I built this system with two additions:
1. ARCTIC COOLING Freezer 7 Pro 92mm CPU Cooler
2. Western Digital VelociRaptor WD1500HLFS 150GB 10000 RPM 16MB Cache SATA 3.0Gb/s 3.5" Internal Hard Drive
I'm using the Raptor for my OS and the two Caviar Blacks in a RAID-1 configuration.
Everything went together perfectly with no surprises. The two 9800 GTs are playing nicely together, and with the addition of two more 120mm fans (one inlet in the side and one exhaust in the top), the CM-690 case is quiet and keeping everything cool in spite of all four cores and both GPUs running at 100% 24/7.
I am very happy with the Intel mobo. The BIOS seems ideally suited for OC with just about every parameter available for modification.
The only problem I had was actually with the Windows XP install. Using the Intel-supplied RAID F6 disk with an external USB floppy, XP Setup actually forgot where the floppy was later in the install. Once I built an XP install CD with the RAID drivers already in place, the install went smoothly.
The system has been burning in for the past four days without any indication of trouble. I was careful from the very beginning to use the most current firmware and drivers for everything, especially for the BIOS, RAID and graphics cards (both happily SLI'ing together).
gera229 suggested some of the ATI cards, but in looking at the reviews for some of the latest CUDA-compatible cards, I got the feeling some of their drivers weren't quite ready for Prime Time.
sammorris was concerned about the Intel mobo, dual 9800 GTs and PSU. For $110 the mobo seems to be a good match for my application, for purely non-technical reasons I felt more comfortable with the two single-slot 9800's than one big honkin' GTX-260 and I like the idea of having a PSU that's "too big." :)
This has been a big learning exercise for me and I am happy with the way it has turned out. My next project will be OC'ing this one, but that will be later with a lot more research.
To all who offered their opinions and advice: Thank You! Even if I didn't always heed your advice, each comment made me do a lot more research which in the end made me a little more knowledgeable.
Dick
AfterDawn Addict
4 product reviews
4. October 2009 @ 06:55
There's nothing wrong with ATI drivers per se, but ATI cards don't have CUDA support. You would still have been much better off with a single GTX260 or 275, as, assuming both 9800GTs scale together 100% (very rare), they would barely match the 260.
gera229
Member
4. October 2009 @ 12:45
What is CUDA? Do nvidia cards have this? What is it used for?
AfterDawn Addict
4 product reviews
4. October 2009 @ 12:47
CUDA is a technology employed by nvidia graphics cards that allows them to process CPU-style applications with amazing speed due to the parallel performance of GPUs.
Member
4. October 2009 @ 14:56
Originally posted by sammorris: You would still have been much better off with a single GTX260 or 275
Since I still have plenty of room, power and cooling capacity, that might be an interesting upgrade for me to make after a while. Thanks for the suggestion.
Dick
gera229
Member
4. October 2009 @ 23:38
"due to the parallel performance of GPUs."
What do you mean?
So nvidia is better?
What are some examples of those applications? Thanks.
Member
5. October 2009 @ 00:32
Originally posted by gera229: "due to the parallel performance of GPUs."
What do you mean?
So nvidia is better?
What are some examples of those applications? Thanks.
Let me take a crack at this. You are familiar with dual- and quad-core CPUs, where each core can operate in parallel with the others. What if you had over a hundred cores, all capable of operating in parallel? That is exactly what is inside a typical GPU.
If you can divvy your project up into small executable pieces (or "work units"), you can hand each of these pieces off to a separate GPU core to crunch on while all of the other cores (and the CPU) are off doing their own thing. If you can break your project up into a hundred separate work units, all of them can be processed simultaneously by a typical GPU.
In my case I have two Nvidia 9800 GT processors, and I can process over 250 work units at the same time without putting any significant load on my CPU. (The CPU is only involved in loading the GPU cores and collecting the results.)
Typical applications are in the scientific area where a given project can be divided up into a lot of little pieces and processed in parallel. SETI is a good example of this.
I am not familiar with ATI's capabilities in this area.
Dick
gera229
Member
5. October 2009 @ 01:30
So a GPU is a better CPU?
Well, isn't that what DX 10.1 is for in ATI cards? It operates like CUDA, doesn't it? Thanks.
AfterDawn Addict
4 product reviews
5. October 2009 @ 08:52
No, that's not at all what it means gera.
CPUs are designed to get the maximum performance they can out of as few cores as possible - your average CPU will have 2 or 4 processing cores, but still be incredibly fast at performing certain tasks. A GPU might have 400 processing cores. Each one of these cores is really slow and pretty useless, but the difference is, when rendering graphics for a game, you can use as many cores as you like, all 400 can work together in unison on rendering this picture, and thus it comes out really quickly.
The reason why GPUs don't do everything in a computer is because most programs won't allow the use of 400 cores at once, typically they're limited to 2-4, often only one.
CUDA has absolutely nothing to do with graphics for games. What CUDA does is string the 400 cores on a graphics card together to work on an application a CPU would normally work for. The only reason why CUDA isn't the most amazing thing ever is that CUDA doesn't work with all programs. The software has to be written with CUDA in mind for it to work.
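The write-it-with-parallelism-in-mind point can be illustrated on the CPU with NumPy, whose whole-array operations mimic the one-operation-across-many-cores style; this is only an analogy for what CUDA-aware software does, not CUDA itself:

```python
import numpy as np

# CPU-style: one pixel at a time, the way a single fast core works.
def brighten_serial(pixels, amount):
    return [min(p + amount, 255) for p in pixels]

# Data-parallel style: the same operation expressed over the whole
# array at once, the form that scales across many simple cores.
def brighten_parallel(pixels, amount):
    return np.minimum(pixels + amount, 255)

print(brighten_serial([10, 100, 200, 250], 20))
# [30, 120, 220, 255]
print(brighten_parallel(np.array([10, 100, 200, 250]), 20).tolist())
# [30, 120, 220, 255]
```

Both give the same answer; the point is that only the second form states the work in a way a parallel processor can split up.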
Right now, ATI doesn't really have anything to rival CUDA - thus, if you've got a bit of software that works with CUDA, you should definitely buy an nvidia card. However, if the software isn't compatible, it doesn't matter what GPU you have, the CPU does all the work.
DirectX 10.1 is much the same as DirectX10, but whereas every version of DirectX that comes out focuses on getting new, better graphics in games, DirectX 10.1 was essentially like an 'update' for DirectX10. It doesn't make games look any prettier, but it makes them run much better. As with all DirectX versions, the game, your operating system and your graphics card all have to support it for it to work. Basically, if you play a DirectX10.1 game, it won't look any prettier than a DirectX10 game, but if you have a DX10.1 card (i.e. ATI HD4000 series) it will run much faster.
AfterDawn Addict
4 product reviews
5. October 2009 @ 09:00
However, had you said DirectX11 it would be different. DirectX11 is meant to have code included in it for GPUs to execute computer code (i.e. CPU stuff, not graphics) regardless of which GPU it is that's running. How effective that will be remains to be seen.
gera229
Member
5. October 2009 @ 18:40
So CUDA-supported programs will work with the 400 processing cores together, fast? Which is faster: a CPU, or a GPU's 400 processing cores? (I'm just asking this to get an idea of the speed increase, even if it isn't actually used that way; just pretend it was.) Thanks.
gera229
Member
5. October 2009 @ 18:42
"DirectX 10.1 is much the same as DirectX10, but whereas every version of DirectX that comes out focuses on getting new, better graphics in games, DirectX 10.1 was essentially like an 'update' for DirectX10. It doesn't make games look any prettier, but it makes them run much better."
How come 10.1 wasn't an 'update' if it acts the same as DX 11? What I mean is: DX 11 runs faster than DX 10, and so does DX 10.1. DX 11 runs faster than DX 10.1, but why do you say DX 11 is an 'update'? Is there something else different? Thanks.
AfterDawn Addict
4 product reviews
6. October 2009 @ 07:57
It is much faster to use the processing cores of a GPU for a task, if you can use them in unison, which is the whole point of CUDA.
DirectX10.1 and DirectX11 are completely different. DirectX10.1 is the same as DirectX10 but with better performance.
DirectX11 is a completely new design, it does not provide better performance, but it allows new features to be used. Not only does it allow for better graphics to be used in games, it also implements new code that allows GPUs from any manufacturer, not just nvidia with CUDA, to run code in the same way that CUDA does.
gera229
Member
7. October 2009 @ 10:02
So DX11 is the same speed as DX10.1?
AfterDawn Addict
4 product reviews
7. October 2009 @ 10:06
Not really. DX11 is slower than DX10.1, in exchange for better graphics. It is also worth noting that, at least presently (though this may change with newer drivers), DirectX10.1 graphics cards (i.e. the HD4800 series) provide better performance in DirectX10.1 titles than DirectX11 graphics cards (i.e. the HD5800 series) will, for the same power.
Member
10. October 2009 @ 04:01
Originally posted by k7vc: My next project will be OC'ing this one, but that will be later with a lot more research.
My first baby steps at OC'ing this build have been to OC the 333 MHz bus clock to 400 MHz. With a 9.5 multiplier, this keeps the Q9650 running steadily at 3.8 GHz and the memory at exactly 1600 MHz at its rated 1.65v.
With all cores at 100% 24/7, CPU temps have stayed in the green at 34DT, which I believe is between 65C and 70C; this is roughly a 5C increase over pre-OC temps.
I do have one question regarding memory specs before I try to OC it any further. My current DDR3 sticks have latency specs of 8-8-8-24.
CORSAIR DOMINATOR 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory Model CMD4GX3M2A1600C8
I have seen numerous DDR2 sticks with latencies as low as 3-3-3-5. It doesn't seem intuitive to me that the supposedly faster DDR3 sticks would have what appear to be relatively slow latency times.
Can someone briefly summarize why this is and why the numbers seem contradictory to me?
Thanks.
Dick
AfterDawn Addict
4 product reviews
10. October 2009 @ 06:44
The Q9650 has a 9x multiplier, not 9.5, so you're actually at 3.6Ghz.
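The arithmetic behind that correction is simply bus clock times multiplier; a quick check (the function name is mine, purely illustrative):

```python
def core_ghz(fsb_mhz, multiplier):
    # Core clock = front-side-bus clock x CPU multiplier
    return fsb_mhz * multiplier / 1000

print(core_ghz(400, 9.5))  # 3.8, what a 9.5x multiplier would give
print(core_ghz(400, 9.0))  # 3.6, what the Q9650's real 9x multiplier gives
```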
DDR3 memory has high latencies because the actual memory chips themselves are no faster than even old DDR1; the memory cells have not got any quicker in the last few years. The only change is the speed at which the memory interfaces with the CPU. A high memory clock speed is a good thing as it allows for higher bandwidth, but because the limitation is the memory chips themselves, a job can only be completed so fast.
Old PC3200 DDR1 (400mhz) can do a job in just 2 clock cycles (CAS2), which is 5ns.
PC6400 DDR2 (800mhz) can do a job in 4, and PC8500 (1066mhz) in 5 - this is 5ns and 4.7ns.
PC3-12800 DDR3 (1600mhz) can do a job in 8, which again, is 5ns.
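Those nanosecond figures fall out of one line of arithmetic, counting CAS cycles against the effective (double-data-rate) clock as the post does; a quick check (the function name is mine):

```python
def latency_ns(cas_cycles, effective_mhz):
    # One cycle at N MHz lasts 1000/N nanoseconds.
    return cas_cycles * 1000 / effective_mhz

print(latency_ns(2, 400))             # DDR1-400  CAS2: 5.0 ns
print(latency_ns(4, 800))             # DDR2-800  CAS4: 5.0 ns
print(round(latency_ns(5, 1066), 1))  # DDR2-1066 CAS5: 4.7 ns
print(latency_ns(8, 1600))            # DDR3-1600 CAS8: 5.0 ns
```

Higher clocks buy bandwidth, but the cell latency in nanoseconds stays roughly flat across generations, which is exactly the point above.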