Researchers use integrated GPU to boost CPU speed
The following comments relate to this news article:

Researchers use integrated GPU to boost CPU speed

article published on 10 February, 2012

Researchers at North Carolina State University have found a way to improve CPU performance more than 20 percent using a GPU built on the same processor die. "Chip manufacturers are now creating processors that have a 'fused architecture,' meaning that they include CPUs and GPUs on a single chip," said Dr. Huiyang Zhou, who co-authored a new paper based on the research. He explained, "Our ... [ read the full article ]

Please read the original article before posting your comments.
Senior Member

11 February 2012 @ 01:30
Interesting, but not ideal: these integrated GPUs mostly show up in laptops or other low-profile machines that lack the ability to expand graphics power, due either to cost-cutting measures or design constraints.

LordRuss
Senior Member

11 February 2012 @ 13:39
OK, I'm not a design engineer with all the math degrees & such for any of this... I'm purely coming from a background of implementation & end-user dynamics:

So much of this particular technology has started to rely heavily on the GPU's cooperation in doing a lot of the number crunching. Granted, I've seen a bunch of it driven in 3D graphical interfaces & those 'physics' environments, but if memory serves, some universities have been looking into doing the math of protein folding on GPUs as well.

So I offer this... if graphics processors are pounding out numbers at comparatively/drastically higher rates than regular CPUs, why aren't manufacturers using these processors as the basis of their designs (as of recent)?

Seems to me this would be the next step in the evolution. But I may have missed things along the way too.


DXR88
Senior Member

11 February 2012 @ 14:47
Originally posted by LordRuss:
OK, I'm not a design engineer with all the math degrees & such for any of this... I'm purely coming from a background of implementation & end-user dynamics:

So much of this particular technology has started to rely heavily on the GPU's cooperation in doing a lot of the number crunching. Granted, I've seen a bunch of it driven in 3D graphical interfaces & those 'physics' environments, but if memory serves, some universities have been looking into doing the math of protein folding on GPUs as well.

So I offer this... if graphics processors are pounding out numbers at comparatively/drastically higher rates than regular CPUs, why aren't manufacturers using these processors as the basis of their designs (as of recent)?

Seems to me this would be the next step in the evolution. But I may have missed things along the way too.


Most do. They're called RISC processors (SPARC, PPC, ARM, MIPS), the masters of their designated task, used in things such as radar, laser-guidance systems, and many scientific applications, including protein folding and branch prediction.

The general population uses CISC (x86, x64; Intel, AMD, etc.), the jack of all trades but master of none, often found in environments where each piece of hardware does not have its own hardwired chip.
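The RISC/CISC distinction above can be sketched with a toy example. The instruction sets below are invented purely for illustration, not real ISAs: a RISC-style machine only touches memory through explicit loads and stores, while a CISC-style machine can fold the memory accesses into one complex instruction.

```python
# Toy illustration of the RISC vs. CISC split. These are made-up
# instruction sets, not real ISAs: RISC-style code only touches
# memory via explicit loads/stores, while CISC-style instructions
# can take memory operands directly.

def run(program, mem):
    regs = {}
    for op, *args in program:
        if op == "load":        # reg <- mem[addr]          (RISC-style)
            reg, addr = args
            regs[reg] = mem[addr]
        elif op == "add":       # reg <- reg + reg          (RISC-style)
            dst, a, b = args
            regs[dst] = regs[a] + regs[b]
        elif op == "store":     # mem[addr] <- reg          (RISC-style)
            reg, addr = args
            mem[addr] = regs[reg]
        elif op == "add_mem":   # mem[dst] <- mem[a]+mem[b] (CISC-style)
            dst, a, b = args
            mem[dst] = mem[a] + mem[b]
    return mem

# RISC-style: four simple load/store instructions.
risc = [("load", "r1", 0), ("load", "r2", 1),
        ("add", "r3", "r1", "r2"), ("store", "r3", 2)]

# CISC-style: one complex instruction doing the same work.
cisc = [("add_mem", 2, 0, 1)]

print(run(risc, [5, 7, 0]))  # [5, 7, 12]
print(run(cisc, [5, 7, 0]))  # [5, 7, 12]
```

Both programs compute the same result; the trade-off is between many simple, easily-pipelined instructions and fewer, more complex ones.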






LordRuss
Senior Member

11 February 2012 @ 16:51
Originally posted by DXR88:
Most do. They're called RISC processors (SPARC, PPC, ARM, MIPS), the masters of their designated task, used in things such as radar, laser-guidance systems, and many scientific applications, including protein folding and branch prediction.

The general population uses CISC (x86, x64; Intel, AMD, etc.), the jack of all trades but master of none, often found in environments where each piece of hardware does not have its own hardwired chip.


You're not making much sense. RISC hasn't been made/used for about 12-15 years, & your CISC analogy doesn't hold water, as it washes back into itself as the argument for today's current computing technology.

I'm talking about completely incorporating the different architecture of Tegra or Fusion(?) into just that, rather than the complaints of a stalemate of 'no further gains' in current CPU technology.

Which, by the way, GPUs are currently being used for some scientific purposes, just in a limited way. So I'm asking: why hasn't all of this been pushed over the hump yet?

Either I wasn't clear, or you clarified what I already said using dead & redundant equipment I also knew about.


KillerBug
AfterDawn Addict

12 February 2012 @ 01:04
I think the main problem is that developers just don't take advantage of everything that's available, simply because it isn't always available. For instance, many apps use nVidia CUDA yet refuse to use ATI cards that can do essentially the same thing, simply because those cards are not specifically designed for it (slower workstation cards from ATI ignored). Other apps use neither, as CUDA isn't always there... or simply because the developers were too lazy to add CUDA support. ATI has something like CUDA on their workstation cards too (I forget the name at the moment)... but it is virtually unused, simply because it is not on the run-of-the-mill desktop cards.

As an i5 owner with a dedicated video card, I would love to see apps using the integrated GPU that I have no use for currently...but I honestly don't know how much of a boost I would see considering that most apps don't even bother to use CUDA.
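The "CUDA isn't always there" problem described above is essentially runtime capability detection. Here is a minimal sketch; the module names (`pycuda`, `pyopencl`) are only examples of optional GPU back-ends an app might probe for, and the plain-CPU path is the fallback that always works.

```python
# Sketch of runtime GPU back-end detection with a CPU fallback.
# The back-end module names are illustrative, not an endorsement of
# any specific library; a real app would also verify a device exists.
import importlib.util

def pick_backend(preferred=("pycuda", "pyopencl")):
    """Return the first installed GPU back-end, or 'cpu' if none is."""
    for name in preferred:
        if importlib.util.find_spec(name) is not None:
            return name
    return "cpu"

def dot(a, b, backend=None):
    backend = backend or pick_backend()
    if backend == "cpu":
        # Plain-CPU fallback; a real app would dispatch to a GPU
        # kernel here when a back-end was found.
        return sum(x * y for x, y in zip(a, b))
    raise NotImplementedError(f"GPU path for {backend} not shown")

print(dot([1, 2, 3], [4, 5, 6], backend="cpu"))  # 32
```

This is the "software switch" pattern in miniature: every code path has to exist twice (or more), which is one reason many apps simply ship the CPU path alone.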


LordRuss
Senior Member

12 February 2012 @ 11:15
Originally posted by KillerBug:
As an i5 owner with a dedicated video card, I would love to see apps using the integrated GPU that I have no use for currently...but I honestly don't know how much of a boost I would see considering that most apps don't even bother to use CUDA.
I agree. But I think CUDA & Stream (the ATI equivalent you were looking for) are more of a software 'switch' (if you will) to get the GPU involved in helping with CPU processing.

Don't get me wrong, like the cereal, "it's great!", but they only seem to want to use it for video processing in the consumer market. And it's not that CUDA or Stream are the only viable options; they just seem to be the only two out there at the moment.

So my blathering is about some Tucker- or Tesla-style upstart taking (say) CUDA & building a quad-core CPU layered off its foundation. If engineers are already writing code for GPUs to fold/unfold protein DNA, my feeble brain doesn't see why a GPU can't direct a little traffic on a motherboard.

That would kill this supposed stagnation of CPU processing speeds for a while. Granted, I'm leaving myself open to the ridicule that these GPUs use a multi-processor approach to do their 'thing', giving the illusion of a higher MHz rating, but then, equally, shouldn't we be able to build similar CPUs the same way?

Thus the reason the question keeps coming back on itself, & the risk of me sounding like a crack smoker.
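The "multi-processor approach" point can be sketched as a data-parallel reduction: split the input across many workers and combine the partial results. This only illustrates the partitioning; CPython threads won't actually speed up pure-Python arithmetic (the interpreter lock serializes it), whereas a GPU runs the chunks on genuinely parallel cores.

```python
# Sketch of "many slower cores vs. one fast core": a GPU-style device
# gets its throughput from splitting data-parallel work across many
# workers, not from a higher clock. The thread pool stands in for
# those many cores; the answer is identical either way.
from concurrent.futures import ThreadPoolExecutor

def serial_sum(data):
    return sum(data)

def parallel_sum(data, workers=8):
    # One chunk per worker, then reduce the partial sums.
    step = max(1, len(data) // workers)
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum, chunks))

data = list(range(1000))
print(serial_sum(data), parallel_sum(data))  # 499500 499500
```

The catch, as the post suggests, is that this only pays off for work that splits cleanly; branchy, serial control flow (most of what a CPU does) resists this kind of partitioning.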

ddp
Moderator

12 February 2012 @ 15:28
LordRuss, RISC is still in use & in production: http://en.wikipedia.org/wiki/RISC
LordRuss
Senior Member

12 February 2012 @ 15:54
Originally posted by ddp:
LordRuss, RISC is still in use & in production: http://en.wikipedia.org/wiki/RISC
OOookay... I wasn't right. But Motorola was the biggest manufacturer of the processor; now Qualcomm is in the game. And your link is saying that, for all intents & purposes, the RISC-CISC lines are all but blurred. If they're still in production, why aren't they being called RISC processors? In your article they want to refer to them as ARM.

I don't mean it necessarily like AMD's FX chip being renamed the AM(whatever). I mean RISC seems to have died somewhere along the way, obviously not in the server market, but it somehow lived under a highway for a long time & now wants to live in the middle of Beverly Hills again.

Besides, even 'they' adopted the engineering rules of the x86 architecture, just like everyone else did. It's in the miniaturization elements where we're starting to see a resurgence of all this.

Still doesn't change the fact that all these guys need to do whatever the video-card guys are doing in processing technology & twist their nuts on a bit tighter.

ddp
Moderator

12 February 2012 @ 15:59
I was using them on Sun Microsystems boards at Celestica back in 1998-2000, before being transferred to another site to build Nokia cell phones & later Cisco boards.
LordRuss
Senior Member

13 February 2012 @ 12:38
I forgot about Sun... I knew they were still using them after CrApple dropped them like rats off the proverbial sinking ship... I just lost track after the whole Motorola thing. Then I started mumbling to myself when I heard talk of the processors in Blackberries & such (or so I thought).

I figured it was a bad drug-induced flashback. Who knew?

ddp
Moderator

13 February 2012 @ 13:21
The processors on the Sun boards were made by TI & did not have pins, but pads, like socket 775 & up.
LordRuss
Senior Member

13 February 2012 @ 13:41
Could I assume they were the trend-setter for the x86 market going pinless, then? Or were they just the first to naturally migrate?

ddp
Moderator

13 February 2012 @ 14:07
They were the 1st, as Intel didn't go pinless till socket 775.