The PC Doctors

Hardware

Tuesday 07 December 2010
Intel and NVIDIA Prepare to Kiss and Make up With Settlement
Christo [PCD], Tuesday 07 December 2010 - 11:32:55

Union may represent effort to hold off surging AMD

NVIDIA has new GPUs (the 500 series) -- but so does AMD. And AMD is currently beating NVIDIA in sales of discrete GPUs.

Likewise, Intel, which has long dominated the netbook/light laptop market with its Atom processor, is concerned about AMD's new "Fusion" accelerated processing unit, which packs a better integrated GPU than Atom. Intel's primary hope of hanging on to its market share involves pairing Atom with NVIDIA's ION lightweight GPU at an affordable price. But Intel and NVIDIA have been locked in a bitter, long-standing feud that has resulted in Intel making ION offerings more expensive than its own inferior chipset.

But much like Lex Luthor and Superman occasionally do in the comics, these bitter enemies have found cause to set their differences aside in the face of a common threat. The pair was set to do battle in a trial starting December 6 in Delaware's Chancery Court. NVIDIA and Intel, though, have asked the court to postpone the trial, which concerns licensing issues, to 2011, buying time for a settlement.

Nvidia CEO Jen-Hsun Huang coyly commented, "We’re always in talks. Our two companies are always in talks."

The settlement would be advantageous to both firms. Both have grown weary during the long legal campaign, which has stretched over six years, since being filed in 2004. The legal battle has been filled with suits and countersuits, with both chipmakers trying to deny each other access to their respective technologies, and alleging breaches of contract.

While Intel is the largest CPU chipmaker and NVIDIA is the world's second largest graphics chipmaker, both companies have missed out on potential revenue that could have come from joint products.

If they can reach a settlement, the quality of desktop hardware could be boosted. If Intel grants NVIDIA the right to make chipsets for its new CPUs, something Intel has so far rejected, consumers could gain access to faster gaming and productivity offerings. And in the netbook sector the pair could at last offer an affordable ION+Atom platform that would be a true competitor to AMD's dual-threat "Brazos" Fusion chip.

Is NVIDIA finally ready to put away its "can of whoop-[censored]"? We should have an answer to that in weeks or months to come.

Intel and NVIDIA have a common enemy -- the resurgent AMD. The pair are reportedly in talks to settle a long-standing lawsuit and increase their cooperation. (Source: Anandtech)


[Submitted by Christo [PCD]]



Wednesday 09 June 2010
NVIDIA Brings Out the Big Guns -- a Dual Fermi GPU 430W Beast
Christo [PCD], Wednesday 09 June 2010 - 13:49:01

Somewhere in Taiwan a power supply is weeping

Fermi was long delayed, but it is finally hitting the market and reminding ATI that it hasn't totally won the graphics war, even if it did get quite the head start on DirectX 11. At Computex in Taiwan, NVIDIA unveiled an impressive portfolio of upcoming products that looks to jump-start the struggling GPU maker. Highlights included upcoming Fermi mobile GPUs and a mid-range GPU, the GeForce GTX 465, which is based on GF100 chips binned for minor defects.

But NVIDIA saved perhaps the best for last. Today, along with board partner Galaxy, it unveiled a beastly dual-GPU Fermi board designed to make even the toughest gaming rigs weep over some incredibly high framerates -- and power draws.

The board carries two GTX 470 chips and draws over twice what a single Fermi draws -- which means that it sucks down loads of power, falling inside a massive 430 W TDP. It requires two (!) 8-pin connectors to feed its mighty cities of transistors.

Power supply manufacturers can breathe a brief sigh of relief, though; there's no word when or if the card will be officially released. The card is thought to carry 3 GB of GDDR5 memory -- matching NVIDIA's current Quadro card, which primarily retails for commercial use. And it can likely double for a portable space heater in a pinch.

The dual GPU spotting in the wild confirms months of rumors. Many rumors point to an upcoming release of a dual chip version called the "GTX 490". While there's no word from NVIDIA on whether this is indeed the official title, we can at least take the dual core Fermi off the list of mythical monsters, leaving behind dragons, sasquatches, and, of course, the ever-popular Kraken.


[Submitted by Christo [PCD]]



Saturday 23 January 2010
Hack Brings Multitouch to Nexus One Browser
MaTiCa, Saturday 23 January 2010 - 21:23:18

Google’s Nexus One phone has gained kudos for its vivid OLED screen and slim design. But the lack of multitouch support for its gorgeous display has left some users frustrated.

Now there’s a hack for it. A developer has modified the Android 2.1 operating system running on the Nexus One to enable multitouch on the device. Though for now it enables the feature only in the Nexus One browser, it is likely to soon make its way into other applications, such as maps.

Earlier this month, Google launched the Nexus One as the first smartphone to be sold by the search company itself, rather than by a manufacturing or carrier partner. The Nexus One is designed by HTC and is currently available on T-Mobile’s network for $180 with a two-year contract. An unsubsidized version of the phone costs $530.

But the lack of multitouch on the Nexus One has left many users puzzled. Nexus One has a touchscreen but users can only tap on it with one finger. So none of the two-finger pinch-and-zoom gestures that are popular among iPhone users are available. Google has said it will consider adding the feature in future updates.

The Android community, though, isn’t holding its breath. Steve Kondik, a developer who goes by the nickname Cyanogen, has published files and instructions for adding multitouch to the device.

“You will initially lose your bookmarks and browser settings by doing this,” he warns. Hacking the phone could also void its warranty.

But as this video shows, getting multitouch in the Nexus One browser could just be worth it.

SOURCE: Wired.com



Thursday 01 October 2009
Nvidia unveils Fermi GPU Architecture
MaTiCa, Thursday 01 October 2009 - 12:44:47

Nvidia has unveiled its next-generation computational graphics processor architecture, codenamed Fermi. Jen-Hsun Huang, Nvidia's CEO, stated that the Fermi architecture represents the foundation for the world's first computational graphics processors across Nvidia's GPU families -- GeForce, Quadro, and Tesla. The Fermi architecture incorporates three billion transistors and 512 CUDA (Compute Unified Device Architecture) cores to deliver supercomputing features.

Nvidia's next-generation, 40nm-process Fermi architecture also goes by the codenames GT300 and GF100, and succeeds the GT200 GPU architecture. GT stands for GeForce Tesla, while GF is presumed to be GeForce Fermi. With the Fermi architecture, Nvidia confirms its interest in the high-performance computing, GPU stream computing and supercomputing business.

Huang said, "The Fermi architecture, the integrated tools, libraries and engines are the direct results of the insights we have gained from working with thousands of CUDA developers around the world. We will look back in the coming years and see that Fermi started the new GPU industry."

The Fermi-based GT300 GPU has three billion transistors and 512 shader processing cores, organized as 16 streaming multiprocessors of 32 cores each. For memory, the GT300 GPU will have six 64-bit GDDR5 memory controllers, giving a 384-bit memory bus and supporting up to 6GB of video memory. The GPU has 768KB of L2 cache, which acts as a common bridge between the 16 streaming multiprocessors.
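As a sanity check, the core and bus figures above can be reproduced with a little arithmetic. The script below uses only the numbers quoted in the article; the GDDR5 data rate passed to the bandwidth helper is a hypothetical placeholder (the article gives no memory clock), included just to show how peak bandwidth would follow from the 384-bit bus.

```python
# Back-of-envelope check of the GT300 figures quoted above.
# SM count, cores per SM, and memory channel widths come from the article;
# the GDDR5 effective data rate is a hypothetical example, not a spec.

SM_COUNT = 16          # streaming multiprocessors
CORES_PER_SM = 32      # CUDA cores per SM
CHANNELS = 6           # GDDR5 memory controllers
CHANNEL_WIDTH = 64     # bits per controller

cuda_cores = SM_COUNT * CORES_PER_SM   # 512 cores, as stated
bus_width = CHANNELS * CHANNEL_WIDTH   # 384-bit bus, as stated

def peak_bandwidth_gbs(data_rate_gtps):
    """Peak memory bandwidth in GB/s: (bus bytes per transfer) x (GT/s)."""
    return bus_width / 8 * data_rate_gtps

print(cuda_cores, bus_width)
print(peak_bandwidth_gbs(4.0))  # e.g. at 4 GT/s effective -> 192.0 GB/s
```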

Nvidia remains quiet about the power requirements of this stream-processing monster, and is probably still working to make it less power-hungry. Being built on a 40nm manufacturing process, the GPU should be cheaper to produce, and we can expect it to consume about as much energy as, or a little more than, GT200 chips.

Touted as the world's first computational GPU, the Fermi architecture is designed to support current programming tools in Fortran, C, and C++, so that optimal performance can be extracted in parallel computing tasks. And with Nexus, Nvidia brings a GPU computing application development environment to Visual Studio 2008.


Nvidia also took a big leap ahead by adding memory technologies such as ECC support.

The new technologies that arrive in the Fermi architecture are:

* C++, complementing existing support for C, Fortran, Java, Python, OpenCL and DirectCompute
* ECC, a critical requirement for datacenters and supercomputing centers deploying GPUs on a large scale
* 512 CUDA cores featuring the new IEEE 754-2008 floating-point standard, surpassing even the most advanced CPUs
* 8x the peak double-precision arithmetic performance of NVIDIA's last-generation GPU. Double precision is critical for high-performance computing (HPC) applications such as linear algebra, numerical simulation, and quantum chemistry
* NVIDIA Parallel DataCache -- the world's first true cache hierarchy in a GPU, which speeds up algorithms such as physics solvers, raytracing, and sparse matrix multiplication, where data addresses are not known beforehand
* NVIDIA GigaThread Engine with support for concurrent kernel execution, where different kernels of the same application context can execute on the GPU at the same time (e.g. PhysX fluid and rigid body solvers)
* Nexus, the world's first fully integrated heterogeneous computing application development environment within Microsoft Visual Studio

[Die shot]


Nvidia didn't share any Fermi performance figures, so as to leave its existing graphics card business unaffected. The question that arises here is how it will compete with AMD's freshly announced, DirectX 11-supporting Radeon HD 5800 series graphics cards. However, Huang emphasized that Nvidia has little interest in replacing the CPU and is more focused on advancing the computing industry.

Huang said, "No one likes it when competition has a product out in the marketplace. But we have a different vision and it is not only about market share. Of course, we don't want to keep our fans and enthusiasts waiting for the next generation GPU. But we want to take the industry forward and won't change the way we are doing things even by a little iota. We want to make sure the product is ready when it ships in a few months. And it will be - modular, flexible and powerful."

[Chart]

We aren't sure when Fermi-based consumer graphics cards will come out. Though the company refused to comment on availability, GT300-based Tesla cards are said to be due in the first quarter of 2010. On another note, a Wikipedia entry about the GeForce 300 series lists some GeForce 300 models, but it contains no authoritative information, so we count it as speculation.

SOURCE: techtree.com



Apple and Intel develop 10 Gbps optical data technology - www.mybroadband.co.za
MaTiCa, Thursday 01 October 2009 - 11:30:33

From www.mybroadband.co.za

Apple intends to offer an interoperable connectivity standard that handles all major input/output on a single port

At the Intel Developer Forum held last week, a hot topic was Intel’s unveiling of its Light Peak optical cable technology. The thin optical cable can transfer data at speeds of 10Gb/s, and Intel claims it will deliver speeds of 100Gb/s within 10 years. The data can be transferred over a cable of up to 100 meters in length.

Aside from the technology unveiling, those attending the event couldn’t help but notice that Intel was demonstrating the product on a “hackintosh” – an unbranded PC running a patched Mac OS.

Thanks to Engadget, it emerged over the weekend that there was an explanation for this. Apple had approached Intel back in 2007 to create a single interoperable standard which would “replace the multitudinous connector types with a single connector (FireWire, USB, Display interface).”

Apple plans to introduce Light Peak as a new standard for its systems around the American fall of 2010 (South African spring). There are plans to follow up with a low-power variation in 2011, aimed at handhelds and cell phones. A single universal port would be extremely useful in small devices, such as the anticipated Mac tablet PC.

If the timing for the introduction of Light Peak holds true, it will be in direct competition with USB 3.0. The ability to offer 10Gb/s, against the roughly 3.2Gb/s of effective throughput cited for USB 3.0, raises the question of whether Apple will simply skip USB 3.0 in favour of Light Peak.
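To put those two figures in perspective, here is a rough transfer-time comparison. The 25GB file size is an arbitrary example of our own choosing, and the two rates are simply the ones quoted above; real-world throughput would of course be lower on both interconnects.

```python
# Rough transfer-time comparison for the two interconnects discussed above.
# Rates are the article's figures: 10 Gb/s for Light Peak, ~3.2 Gb/s
# effective for USB 3.0. The file size is an illustrative example.

def transfer_seconds(size_gb, rate_gbps):
    """Seconds to move size_gb gigabytes at rate_gbps gigabits per second."""
    return size_gb * 8 / rate_gbps

FILE_GB = 25  # roughly a Blu-ray sized file

light_peak = transfer_seconds(FILE_GB, 10.0)  # 20.0 s
usb3 = transfer_seconds(FILE_GB, 3.2)         # 62.5 s
print(f"Light Peak: {light_peak:.1f}s, USB 3.0: {usb3:.1f}s")
```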

[Submitted by Enigma_2k4]




