Nvidia GeForce GTX 560 Ti: a hero of our time

If you've been following the recent history of 3D graphics hardware, you know that the GeForce GTX 480 wasn't born easily. But the GeForce GTX 580 and GTX 570 graphics cards, based on the revised GF110 chip, perform very well in both synthetic and gaming benchmarks.

It was clear, however, that Nvidia would not stop there in replacing the GeForce 400 series with the GeForce 500 series. The GeForce GTX 460 was the next candidate for replacement. Note that products based on the GF104 turned out more successful than the older GeForce 400 series models with the GF100 chip: the GF104 is simpler and cheaper to manufacture, and it can filter FP16 textures at full speed.

As a result, the GF104-based GeForce GTX 460 cards, equipped with 768 MB or 1 GB of onboard memory, happily rested on their laurels without any real competition in the $199-229 category. The numerous versions with factory overclocking clearly indicated the high potential of the GF104 chip, which, by the way, held one secret: only seven of its multiprocessor clusters are active, although it physically contains eight. That is, the GF104 exposes 336 active ALUs and 56 TMUs out of the 384 ALUs and 64 TMUs physically present.

The GF104 does not suffer from the GF100's issues, but Nvidia had to bring the new chip to market as soon as possible and ensure high yields, which meant shipping it in a simplified configuration. Many reviewers expected a card based on the full-featured GF104 to appear eventually, but that never happened during the lifecycle of the GeForce 400 series.

On January 25, Nvidia unveiled the successor to the GF104 along with a new mainstream card. This is the GF114 chip. Nvidia puts it in the "Hunter" class of affordable but high-performance solutions, but we'll try to draw an analogy with tanks. The flagship models priced above $250 can be called heavy tanks with the most powerful weapons, which, however, cannot achieve overall victory due to their limited numbers. It is the numerous production vehicles that win the battle, because they combine simplicity with acceptable technical parameters.

The GeForce GTX 560 Ti is Nvidia's new main battle tank. The return to suffixes in video card names came as a surprise to many. "Ti" obviously means titanium, implying excellent consumer properties of the new product, but the use of prefixes and suffixes takes us back to the distant year 2001, when the Ti suffix first appeared on GeForce2 cards. We think the name "GeForce GTX 560" would have been enough.


Architecture

As with the GeForce GTX 580 and 570 cards, the GPU structure has not changed. The GF114 is essentially a GF104 chip optimized for high clock speeds at low power consumption.

The two computing clusters include four multiprocessor units each. Each unit consists of 48 streaming cores, for a total of 384, and serves eight texture-processing units, so the new chip contains 64 TMUs in total. The TMU architecture has remained untouched since the GF104: FP16 filtering runs at full speed, while FP32 textures are filtered at one quarter of full speed. As in the GF104, the L2 cache is 512 KB.
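
To make the unit arithmetic explicit, here is a minimal sketch using only the counts quoted above:

```python
# GF114 unit counts as described above (2 clusters x 4 multiprocessors).
clusters = 2          # computing clusters
mpus_per_cluster = 4  # multiprocessor units per cluster
alus_per_mpu = 48     # streaming cores (ALUs) per multiprocessor
tmus_per_mpu = 8      # texture units per multiprocessor

total_mpus = clusters * mpus_per_cluster   # 8
total_alus = total_mpus * alus_per_mpu     # 384 streaming cores
total_tmus = total_mpus * tmus_per_mpu     # 64 TMUs
print(total_mpus, total_alus, total_tmus)  # 8 384 64
```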

Each multiprocessor unit also includes a PolyMorph engine, making the GF114 superior to any AMD solution in geometry handling and tessellation performance. Even the two third-generation tessellation units in AMD's Cayman processor can hardly match the eight PolyMorph engines of the GF114. The rasterization subsystem of the new chip is unchanged at 32 units. It is tied to the memory subsystem, which still includes four 64-bit controllers, so the bus connecting the GPU to its graphics memory is 256 bits wide.

There are no notable innovations in the multimedia department, but they are hardly necessary. The GF104 could already do everything modern users demand, offering hardware decoding of high-definition H.264 and VC-1 video and protected bitstreaming of multi-channel Dolby TrueHD and DTS-HD Master Audio tracks. AMD boasts one advantage in this area: its solutions support hardware DivX decoding.

Overall, the GF114 looks like a well-balanced solution with sufficient functionality. It supports all modern visual and multimedia technologies, but does not have the rarely demanded ability to connect six monitors at the same time, for example. Nvidia's new GPU takes the ideas of its predecessor GF104 to the next level. Let's now take a look at how the GF114 based video card is positioned among its predecessors, rivals and older cousins.

Positioning

Just as the GF110 is a "patched" version of the GF100, the new GF114 is what the GF104 was meant to be from the start. Nvidia has managed to make the new GPU stable in its full configuration with eight active multiprocessor units carrying a total of 384 ALUs and running at higher frequencies than the GF104.

Thus, the new GeForce GTX 560 Ti compares to the GeForce GTX 460 1GB in the same way as the GeForce GTX 580/570 compare to the GeForce GTX 480/470. There is nothing wrong with the new product being developed from an old GPU: the Fermi architecture is good enough, except for its texture-processing subsystem, which is weaker than in AMD's solutions.

Thus, we see that the GPU frequencies have increased: over 800 MHz for the main domain and over 1.6 GHz for the shader domain. This is an achievement for Nvidia given the architectural features of its solution: until now, such frequencies were reached only by individual factory-overclocked GeForce GTX 460 samples. AMD has something to worry about, because even the main domain of the GF114 runs at a higher clock speed than the entire GPU of the Radeon HD 6950. In addition, the latter has only 352 VLIW4 processors, against the 384 scalar ALUs in the GF114.

AMD's graphics architecture does not perform at its best in all cases. The new Cayman incarnation also lacks dedicated ALUs for complex instructions, so such instructions must be executed by four simple ALUs working together, which is not efficient. Thus, in some cases the Radeon HD 6950 will be much slower than a GeForce GTX 560 Ti with its 384 stream processors running at over 1600 MHz. As for the Radeon HD 6870, it is inferior to the GeForce GTX 560 Ti in every parameter, including the performance of the texture-processing units.

The GeForce GTX 560 Ti's memory specs have improved over its predecessor's, but the peak bandwidth of 128 GB/s is not that impressive next to AMD's Cayman and even Barts solutions. It is unclear why Nvidia did not raise the memory frequency to at least 1125 (4500) MHz. The GF110 solutions are excused by their wide buses: it is harder to ensure stable operation of memory chips at high frequencies on a 320- or 384-bit bus. The GF114, however, has a 256-bit bus, so the GeForce GTX 560 Ti could well have been equipped with faster GDDR5 chips. Thus, memory bandwidth is not a strong point of the new card: it is inferior in this respect even to the Radeon HD 6870, not to mention the Radeon HD 6950.

The TMU subsystem has ceased to be the bottleneck typical of the Fermi architecture now that the eighth multiprocessor unit is unlocked and the GPU clock has been raised to 822 MHz. With 64 active TMUs, the peak texture-mapping performance is 52.6 gigatexels per second, which is even higher than that of the older GeForce 500 series products. AMD's Cayman with 96 TMUs (88 in the Radeon HD 6950) may perform even better, but this is unlikely to have a radical effect in today's applications. Benchmarks show that the GeForce GTX 580 and GTX 570 do not suffer from low texturing rates, and the new GeForce GTX 560 Ti is even better in this regard.

The rasterization performance of the new card is also high thanks to the increased GPU frequency. The peak fill rate of the new GeForce GTX 560 Ti is slightly lower than that of the Radeon HD 6870, which has a 900 MHz core clock, but higher than that of the architecturally more advanced Radeon HD 6950. Nvidia's new card should not suffer from a lack of rasterization power even at high resolutions with full-screen antialiasing enabled.
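
As a back-of-the-envelope check of the fill-rate figures quoted in the two paragraphs above, here is a small sketch; it assumes the 822 MHz reference core clock and the usual reference clocks for the AMD cards:

```python
# Peak fill rate = number of units x core clock (GHz).
core_ghz = 0.822            # GeForce GTX 560 Ti reference core clock

texel_rate = 64 * core_ghz  # 64 TMUs -> ~52.6 Gtexel/s
pixel_rate = 32 * core_ghz  # 32 raster units -> ~26.3 Gpixel/s

# Radeon HD 6870: 32 units at 900 MHz -> 28.8 Gpixel/s (slightly higher),
# Radeon HD 6950: 32 units at 800 MHz -> 25.6 Gpixel/s (slightly lower).
print(round(texel_rate, 1), round(pixel_rate, 1))  # 52.6 26.3
```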

The rest of the specs remain the same as the predecessor's. Thanks to the optimization of the GF114 chip, its TDP is only 10 W higher than the GF104's. Again, the new card is essentially an improved GeForce GTX 460 1GB with eight active MPUs and increased GPU and memory frequencies. The predecessor is still a good product and a great choice for gamers who don't want to spend more than $250 on a graphics card. The successor has inherited its best features and offers them for only $20 more.

With an MSRP of $249, the new GeForce GTX 560 Ti fits between the Radeon HD 6870 and the Radeon HD 6950 and has a chance to outperform both opponents. Let's now take a look at the new card itself.


PCB design and specifications

The new GeForce GTX 560 Ti is 2 centimeters longer than its predecessor, the reference GeForce GTX 460 1GB. At 23 centimeters long, the card should fit into most system cases without much difficulty.

You may run into problems when installing the card in a short case, because the power connectors are located on the short side of the PCB. This is not an issue with the reference GeForce GTX 460, but the extra couple of centimeters can get in the way here. So make sure the GeForce GTX 560 Ti will fit into your case before purchasing it.

The cooling system is held in place with ordinary screws rather than the Torx T2 screws of the GeForce GTX 570. Hopefully, Nvidia will keep using the more common screw type.

The PCB layout of the new card hasn't changed much compared to its predecessor. One of the memory chips is still located to the left of the GPU while the other seven are to the right. The new PCB seems to borrow a lot from the GeForce GTX 460 design. The power supply uses a 4+1 phase configuration; the extra phase in the GPU voltage regulator is justified by the higher clock rates of the GF114.

The GPU power circuit is still based on ON Semiconductor's NCP5388 controller. The memory voltage regulator is based on a Richtek RT8101 chip located below the main power-supply chokes. Like its predecessor, the GeForce GTX 560 Ti has two 6-pin PCIe 1.0 connectors, each rated for a maximum load of 75 W. The PCB design does not provide for an 8-pin PCIe 2.0 connector.

The new card is equipped with popular Samsung K4G10325FE GDDR5 chips, which can also be seen aboard the GeForce GTX 580 and many other cards. Each chip has a capacity of 1 Gbit (32M x 32), and the HC04 suffix denotes a rated frequency of 1250 (5000) MHz. The card can reduce the memory frequency to 324 (1296) MHz or 135 (540) MHz in its two power-saving modes. The memory chips are connected to the GPU via a 256-bit bus and run at a default 1002 (4008) MHz, providing a peak bandwidth of 128.3 GB/s.
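
For reference, the 128.3 GB/s figure follows directly from the bus width and the effective GDDR5 transfer rate; a minimal sketch:

```python
# GDDR5 bandwidth = (bus width in bytes) x (effective transfer rate).
bus_bits = 256
effective_mhz = 4008             # 1002 MHz clock, 4 transfers per clock

bytes_per_transfer = bus_bits // 8                     # 32 bytes
bandwidth_gbs = bytes_per_transfer * effective_mhz / 1000
print(bandwidth_gbs)             # 128.256, i.e. ~128.3 GB/s
```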

The new chip looks no different from the GF104 and has the same dimensions. According to Nvidia, the die is 360 sq mm, but we cannot verify this because it is covered with a protective heat-spreader. The GPU marking indicates that this sample is of revision A1. It was manufactured in the 46th week of 2010, in mid-November, when the GF114 was already in mass production. The middle number, 400, indicates a high frequency potential, as the core of a GeForce GTX 560 Ti must be able to run at frequencies of 800/1600 MHz and higher.

The latest version of GPU-Z does not yet know all the specifications of the GeForce GTX 560 Ti. It does not report such parameters as manufacturing process, die size, transistor count or production date, and the DirectX support and texture fill rate fields are empty as well. The utility also incorrectly reports that the GPU does not support PhysX. On the other hand, key parameters such as the ALU count and the frequencies are reported correctly.

Like other Nvidia solutions, the new card supports two power-saving modes. When decoding HD video, it lowers the GPU clocks to 405/810 MHz; in standby, the frequencies drop to 51/101 MHz.

The reference GeForce GTX 560 Ti does not offer DisplayPort, although the GPU itself supports this interface. The card has the same connectors as the older models of the series: two DVI-I ports and one HDMI 1.4a port. There is a ventilation grille in the card's mounting bracket to exhaust hot air from the cooler. There is only one MIO connector on board, so you cannot build an SLI configuration with more than two GeForce GTX 560 Ti cards; more advanced SLI systems can be built from the older GeForce 500 series products.

This selection of interfaces is not as rich as on the Radeon HD 6000 series, but few users really need to connect six monitors at once. Most gamers use one, sometimes two, DVI monitors, while large panels are connected via HDMI. Thus, the GeForce GTX 560 Ti offers a reasonable minimum in this respect. Its GPU supports DisplayPort 1.1, so some custom-designed GeForce GTX 560 cards may come with a DisplayPort output.

Unlike the GeForce GTX 580 and 570, the reference GeForce GTX 560 Ti does not use a vapor chamber in its cooling system. The new card has a more classic cooler with a round central heatsink that resembles Intel's boxed cooler. The central section is connected to two additional curved heatsinks by three heat pipes. An axial fan blows air downward and also sideways to cool all the heatsinks; part of the hot air flows to the right, into the system case.

The base of the cooler is quite standard. An aluminum frame serves as a heat-spreader for the memory chips and power components, absorbing their heat through elastic thermal pads. A layer of dense gray thermal paste is applied between the main heatsink and the GPU.

The choice of this cooler is rather strange, especially since Nvidia has developed a very efficient cooling system for the older GeForce 500 series products. The only apparent reason Nvidia avoids a vapor chamber here is that such a cooler would cost too much. Still, this solution may be quite effective. Let's check it out.


Power consumption, noise

Despite the increased frequencies of the GF114 chip, Nvidia claims its TDP has grown only from 160 to 170 watts compared to the predecessor. Thus, we can expect the GeForce GTX 560 Ti to have roughly the same power consumption.

Of course, the additional active units and higher frequencies affect the new card's power consumption: it needs 160 W in 3D applications, whereas the GeForce GTX 460 1GB needs only 140 W. But this is an acceptable price for a significant performance improvement, and the new card is even more economical in desktop mode. When processing video, the card does not drop its frequencies immediately. And when two monitors with different resolutions are connected at the same time, the GeForce GTX 560 Ti will not switch to the 51/101 MHz mode, staying in the 405/810 MHz mode instead.

Interestingly, in normal mode each 6-pin connector bears a load of 5.5 to 5.9 amps, or about 70 watts, so there really is no need for an 8-pin PCIe 2.0 power connector.
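
The wattage follows from the measured current on the 12 V rail; a quick sketch using the current values quoted above:

```python
# Per-connector load on the 12 V rail, from the measured current draw.
rail_v = 12.0
for amps in (5.5, 5.9):
    watts = amps * rail_v  # 66.0 W .. 70.8 W
    print(f"{amps} A -> {watts:.1f} W (6-pin connector limit: 75 W)")
```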

Like the older models of the series, the GeForce GTX 560 Ti can monitor the current on its 12 V power lines. If the load gets too high, as is typical of stress tests like OCCT GPU or FurMark, the GPU is throttled as a protective measure. This feature is optional and can be disabled in custom-designed versions of the GeForce GTX 560 Ti. Such protection is advisable, since the unrealistic load of FurMark can damage a video card.

Overall, the new card from Nvidia is quite competitive in terms of power consumption. Although not nearly as good as the economical Radeon HD 6870, it will be faster in games. The card looks good compared to the Radeon HD 6950. The GeForce GTX 560 Ti continues the good tradition of the power efficiency of the GeForce GTX 580.

To check the card's temperature, we used a second GeForce GTX 560 Ti sample with its original thermal interface intact. At a room temperature of 25-27°C, the GPU temperature was 78°C. This is a very good result that testifies to the cooler's efficiency.

As for noise, the card is about equally loud in 2D and 3D modes, because the fan runs at 40% speed in the former mode and at 45% in the latter, even with Crysis Warhead loading the card. The noise-level meter barely registered any difference at a distance of 5 cm from the testbed. At a distance of 1 meter, the noise was only 1 dB above the 38 dBA produced by the system unit itself. In other words, the video card was inaudible against the other system components.

Summing up this section of the review, we can say that the GeForce GTX 560 Ti has a balanced combination of electrical, thermal and acoustic characteristics.


Nvidia GeForce GTX 560 Ti and HD video

When Nvidia designed its GeForce GTX 500 series GPUs (Fermi 2.0), it aimed to lower power consumption and increase computing performance rather than add new features. As a result, the GeForce GTX 560 Ti (GF114) has the same PureVideo unit as its predecessor.

As you know, the latest version of Nvidia PureVideo supports all modern video formats, including MPEG2-HD, MPEG4, MPEG4-AVC, MPEG4-MVC, VC-1, WMV-HD and Adobe Flash 10.1, as well as bitstreaming of protected lossless audio to an external receiver. Unlike the modern Radeon HD 6800 cards, the new GeForce GTX 560 Ti supports neither hardware DivX/XviD decoding nor MPEG2 entropy decoding, but this is hardly a major issue.

Given its size and power consumption, Nvidia's new card is unlikely to be viewed as an HTPC solution. However, it fits perfectly into the Antec Fusion HTPC chassis, so the GeForce GTX 560 Ti can indeed be used in a system meant for gaming as well as video playback.

Let's see how well the GeForce GTX 560 Ti can play Blu-ray content and how much load it can take off the CPU when decoding high-definition video.

Video playback: testbed and methods

We are going to investigate the decoding and playback performance of the Nvidia GeForce GTX 560 Ti and the other tested cards on the following platform:

Intel Core 2 Duo E8500 CPU (3.16 GHz, 6 MB cache, 1333 MHz PSB);

Gigabyte EG45M-DS2H board (Intel G45 chipset);

OCZ Technology PC2-8500 memory (2x1 GB, 1066 MHz, 5-5-5-15, 2T);

Western Digital hard drive (640 GB, SATA-150, 16 MB buffer);

Antec Fusion 430W case;

Samsung 244T monitor (24", 1920x1200@60Hz maximum resolution);

LG GGC-H20L optical drive (Blu-ray, HD DVD, DVD);

ATI Catalyst 10.9/10.10 drivers for ATI Radeon cards;

Nvidia ForceWare 258.96/260.63/260.99/260.56 drivers for Nvidia GeForce cards;

CyberLink PowerDVD 10;

CyberLink PowerDVD 10 for GeForce GTX 460;

Microsoft Windows Performance Monitor;

Microsoft Windows 7 64-bit.

The following video cards and integrated GPUs took part in our tests:

ATI Radeon HD 6800

ATI Radeon HD 5700

ATI Radeon HD 5600

ATI Radeon HD 5500

Nvidia GeForce GTX 560 Ti

Nvidia GeForce GTX 460

Nvidia GeForce GTS 450

Nvidia GeForce GT 240

We used the following tools to evaluate the quality of video playback at standard (SD) and high (HD) definition:

IDT / Silicon Optix HQV 2.0 DVD

IDT / Silicon Optix HQV 2.0 Blu-ray

Driver settings were left at their defaults, except that, in accordance with HQV requirements, noise reduction and detail enhancement were raised to a medium level (50-60%); this does not affect the performance results.

Since all tests were run under Windows 7 without disabling background services, occasional peaks in CPU usage should not be considered critical. What matters more is the average CPU load during a task. Note that a 1-2% difference does not indicate any advantage of one graphics accelerator over a competitor.

Video playback quality

HQV 2.0 tests are a means of subjectively assessing the quality of certain video processing operations performed by a video card.

HQV 2.0 DVD

Today, few people watch DVD movies on TVs and monitors at the native DVD resolution: most users prefer large screens with Full HD (1920x1080) resolution. Thus, the goal of a video processor is not only to display video content correctly, but also to perform motion correction, noise reduction, detail enhancement, and so on.

As you would expect, the newcomer is on par with the GeForce GTX 460. The slight difference in score can be ignored: some test scenes are reproduced at subjectively lower quality and others at subjectively higher quality. In any case, we cannot recommend watching 480x320 videos on 1920x1080/1920x1200 monitors, while a good DVD will play very well on the GeForce GTX 560 Ti.

HQV 2.0 Blu-ray

As with the HQV 2.0 DVD, the HQV 2.0 Blu-Ray benchmark suite subjectively evaluates the video processor at high resolutions.

As in the previous case, we do not see any serious deviation from the predecessors, which is good news overall. Although the ATI Radeon HD rivals are slightly ahead of the card under review, it hardly follows that Blu-ray movies will play at lower quality on it.

When analyzing HQV test results, keep in mind that the scoring method is highly subjective, so a small difference in the overall scores of different video cards should not be considered critical.

Blu-ray playback

Let's see how well the hardware decoders can offload video processing from the CPU and how much this reduces power consumption.

The GeForce GTX 560 Ti performs even better than its predecessor, which may result from improved drivers as well as the chip's higher clock speed. However, the average CPU load is so low, and so close for the 460 and 560 models, that we can hardly speak of any noticeable difference.

When playing MPEG4-AVC/H.264 video, the newcomer loads the CPU in much the same way as the GeForce GTX 460.

As for MPEG2-HD content, which is almost completely obsolete, the GeForce GTX 560 Ti does a very good job here as well. The slightly higher peak CPU utilization in this case is determined mainly by software rather than hardware.

Summary

Being a minor improvement on the GF104, the GeForce GTX 560 Ti demonstrates similar playback quality and similar video decoding performance. If you already have a GeForce GTX 460 in your HTPC, Nvidia's new solution will bring no benefits beyond better standby power efficiency.

Like its predecessors and competitors, the Nvidia GeForce GTX 560 Ti supports hardware decoding of almost all popular formats, including Blu-ray 3D, bitstreaming of high-definition audio via HDMI 1.4a, and other advanced features. Nvidia's new card does not support hardware DivX decoding, scores generally lower in HQV benchmarks than its competitors, and requires a dedicated Nvidia driver to play movies and games on a stereo-3D HDTV. Nevertheless, the GeForce GTX 560 Ti will be a good choice for a multimedia PC.

Being a fairly fast gaming graphics accelerator, the GeForce GTX 560 Ti draws up to 160 W of power and is quite bulky. Of course, the developer does not position the GeForce GTX 560 Ti as an HTPC solution. Therefore, if you do decide to use it in an HTPC, make sure there is enough room for cooling inside the case to avoid overheating.


Performance in synthetic and semi-synthetic tests

Futuremark 3DMark Vantage

We minimize the CPU impact with the Extreme profile (1920x1200, 4x anti-aliasing and anisotropic filtering).

As you'd expect, the GeForce GTX 560 Ti outperforms the Radeon HD 6870 but falls 1045 points short of the Radeon HD 6950. Still, its score of 9519 points is excellent for the price range Nvidia's new card is positioned in.

Futuremark 3DMark 11

We also use the Extreme profile here; unlike in 3DMark Vantage, this profile runs at 1920x1080.

We see almost the same picture in 3DMark 11, although the actual numbers are different here. The GeForce GTX 560 Ti is not inferior to its main rival in this test, but the Radeon HD 6950 also remains ahead.


Conclusion

The new GeForce GTX 560 Ti did not disappoint: it fully deserves its place in the GeForce 500 family, and we can confidently call it a true "weapon of mass destruction". There is nothing surprising here: with a recommended retail price of only $249, the new GeForce GTX 560 Ti successfully opposes not only its direct competitor, the Radeon HD 6870, but also the more powerful and more expensive Radeon HD 6950.

Indeed, the new GeForce GTX 560 Ti did to the GeForce GTX 470 what the GeForce GTX 570 did to the GeForce GTX 480. At this point it is hard to say what will happen to the GeForce GTX 460 1GB. It may be replaced by a new GF114-based solution with a stripped-down configuration and no "Ti" suffix in the model name, while a GeForce GTX 550 with the GF116 would replace the GeForce GTX 460 768MB.

Taking a closer look at the card's performance, we cannot claim that the GeForce GTX 560 Ti outperforms the Radeon HD 6950 in all tests: the Cayman was originally designed for more expensive solutions. Nevertheless, the results are genuinely impressive.

For example, the GeForce GTX 560 Ti beat its higher-class competitor in 10 of 18 tests at 1600x900 and showed a 20% average performance advantage over the Radeon HD 6870. Although video cards of this class are usually bought for higher resolutions, this is a very good start.

The 1920x1080 resolution is one of the most popular today thanks to the HD format. Here the GeForce GTX 560 Ti has a harder time competing with the Radeon HD 6950, as the latter has faster memory and boasts 88 texture units. However, the newcomer does not give up ground easily: it wins or draws in 8 game tests. The GeForce GTX 560 Ti also outperforms its main competitor, the Radeon HD 6870, by 15% on average, and in some cases runs up to 60% faster!

Cards priced at $250 or less, like the GeForce GTX 560 Ti or Radeon HD 6870, are not typically used at 2560x1600: that resolution is the territory of $300+ cards. So it is only natural that the Radeon HD 6950 outperforms the GeForce GTX 560 Ti in most benchmarks here. Nevertheless, the newcomer retains its lead over the Radeon HD 6870, although its performance advantage shrinks to 12-13%.

Summing it all up, we can conclude that Nvidia's new solution has turned out very successful. With outstanding specs and a price tag just $10 above the AMD Radeon HD 6870's, it is tearing its competitor to shreds. Not surprisingly, AMD's graphics division is seriously concerned about this aggressive move and plans to strike back by lowering the price of the Radeon HD 6870 and releasing a cheaper version of the Radeon HD 6950 equipped with 1GB of local video memory.

Keeping in mind the 160 W power consumption of the new GeForce GTX 560 Ti, it is hard to recommend the card as a home theater PC solution unless you also need high gaming performance. In other words, the GeForce GTX 560 Ti is worth a look for an HTPC only if acoustics and dimensions are secondary concerns for you.

Today, the GeForce GTX 560 Ti has every right to be considered the best gaming graphics accelerator in the under-$250 price range. There is hardly any other choice for those who are not ready to invest in a GeForce GTX 570 or Radeon HD 6970. Time will tell how competitive the upcoming cheaper version of the Radeon HD 6950 will be.

Pros:

Best-in-class performance;

Can compete successfully with Radeon HD 6950 2GB in some benchmarks;

High performance with tessellation enabled;

Wide range of supported full-screen anti-aliasing modes;

Minimal impact of FSAA on performance;

Full HD hardware video decoding;

Support for exclusive NVIDIA PhysX and 3D Vision technologies;

Wide range of GPGPU applications designed specifically for NVIDIA CUDA;

Low noise level;

Highly efficient cooling system.

Cons:

No major flaws were found.
