NVIDIA GeForce GTX 1060 6 GB in 16 Benchmarks: the best video card for gaming and mining

When buying the best graphics card for gaming, here are a few factors to consider. The manufacturer is NVIDIA. The release date is 19 July 2016 (5 years ago). The graphics core runs at 1506 MHz with a boost clock of 1709 MHz. The memory is 6 GB of GDDR5 clocked at 8000 MHz (effective) on a 192-bit bus. The TDP is about 120 W, and the chip is produced on a 16 nm process. We have tested the NVIDIA GeForce GTX 1060 6 GB and present its results in 16 benchmarks.

Also consider the following:
- Is it risky to buy a used graphics card?
- Should I buy a refurbished GPU?
- Is it worth buying two graphics cards?

If you are wondering, "Should I buy an NVIDIA GeForce GTX 1060 6 GB for my computer or gaming laptop?", falling GPU prices mean you finally can. Use our service to choose the right one.

General Information

Here is the position of the NVIDIA GeForce GTX 1060 6 GB in the performance ranking – the lower the value, the higher the place in the ranking – along with useful information such as market segment, code name and launch price.

Place in performance rating: 126
Value for money (0-100): 43.49
Architecture: Pascal
Code name: GP106
Type: Desktop
Release date: 19 July 2016 (5 years ago)
Launch price (MSRP): $299
Price now: $297 (1x MSRP)

Technical Specifications

The technical specs of the NVIDIA GeForce GTX 1060 6 GB indicate its performance in games, mining and professional applications. The higher a value (except for process technology), the faster and more efficient the GPU is.

Pipelines / CUDA cores: 1280
Core clock speed: 1506 MHz
Boost clock speed: 1709 MHz
Number of transistors: 4,400 million
Manufacturing process technology: 16 nm
Thermal design power (TDP): 120 Watt
Texture fill rate: 136.7 GTexel/s
Floating-point performance: 4,375 GFLOPS
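
As a sanity check, the headline throughput figures follow directly from the core count and boost clock. Here is a minimal sketch of that arithmetic; the 80 texture units are an assumption (GP106: 10 SMs with 8 TMUs each) and are not part of the table above.

```python
# Minimal sketch: deriving the headline throughput figures from core specs.
cuda_cores = 1280
boost_clock_ghz = 1.709
texture_units = 80  # assumed (10 SMs x 8 TMUs), not listed in the table above

# FP32 throughput: each CUDA core can retire one FMA (2 FLOPs) per cycle.
gflops = cuda_cores * 2 * boost_clock_ghz
print(f"FP32 performance: {gflops:.0f} GFLOPS")      # ~4375 GFLOPS

# Texture fill rate: one texel per texture unit per cycle.
gtexels = texture_units * boost_clock_ghz
print(f"Texture fill rate: {gtexels:.1f} GTexel/s")  # ~136.7 GTexel/s
```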

Dimensions and Compatibility

If you are fitting the GPU into an existing case, this information may be useful, as some models do not fit even standard E-ATX form factors. Check which interface and power connectors the NVIDIA GeForce GTX 1060 6 GB has.

Interface: PCIe 3.0 x16
Length: 250 mm
Supplementary power connectors: 1x 6-pin

Graphics memory (technical specifications)

Here is the second important block of NVIDIA GeForce GTX 1060 6 GB specs. GPU memory is used for fast processing, storage and calculation of data. The speed and efficiency of the GPU depend on the memory standard (GDDR), its capacity, bus width and clock speed.

Memory type: GDDR5
Maximum RAM amount: 6 GB
Memory bus width: 192 Bit
Memory clock speed: 8000 MHz (effective)
Memory bandwidth: 192.2 GB/s
Shared memory: -
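
The bandwidth figure above follows from the bus width and the effective memory data rate. A minimal sketch of that calculation, assuming the listed 8000 MHz is the effective (quad-pumped) GDDR5 transfer rate:

```python
# Minimal sketch: peak memory bandwidth from bus width and data rate.
bus_width_bits = 192
effective_rate_mtps = 8000  # assumed effective GDDR5 rate in mega-transfers/s

# bytes per transfer * transfers per second
bandwidth_gb_s = bus_width_bits / 8 * effective_rate_mtps / 1000
print(f"Peak bandwidth: {bandwidth_gb_s:.1f} GB/s")  # ~192 GB/s
```

The result lands just under the listed 192.2 GB/s because the exact memory clock is marginally higher than a round 8000 MT/s.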

Port and Display Support

Let’s look at the NVIDIA GeForce GTX 1060 6 GB ports to see which connectors it has and how many displays it can drive.

Display Connectors: 1x DVI, 1x HDMI, 3x DisplayPort
G-SYNC support: +
HDMI: +

Technologies

The technologies the NVIDIA GeForce GTX 1060 6 GB is equipped with may differ from those of competing products, since each vendor ships its own feature set. These features do not affect the overall performance ranking.

VR Ready: +
Multi Monitor: +
CUDA: 6.1
Multi-Projection: +
G-SYNC: +

API support

The NVIDIA GeForce GTX 1060 6 GB supports DirectX, OpenGL, OpenCL, Vulkan and other standard APIs for 3D graphics and compute. The supported versions are listed below.

DirectX: 12 (12_1)
OpenGL: 4.6
Vulkan: 1.2.131
Shader Model: 6.4
OpenCL: 1.2

NVIDIA GeForce GTX 1060 6 GB Testing in Benchmarks

This is our favourite section: testing the NVIDIA GeForce GTX 1060 6 GB in benchmarks. Benchmarks are a powerful comparison tool. You may want to consider the following questions:
- What is meant by benchmarking?
- How do I benchmark the NVIDIA GeForce GTX 1060 6 GB?
- What is a video game benchmark test?
A GPU benchmark is a test that helps you compare the speed, performance and efficiency of a video card.
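
To make the idea concrete, here is a minimal, hypothetical sketch of what any GPU benchmark does under the hood: run a fixed workload for many frames, time each frame, and reduce the timings to comparable scores. The render_frame callable is a stand-in for the real GPU workload, not part of any specific benchmark suite.

```python
import time

def run_benchmark(render_frame, frames=1000):
    """Time a fixed rendering workload and return comparable scores."""
    frame_times = []
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()                      # draw one frame on the GPU
        frame_times.append(time.perf_counter() - start)

    avg_fps = len(frame_times) / sum(frame_times)
    # 1% lows: FPS over the slowest 1% of frames, a common stutter measure.
    worst = sorted(frame_times, reverse=True)[: max(1, frames // 100)]
    low_1pct_fps = len(worst) / sum(worst)
    return avg_fps, low_1pct_fps
```

Real benchmarks such as 3DMark or Unigine Heaven do exactly this with a heavy, standardised rendering scene, so scores from different GPUs are directly comparable.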

GPU Benchmark performance

3DMark Ice Storm GPU

3DMark Ice Storm is a lightweight synthetic GPU benchmark used for evaluating rendering performance. In 3DMark Ice Storm you can check how fast an older-generation GPU still is and decide whether it needs upgrading.

3DMark Cloud Gate GPU

Since the 3DMark Cloud Gate benchmark targets DirectX 10, its results are most suitable for mid-range GPUs or cards released several years ago.

3DMark Fire Strike Score

The "Fire Strike Score" is the overall result of the 3DMark Fire Strike benchmark, which runs the GPU through a fixed set of scenarios under different load conditions. The higher the 3DMark Fire Strike Score, the faster and more capable the GPU.

3DMark Fire Strike Graphics

3DMark Fire Strike Graphics is the GPU-only sub-score of Fire Strike. It reflects results at stock settings without overclocking, which makes it more representative than custom, tuned runs.

3DMark 11 GPU Benchmark

One of the most popular GPU benchmarks to date, 3DMark 11 is a widely recognised reference for evaluating gaming performance. Its tests use tessellation, varied compute scenarios and multithreading.

3DMark Vantage Performance

The 3DMark Vantage Performance benchmark measures GPU performance under multithreaded load, which fully exercises the graphics core and memory. Check the results to see which GPU performs better in this benchmark.

SPECviewperf 12 - Solidworks

The SPECviewperf 12 Solidworks viewset evaluates GPU performance when rendering models such as the Audi R8, Black Owl, Digger, Ferrari, Menjac, SpaceShipCrawler and a supercar. Check the benchmark results to make the right decision.

SPECviewperf 12 - Siemens NX

The SPECviewperf 12 Siemens NX viewset (snx-02) is used for testing GPUs in engineering 3D modelling, such as car engine models. See the results below.

SPECviewperf 12 - Showcase

The SPECviewperf 12 Showcase viewset tests a GPU at rendering models with various shading and shadowing effects.

SPECviewperf 12 - Medical

The SPECviewperf 12 Medical viewset is one of the most demanding GPU tests, as it uses complex 4D medical imaging models. It was developed by the Department of Radiology at Stanford School of Medicine and is well suited to testing modern GPUs.

SPECviewperf 12 - Maya

The SPECviewperf 12 Maya viewset is a versatile tool for measuring GPU power in 3D modelling. It applies a range of visual effects that load both the memory and the graphics processor.

SPECviewperf 12 - Energy

The SPECviewperf 12 viewset released under the code name Energy includes results not only for desktop GPUs but also for mobile GPUs installed in laptops.

SPECviewperf 12 - Creo

The SPECviewperf 12 viewset with the code name Creo tests GPU performance in 3D modelling of cars and aircraft.

SPECviewperf 12 - Catia

The SPECviewperf 12 viewset with the code name Catia tests a GPU in 3D modelling with various visual effects. The rendered objects include an aircraft, an SUV and a car.

Passmark GPU

This is professional GPU testing in PassMark, a widely used benchmark trusted by millions of users around the world. The PassMark score gives a broad, single-number picture of GPU performance in our rating, which you can see in this section.

Unigine Heaven 3.0

Unigine Heaven 3.0 is an earlier, third-generation release of the Heaven benchmark. It loads all GPU subsystems to show real performance under heavy load and is well suited to comparing mobile and desktop GPUs.