Nvidia GeForce RTX 3080: A Disappointing Hype for Gamers and Designers

The RTX 3080 created a lot of hype in the consumer market, pumping adrenaline into the blood of gamers, reviewers, and content creators alike, and it came as a near shock to owners of the RTX 2080 Ti, which is, of course, a generation old now.

The release of detailed benchmarks for the Nvidia GeForce RTX 3080, however, gave RTX 2080 Ti owners a sigh of relief: the hype got busted and ended up looking like a marketing strategy by Nvidia. So the big question is whether the RTX 3080 is a worthy improvement over the RTX 2080 and RTX 2080 Ti, or just a minor upgrade.

Nvidia GeForce RTX 3080: Quick Look

GeForce RTX 3080

Rated by OrbitGadget: 4.8 out of 5

  • CUDA cores: 8,704
  • Boost clock: 1.71 GHz
  • Memory size: 10 GB
  • Memory type: GDDR6X

The GeForce RTX 3080 has definitely got a boost over the RTX 2080 and RTX 2080 Ti. Nvidia has brought its latest memory technology and cores to the RTX 3080, yielding a climb of around 25 to 30% in practical performance. Support for higher-resolution displays has also kicked in and made things interesting.

Pros and Cons

Pros

  • Good performance boost.
  • Competitive pricing.
  • Double the CUDA cores.
  • Efficient GDDR6X memory.

Cons

  • Performance did not match the claims.
  • Not optimized for 3D modelling and rendering.

What’s New in RTX 3080?

  • Ampere-based architecture instead of Turing.
  • Samsung 8nm process instead of the older 12nm.
  • GDDR6X instead of GDDR6 memory.
  • More competitive pricing, starting at ₹71,000.

What is Ampere Technology?

Ampere is the codename for a graphics processing unit (GPU) microarchitecture developed by Nvidia as the successor to both the Volta and Turing architectures, officially announced on May 14, 2020. It is named after the French mathematician and physicist André-Marie Ampère. Nvidia implemented the Ampere architecture in the RTX 30 series of GPUs.

Claims of Nvidia

Nvidia claims that the heart of the world's highest-performing, elastic data centers is now in its GPUs, which promises a ground-breaking price-to-performance ratio. The new-generation cores were supposed to not only excite gamers but also put them ahead of "Team Red", the AMD faithful, in game rendering, which in turn would attract content creators. These are the claims on which Nvidia built a heap of hype, and on which it ultimately fell flat.

A. Samsung’s 8nm Process

Samsung's 8nm Technology
Image Credit: Nvidia

You might be wondering what the 8nm process means. It refers to the process node on which the cores of a processor or GPU are fabricated. The smaller the node and the die, the higher the transistor density and the better the performance, while power consumption drops at the same time. Nvidia has collaborated with Samsung Electronics to produce its Ampere-based cores on Samsung's 8nm process, an interesting move in answer to AMD's GPU market.

B. Latest Generation Tensor Cores

RTX 3080 Tensor Cores
Image Credit: Nvidia

First introduced in the Nvidia Volta™ architecture, Nvidia Tensor Core technology has brought dramatic speedups to AI, cutting training times from weeks to hours and providing massive acceleration to inference. Nvidia's Ampere architecture builds on these innovations with new precisions, Tensor Float 32 (TF32) and Floating Point 64 (FP64), to accelerate and simplify AI adoption and extend the power of Tensor Cores to HPC.

TF32 works just like FP32 while delivering speedups of up to 20X for AI without requiring any code change. With Nvidia's Automatic Mixed Precision, researchers can gain an additional 2X from FP16 by adding just a couple of lines of code. And with support for bfloat16, INT8, and INT4, the Tensor Cores in Nvidia's A100 Tensor Core GPUs make an incredibly versatile accelerator for both AI training and inference. Bringing the power of Tensor Cores to HPC, the A100 also enables matrix operations in full, IEEE-certified FP64 precision.
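The reason TF32 "works just like FP32" is its format: it keeps FP32's 8-bit exponent (so the numeric range is unchanged) but stores only 10 explicit mantissa bits, roughly FP16's precision. As an illustrative sketch (not Nvidia code; the helper name is ours), TF32-style rounding can be simulated in Python by rounding away the low 13 mantissa bits of a float32:

```python
import struct

def tf32_round(x: float) -> float:
    """Simulate TF32 precision: keep FP32's 8-bit exponent,
    round the 23-bit mantissa down to 10 explicit bits."""
    # Reinterpret the value as raw float32 bits.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    # Round to nearest: add half of the dropped range,
    # then clear the low 13 mantissa bits.
    bits = (bits + (1 << 12)) & ~((1 << 13) - 1)
    return struct.unpack("<f", struct.pack("<I", bits))[0]

# Values expressible in 10 mantissa bits pass through unchanged;
# anything finer is rounded to the nearest TF32 value.
print(tf32_round(1.0))         # exact
print(tf32_round(3.14159265))  # rounded to reduced precision
```

This is why no code change is needed: tensors stay in FP32 format in memory, and the hardware simply ignores the extra mantissa bits during matrix multiplies.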

Benchmarks

A. Gaming

Gaming Benchmarks

When it comes to gaming, gamers won't compromise on raw performance, and the RTX 3080 doesn't disappoint at all. The higher the framerate, the better the experience becomes, ultimately leading to an enjoyable gaming setup, and the RTX 3080 is simply a beast in raw performance and fps delivery. As the latest-generation flagship from Nvidia (which already held the crown last year with the RTX 2080 Ti), Nvidia is currently competing with its own technology, and being several steps ahead of the competition it also outclasses rival AMD GPUs by a huge margin.

B. Content Creation

Content Creation Benchmarks

The only sector in which AMD GPUs were ahead of Nvidia was content creation: with their power-efficient GPUs, AMD found its way into the MacBook and gradually became a favourite of dedicated content creators. With the introduction of Nvidia's Ampere technology, that legacy is not going to last long, as Ampere works well in areas beyond gaming too. The graph above shows the performance of the RTX 3080 against the older RTX 2080 Ti and RTX 2080, which were already top-notch GPUs, and the RTX 3080 easily beats them both on every count.

C. 3D Modelling and Rendering

3D Modelling Benchmarks

In gaming and content creation, the RTX 3080 proved to be an excellent option with the latest Ampere technology. But its performance in 3D design and engineering applications like Maya® and Creo® seems unoptimized, lagging behind the older RTX 2080 Ti by a considerable margin in some cases.

The chart above shows the performance of the latest GPU in some major 3D design and modelling applications. The results were measured using SPECviewperf®, which measures the 3D rendering performance of systems running under OpenGL. It calculates performance across parameters like 3D primitives, lighting, alpha blending, fogging, anti-aliasing, and depth buffering for accurate results.

D. V-Ray

V Ray

V-Ray GPU is a separate render engine that utilizes GPU hardware acceleration. It reflects the raw rendering power of the GPU across various texture-rendering settings and tweaks. If you plan to build a workstation focused on 3D modelling and rendering, checking V-Ray scores and benchmarks for your CPU and GPU beforehand can save you a lot of trouble. The RTX 3080 got back on track in the V-Ray benchmarks and outperformed all of the last-generation GPUs by a considerable margin.

Is it worth considering RTX 3080?

Well, the answer differs from user to user. If you are considering an upgrade from the GTX series to the RTX series, the RTX 3080 can easily satisfy your needs in gaming and content creation. But for 3D modelling and rendering, the RTX 2080 Ti is still the winner with its better-optimized output, and similar performance is expected from the RTX 3090.

Now, if you are already sitting on an RTX 2080 or RTX 2080 Ti, another upgrade to the RTX 3080 seems illogical, since Nvidia fell flat on its claim of doubling performance with the RTX 3080. Unless you are simply too excited about the RTX 3080, thanks to Nvidia's claims, you may as well wait for an upcoming generation of GPUs from Nvidia.

Conclusion

We definitely saw a big leap in performance with the RTX 3080, which is expected of Nvidia's latest technology. What matters, though, is the real jump over the last generation. Contrary to Nvidia's claims, performance did not double with the upgrade; it rose by around 20 to 25%. The huge boosts are limited to content creation, where the Blender performance of the RTX 3080 is excellent.

As we mentioned earlier, an upgrade from the GTX series to the RTX 3080 is worth considering because of the latter's aggressive pricing. The inclusion of HDMI 2.1 support also makes the RTX 3080 capable of handling 4K at 120 fps on paper, but the question is: will it be able to handle games at 4K with constant 120 fps delivery?

The answer is a bit mixed. For some AAA titles with simpler, lighter graphics it can perform quite efficiently even at 4K, but in most resource-hungry games it struggles to deliver even 60 fps at 4K settings.
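A back-of-the-envelope calculation shows why HDMI 2.1 matters for 4K at 120 fps at all: the uncompressed signal exceeds HDMI 2.0's 18 Gbit/s limit but fits within HDMI 2.1's 48 Gbit/s. The sketch below is an estimate under an assumed ~33% blanking overhead, not exact HDMI timing math:

```python
def data_rate_gbps(width, height, fps,
                   bits_per_channel=8, channels=3, blanking=1.33):
    """Approximate uncompressed video data rate in Gbit/s.

    `blanking` (~1.33) is an assumed overhead for horizontal and
    vertical blanking intervals, not an exact HDMI timing figure.
    """
    return width * height * fps * bits_per_channel * channels * blanking / 1e9

rate = data_rate_gbps(3840, 2160, 120)  # 4K at 120 fps, 8-bit RGB
print(f"{rate:.1f} Gbit/s")  # above HDMI 2.0's 18, below HDMI 2.1's 48
```

So the cable and port can carry the signal; whether the GPU can render 120 frames per second at 4K is the separate question the benchmarks answer.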

Hope you liked our review of the recently launched RTX 3080. You might also like our guides to the best laptops for content creation and the best PC builds for gaming.

If you have any queries or suggestions for us, let us know through the comments section below. You can also reach out to us on Facebook, Twitter, and Instagram; we are quite active on social media.

2 thoughts on “Nvidia GeForce RTX 3080: A Disappointing Hype for Gamers and Designers”

  1. Yes, because people are buying things like GPUs with a reason :) LoL. This is a luxury product, a toy for entertainment. Do we ‘need’ a 4K 60″ smart TV over a 42″ FHD one? Of course we don’t. But we are buying it anyway. It makes us feel better, even if we are watching the same program with the same emotions. Similarly, people can play any game in FHD on a 4-year-old GPU, or play 1440p on the 2x series. But that’s not the point. The 3x series is new, desirable, and we need to feel better (many people are depressed by the pandemic situation).

    1. Hey Robert!
      It’s a pleasure to see you feel happy for others (very few feel the same). Every product has its own variety of use cases; some might use GPUs and RGB to put on a show with their expensive gaming rig, but professionals, including content creators, might actually need the horsepower GPUs can provide. Well, technology is meant to develop, and as you said, we should be happy taking a step further; shortcomings have prevailed since the beginning, and that definitely shouldn’t be a barrier to progress.
