This week's news covers a few product launches -- some not coming to the West -- and new tech demos for PCIe Generation 5 and CXL. We also cover Intel's ongoing battles with marketing, rumors that Threadripper 3 will be incompatible with X399, and advancements in reverse-engineering silicon products.

Show notes continue after the embedded video, as always.

We’ve previously found unexciting differences of less than 1% between x16 and x8 PCIe 3.0 arrangements, primarily relying on GTX 1080 Ti GPUs for the testing. There were two things we wanted to overhaul in that test: (1) increase the count of GPUs to at least two, thereby placing greater strain on the PCIe bus (x16/x16 vs. x8/x8), and (2) use more powerful GPUs.

Fortunately, YouTube channel BitsBeTrippin agreed to loan GamersNexus its Titan V, bringing us up to a total of two cards. We’ll be able to leverage these for determining bandwidth limitations in supported applications; unfortunately, as expected, most applications (particularly games) do not support 2x Titan Vs. As a scientific/compute card, the Titan V drops SLI in favor of NVLink, so we must rely on explicit multi-GPU via DirectX 12. This meant that Ashes of the Singularity would support our test, and it also left us with a list of games that might support testing: Hitman, Deus Ex: Mankind Divided, Total War: Warhammer, Civilization VI, and Rise of the Tomb Raider. None of these games saw both Titan V cards, though, so we only really have Ashes to go off of. That means this test isn’t representative of the whole, naturally, but it will give us a good baseline for analysis. Something like GPUPI may further provide a dual-GPU test application.

We also can’t test NVLink, as we don’t have one of the $600 bridges – but our work can be done without a bridge, thanks to explicit multi-GPU in DirectX 12.

It’s time to revisit PCIe bandwidth testing. We’re looking at the point at which a GPU can exceed the bandwidth limitations of a PCIe Gen3 slot, particularly when in x8 mode. This comparison includes dual Titan V testing in x8 and x16 configurations, pushing against the roughly 1GB/s-per-lane limit of PCIe Gen3 slots.
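For context on what that limit means in raw numbers, here is a minimal back-of-envelope sketch (not part of our test methodology) of theoretical one-direction Gen3 throughput, assuming 8 GT/s per lane with 128b/130b encoding and ignoring protocol overhead:

```python
# Theoretical PCIe Gen3 throughput, ignoring packet/protocol overhead.
# Gen3 signals at 8 GT/s per lane and uses 128b/130b line encoding.
GEN3_GT_PER_SEC = 8.0              # gigatransfers per second, per lane
ENCODING_EFFICIENCY = 128 / 130    # 128b/130b line encoding

def gen3_bandwidth_gb_s(lanes: int) -> float:
    """Theoretical one-direction bandwidth in GB/s for a Gen3 link."""
    effective_gbit = GEN3_GT_PER_SEC * ENCODING_EFFICIENCY * lanes
    return effective_gbit / 8      # convert gigabits to gigabytes

for lanes in (8, 16):
    print(f"PCIe 3.0 x{lanes}: {gen3_bandwidth_gb_s(lanes):.2f} GB/s per direction")
# PCIe 3.0 x8:  7.88 GB/s per direction
# PCIe 3.0 x16: 15.75 GB/s per direction
```

That works out to just under 1GB/s per lane, which is where the figure above comes from; halving the lane count halves the theoretical ceiling.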

Testing PCIe x8 vs. x16 lane arrangements can be done a few ways, including: (1) tape off the physical pins on the PCIe foot of the GPU, thereby forcing x8 mode; (2) switch the motherboard PCIe generation to Gen2 for half the bandwidth, potentially introducing new variables; (3) use a motherboard with slots that are physically wired for x8 or x16.

Keeping marketing checked by reality is part of the reason technical media should exist: part of the job is to filter out the adjectives and subjective language for consumers and get to the objective truth. Intel’s initial marketing deck contained a slide suggesting that its new X-series CPUs could run 3-way or 4-way GPUs for 12K Gaming. Those are their exact words: "12K Gaming," supported by orange demarcation for the X-series, whereas it is implicitly not supported (in the slide) on the K-SKU desktop CPUs. Setting aside how uncommon such a resolution would be, "12K" isn’t even a real, standardized resolution. Regardless, we’re using this discussion of Intel’s "12K" claims as an opportunity to benchmark two x8 GPUs on a 7700K against two x16 GPUs on a 7900X, with some tests disabling cores and boosting clocks. Intel has also issued a statement to GamersNexus regarding the marketing language.

First of all, we need to define a few things: Intel’s version of 12K is not what you’d normally expect – in fact, it’s actually fewer pixels than 8K, so the naming is strongly misleading. Let’s break this down.
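As a quick illustration of that pixel math -- assuming "12K" means a triple-4K surround array (3x 3840x2160, i.e. 11520x2160), which is our interpretation rather than an official definition -- the counts work out as follows:

```python
# Pixel-count comparison. "12K" is assumed here to mean a 3x 4K surround
# array (11520x2160); this interpretation is ours, not an official spec.
resolutions = {
    "4K UHD (3840x2160)":          3840 * 2160,
    '"12K" surround (11520x2160)': 11520 * 2160,
    "8K UHD (7680x4320)":          7680 * 4320,
}

for name, pixels in resolutions.items():
    print(f"{name}: {pixels / 1e6:.1f} million pixels")
# 4K UHD (3840x2160):           8.3 million pixels
# "12K" surround (11520x2160): 24.9 million pixels
# 8K UHD (7680x4320):          33.2 million pixels
```

Under that interpretation, the "12K" surround figure lands at roughly 25 million pixels, well short of 8K's 33 million -- which is why the naming reads as misleading.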

In this week’s episode of Ask GN, we go over a few final Ryzen questions prior to the imminent launch and reviews. We also cover some thermal questions, SSD endurance questions, and compatibility basics for PC hardware.

Of course, the looming news item is still Ryzen and its eventual review. The processor ships on March 2, at which point it is safe to assume reviews will be live. We have already posted our coverage of the AMD Ryzen tech day thus far in both video and written formats, if you’d like to get up to speed. Our AM4 chipset comparison is also live over here.

This is a test we've run for just about every generation of PCI Express, and it's worth refreshing now that the newest line of high-end GPUs has hit the market. The curiosity is this: Will a GPU be bottlenecked by PCI-e 3.0 x8, and how much impact does PCI-e 3.0 x16 have on performance?

We decided to test that question for internal research, but ended up putting together a small report for publication.

Over the course of our recent GTX 980 Ti review, we encountered a curious issue with our primary PCI Express port. When a graphics card was connected to the first PCI-e slot, it wouldn't be detected properly and resolution would be limited to lower values. Using one of the other slots bypassed the issue, but that was unacceptable for multi-GPU configurations – something we eventually tested.

The first consumer-priced PCI-e SSDs are finally trickling to market. OCZ's RevoDrive was one of the only consumer-facing PCI-e SSDs, priced out of range for most gamers and facing somewhat widespread endurance and stability issues as the device aged. During a period of SandForce domination, the industry waited for the third-generation refresh of the SF controllers to introduce widespread PCI-e SSDs. The third gen controllers promised what effectively would act as an interface toggle, allowing manufacturers to purchase a single controller supply for all SATA and PCI-e SSDs, then “flip the bit” depending on demand. Such an effort would reduce cost, ultimately passed on to the user. This controller saw unrelenting delays, giving rise to alternatives in the meantime.

Then M.2 became “a thing,” bringing smaller SSDs to notebooks and desktops. The M.2 standard is capable of offering superior throughput to SATA III (6Gbps) by consuming PCI-e lanes. By pushing data through the PCI-e bus, M.2 devices circumvent the on-board SATA controller and its abstraction layers, which are responsible for much of the overhead reflected in peak ~550MB/s speeds. The M.2 interface can operate on a four-lane PCI-e 2.0 configuration to afford a maximum throughput of 2GB/s (before overhead), though – as with all interfaces – this speed is only awarded to capable devices. Each PCI-e 2.0 lane pushes 0.5GB/s (5GT/s before encoding overhead). Some M.2 devices utilize just two PCI-e lanes, restricting themselves to 1GB/s throughput but freeing up the limited count of PCI-e lanes on Haswell CPUs (16 lanes from the CPU, up to 8 lanes from the chipset).
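To put those figures side by side, here's a minimal sketch of the theoretical numbers, assuming 5GT/s PCIe 2.0 lanes and a 6Gbps SATA III link, both with 8b/10b encoding and before any protocol overhead:

```python
# Theoretical interface throughput, before protocol overhead.
# PCIe 2.0: 5 GT/s per lane; SATA III: 6 Gbps; both use 8b/10b encoding.
ENCODING_8B10B = 8 / 10

def pcie2_gb_s(lanes: int) -> float:
    """One-direction PCIe 2.0 bandwidth in GB/s for a given lane count."""
    return 5.0 * ENCODING_8B10B * lanes / 8   # gigabits -> gigabytes

sata3_gb_s = 6.0 * ENCODING_8B10B / 8         # single SATA III link

print(f"SATA III:     {sata3_gb_s:.2f} GB/s")
print(f"PCIe 2.0 x2:  {pcie2_gb_s(2):.2f} GB/s")
print(f"PCIe 2.0 x4:  {pcie2_gb_s(4):.2f} GB/s")
# SATA III:    0.60 GB/s (~550MB/s after real-world overhead)
# PCIe 2.0 x2: 1.00 GB/s
# PCIe 2.0 x4: 2.00 GB/s
```

The gap is the whole appeal of M.2-over-PCIe: even a two-lane Gen2 link roughly doubles what the SATA controller can deliver.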

