The PCH / Chipset / Bridges -- What does a Chipset Do?
Before starting this section, I wanted to present a quick reference list of resources that we've published previously:
Because I've already written several thousand words about this subject, I'm going to let the above links (and the new video) do most of the talking. We'll recap here, though.
The chipset is effectively the spinal cord of the computer. It serves as the center of nearly all transactions and interactions between components, including I/O, some graphics handling, communications, and advanced firmware features through BIOS. We've previously quoted 30-year computer science veteran (and GN photographer) Jim Vincent as saying:
"The chipset is like a spinal cord that controls most of the devices responsible for communicating with the outside world; the CPU can be thought of as a disembodied brain -- it needs the chipset to be fully functional. All of the CPU's I/O goes through channels to the chipset, which then relays or receives information from other vital organs -- video cards, peripherals, disk drives, audio, USB, and so on.
In original PCs, everything used to hang off of one bus (including memory). These days, the computer consists of separated systems. The memory bus (DDR3 channels, of which there are normally more than one in modern systems), the bus to the bridge chips (chipset - northbridge/southbridge, via HyperTransport or QPI), SATA buses, PCI-e (video cards), USB buses, and legacy buses (PS2, RS-232, parallel ports) are all separate entities that communicate via lanes and channels, all feeding back into the CPU to help efficiently organize and manage instructions and interrupt requests."
There've been a lot of terminological changes over the history of the chipset. Presently, Intel refers to its bridge configuration as the PCH (Platform Controller Hub), while AMD still uses the more traditional northbridge and southbridge terminology. Both AMD and Intel have consolidated traditional chipset functions over time, with duties like memory control moving onto the CPU itself.
Chipset selection will directly impact the ability of the system to utilize different features, like overclocking, multi-GPU configurations (via PCI-e lane dedication), and RAID. Both Intel and AMD publish block diagrams that show chipset differences; if you're between chipsets, check the diagrams and see if features you'll actually use are present in one and not the other.
Use our above links for determining the specific differences between modern chipsets.
PCI-e Lanes, Slots, PLX/PEX chips, and General Information
PCI Express is a little more straightforward than some of our other topics. With the death of AGP and the rise of PCI-e about a decade ago, we saw new theoretical maximum bandwidth caps that far exceeded device throughput at the time. Even today, no consumer video card can fully saturate PCIe 3.0 x16 bandwidth.
Tom's has run the x8/x8 vs. x16/x16 test a few times now, and in its testing, the site has discovered a delta of a couple percent (at most) between the two configurations. The short version: don't fret over dual x16 vs. dual x8 video card configurations. Because the theoretical maximum bandwidth is so high, and because device throughput very rarely (if ever) saturates that bandwidth, bottlenecking is almost never a concern -- especially in real-world scenarios, where games aren't optimized to put a GPU under 100% load anyway.
Left is a graphic from our "Video Card Dictionary" that shows the PCI-e 2.0 vs. 3.0 (and x16 vs. x8) interface speeds.
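The interface speeds in that graphic follow directly from each generation's transfer rate and line encoding. As a hedged sketch (the rates and encodings below are published PCI-e specifications; the function name is ours), the per-direction bandwidth for any generation/width combination can be computed like this:

```python
# Approximate per-direction PCI-e bandwidth for a given generation and lane width.
# PCI-e 2.0 runs at 5 GT/s with 8b/10b encoding (80% efficient);
# PCI-e 3.0 runs at 8 GT/s with 128b/130b encoding (~98.5% efficient).
GENERATIONS = {
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
}

def bandwidth_gbs(gen, lanes):
    """Usable bandwidth in GB/s, per direction (GT/s ~ Gbit/s, so divide by 8)."""
    rate, efficiency = GENERATIONS[gen]
    return rate * efficiency * lanes / 8

for gen in ("2.0", "3.0"):
    for lanes in (8, 16):
        print(f"PCI-e {gen} x{lanes}: {bandwidth_gbs(gen, lanes):.2f} GB/s")
```

This reproduces the familiar figures: roughly 8 GB/s for PCIe 2.0 x16 and roughly 15.75 GB/s for PCIe 3.0 x16 -- and shows why a 3.0 x8 link (~7.9 GB/s) performs about like a 2.0 x16 link.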
When connecting your video card devices or attempting to discern the legitimacy of marketing claims, you can actually assess the x16/x8 differences by physically looking at the pins in the PCI-e slot. An x16 slot will have twice as many pins present as an x8 slot (which will have half of the slot filled, half empty). An x4 slot will have a quarter of the pins of an x16 slot, though the physical connector is the same size.
The number of lanes dedicated to PCI-e devices hinges upon both the CPU and chipset. In Haswell CPUs, a number of lanes are dedicated to PCI-e 3.0 straight from the CPU; the chipset (as seen in the above block diagram) also assigns lanes to PCI-e 2.x interfaces. A PCI-e x16 device will consume 16 PCI-e lanes (from either the CPU or the chipset), so if you've selected a CPU/chipset combination that (for purposes of example and ease) only has 16 total lanes, then you'd have fully saturated all available lanes with a video card.
Intel's Haswell CPUs have 16 native PCI-e lanes on-chip; the Z87 chipset offers an additional eight PCI-e 2.x lanes. AMD has a more generous PCI-e lane configuration, with the 990FX offering 38 PCI-e 2.x lanes and the 990X and 970 each offering 22 PCI-e 2.x lanes.
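The lane-budget arithmetic above can be sketched in a few lines. This is an illustrative toy, not how firmware actually allocates lanes; the lane counts come from the article (Haswell CPU + Z87), and the function name is our own:

```python
# Hedged sketch: check whether a set of device lane widths fits a lane budget.
CPU_LANES = 16      # PCI-e 3.0 lanes straight from a Haswell CPU
CHIPSET_LANES = 8   # additional PCI-e 2.x lanes from the Z87 chipset

def fits(device_widths, budget):
    """True if the requested lane widths can all be fed from the budget."""
    return sum(device_widths) <= budget

print(fits([16], CPU_LANES))      # one x16 GPU consumes every CPU lane
print(fits([16, 16], CPU_LANES))  # two full x16 links won't fit...
print(fits([8, 8], CPU_LANES))    # ...so dual-GPU boards drop to x8/x8
```

This is why dual-GPU Haswell boards run x8/x8 rather than x16/x16 unless they add a lane multiplexer, as discussed next.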
All this talk about lanes -- especially the low-count Haswell PCI-e config -- and you might be wondering how some boards can run triple- or quad-GPU arrays. The way this is normally done is with a multiplexer, which effectively switches the upstream lanes between devices to artificially increase usable lane count, at the cost of added latency. High-end motherboards do this with a PEX chip (made by PLX), a special on-board solution that's often located near PCI-e x16_1. If you're trying to "stretch out" your available lanes from the CPU or chipset, it's worth looking into boards that feature some sort of multiplexer, like a PLX-made chip.
If you're interested in further learning on topics relating to motherboards, we'd suggest following these links:
Conclusion - The Anatomy of a Motherboard
This is one of our first serious video attempts at producing more technical content that answers the "... so, how does all this work?" question. Please let us know in the comments below what you thought of the video content! Do you have any suggestions for future content? Questions about how this stuff works? If so, drop a comment on our forums for in-depth support or below for a quick answer.
Just remember that not everyone needs a $180+ motherboard that's spec'd for high OCs. Most builders can get away with a $100-$150 board for a mid-range gaming machine, and can often even drop down lower for budget machines. Be sure to check our PC builds guides for suggestions on motherboards specific to each price range.
Editorial & Video Editing: Steve "Lelldorianx" Burke.
Supporting Research & Writing: Jim Vincent.
Supporting Video Production: Patrick "MoCalcium" Stone.