Cyberpunk Case Mod Contest
CD Projekt Red, makers of Cyberpunk 2077, recently announced that they’re hosting a PC case modding contest themed around Cyberpunk 2077, e-waste, and recyclables. In the early stage of the contest, open until May 17, the Cyberpunk 2077 team is asking for mock-ups to be submitted without any actual hardware case modding; it then plans to select the five best designers and pair them with pro case modders. We’re hoping to put some of our viewer-submitted e-waste and dead CPUs to use and actually build a physical mod of some sort.
According to the rules for the “Cyber Up Your PC” contest, entries will be judged on the following criteria:
1 - Must include recycled material, like old PCBs or heatsinks, as decoration
2 - Must feature a megacorporation or gang logo from Cyberpunk 2077, which the team has made available in an asset pack
3 - Should reflect themes from the game, like the fact that rich and poor live adjacent to one another in the game’s Night City
It sounds like you can use any tools you want for the mock-up, so presumably that could be paper, 3D modeling software, or similar. They’re asking for a front, left, and right image. The prize for the winner will be a new PC, including peripherals.
Follow up: Picometers versus Nanometers
In the interest of addressing a couple points from last week:
We noticed that the research piece proposing the LMC density benchmark to supplement the nebulous node naming scheme got a lot of traction, which is good. However, some readers seemed to miss the point, prompting questions as to whether we realize that a) numbers (read: decimals, which we use every day in content) go below one, and b) picometers exist. Rest assured: we do. As we noted from the LMC research paper we were quoting, the concern isn’t about running out of numbers. Believe it or not, some might say there is an infinite supply of those. The concern was about using a single digit to represent an entire process, and further that marketing becomes a challenge when you have 500 picometers versus 0.5 nanometers, or whatever it may be. It’s pretty sad to see how many people thought they knew better than the world’s leading semiconductor researchers and manufacturers, including TSMC, because they know that measurement prefixes below nano exist, and surely the researchers could have saved all that time and just looked at Wikipedia.
The problem with simply trading nanometers for picometers is that you’re replacing one arbitrary number with another. The underlying point in that research paper is that measuring the minimum gate length of a transistor in silicon is no longer suitable for gauging the true advancements of a chip. Once upon a time, it used to be that simple. Sort of. As simple as giving a pile of sand brains ever was, anyway.
Gate length used to be a fairly honest way to measure a manufacturer’s ability to pack transistors: a 90nm manufacturing process meant that 90 nanometers was the minimum transistor gate length achievable with that process. However, it’s getting harder to shrink transistors and gate lengths in a given die area, and advances -- especially intranode advances -- are further muddying the waters. For instance, TSMC currently has no fewer than three iterations of 7nm: N7, N7P, and N7+. All the while, the number ascribed to a process node can actually be smaller than the minimum gate length it’s meant to represent.
This is because not all companies measure gate length the same way -- another point the paper brought up. It’s why Intel claims its 10nm is comparable to TSMC’s 7nm, and that its 7nm will be competitive with TSMC’s 5nm. Process nodes are simply not standard and the numbers attached to them are not wholly representative of the silicon they produce. Node numbers have become more marketing than anything else.
Which brings up another point: it’s unlikely that any chipmaker wants to advertise its silicon on the basis of a picometer measurement. For years, consumers have been conditioned to think smaller equals better. A picometer is one thousandth of a nanometer; put another way, 10 nanometers is 10,000 picometers. Which sounds smaller?
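The unit arithmetic is easy to verify. Here’s a trivial sketch (the helper function is our own, purely illustrative):

```python
# Sanity-checking the unit math: 1 nanometer = 1,000 picometers.
PM_PER_NM = 1000

def nm_to_pm(nanometers: float) -> float:
    """Convert nanometers to picometers."""
    return nanometers * PM_PER_NM

# The same physical distance, two different-sounding numbers --
# which is exactly the marketing problem described above.
print(nm_to_pm(10))   # 10 nm  -> 10000 pm
print(nm_to_pm(0.5))  # 0.5 nm -> 500.0 pm
```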
All that said, that’s why semiconductor manufacturing needs a holistic density metric, one that can accompany whatever node numbers are being pushed by TSMC, Intel, Samsung, GloFo, et al.
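To illustrate the idea, a density-style metric would be reported as a set of numbers rather than a single node name. This is only a conceptual sketch of an LMC-style (Logic, Memory, Connectivity) readout; the field names, units, and example values below are our own placeholders, not figures from the paper or any foundry:

```python
from dataclasses import dataclass

@dataclass
class DensityTriplet:
    """Sketch of an LMC-style density readout.

    Illustrative units: logic transistor density (MTr/mm^2),
    main-memory bit density (Gbit/mm^2), and chip-to-chip
    interconnect density (thousands/mm^2).
    """
    logic: float
    memory: float
    connectivity: float

    def label(self) -> str:
        # A triplet like "[90.0, 0.3, 12.0]" carries more information
        # than a single marketing number like "7nm".
        return f"[{self.logic}, {self.memory}, {self.connectivity}]"

# Hypothetical process described by densities instead of an "Xnm" name:
node = DensityTriplet(logic=90.0, memory=0.3, connectivity=12.0)
print(node.label())
```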
Rumor: Intel Alder Lake and LGA1700
Now that Comet Lake-S and Comet Lake-H parts have arrived, we’ll need another Intel platform to speculate about for the next year. Enter Alder Lake. And to a lesser extent, Rocket Lake-S.
We know Rocket Lake-S is set to succeed Comet Lake-S and should use the same LGA1200 socket that Comet Lake-S occupies. Rocket Lake is thought to be based on Intel’s Willow Cove core design, the same cores that Tiger Lake uses, and to come with overhauled I/O; that is, PCIe 4.0.
Some Z490 motherboard designs more or less confirm Intel’s future plans to support PCIe 4.0 with Z490. Gigabyte has already confirmed that Rocket Lake-S will drop into its Z490 boards. And while 400-series chipsets may not natively support PCIe 4.0, many boards have the necessary components, such as timers and re-drivers, to facilitate some PCIe 4.0 functionality. We’ll likely have to wait for Intel’s 500-series platform for fully baked-in support.
Sitting further out and succeeding Rocket Lake-S is Alder Lake-S. According to lit-tech, a supplier of Intel’s VRTT software and voltage regulation test tools in the Asia-Pacific region, an LGA1700 interposer exists for Alder Lake-S. Furthermore, according to Twitter sleuth Komachi_Ensaka, the ADL-S LGA1700 socket measures 45mm x 37.5mm. Assuming this is true, an LGA1700 socket would be decidedly more rectangular than the LGA1200 socket, which measures 37.5mm x 37.5mm. That, of course, means not only new boards but also new coolers would be in order.
Ampere Rumors: Ray Tracing Gets Cheaper, GTX Is Dead
With Nvidia suggesting we get “amped” for its upcoming GTC keynote on May 14th, we’re all but certain Nvidia plans to take the wraps off its next-generation GPU architecture, Ampere. While we don’t expect any GeForce-related announcements (we could be surprised, though), we can expect to see what exactly Nvidia has planned for Ampere. With that, the rumor mill hasn’t been idle.
An interesting rumor picked up by PC Gamer suggests that Nvidia could mark a shift in segmentation with Ampere. Nvidia, to a certain extent, segments its consumer and professional products with different architectures: Volta was strictly aimed at professional users, while Pascal and Turing were largely consumer-oriented with GeForce cards, but also featured Quadro and Tesla cards. However, Ampere is rumored to serve as both a consumer and professional architecture.
Additionally, Ampere’s ray tracing performance is alleged to be several times what Turing is capable of, and Ampere is expected to effectively kill off the GTX branding, with Tensor and RT cores being distributed up and down the product stack. Furthermore, Ampere is said to herald the arrival of ray tracing at more economical prices, forgoing the RTX 20-series sticker shock, and Nvidia has also reportedly addressed the infamous “RTX On” performance hit for ray tracing.
Nvidia Splitting Chip Orders Between TSMC And Samsung
In a separate report by Digitimes on Nvidia’s Ampere and Hopper, loosely translated by Twitter user RetiredEngineer, it looks as if Nvidia has slept on AMD and TSMC’s partnership. That partnership has borne a lot of fruit for AMD, and it’s apparently gotten Nvidia’s attention.
First and foremost, it seems Nvidia is splitting orders for Ampere between TSMC and Samsung, which is a sensible move, if true. Digitimes notes that Nvidia will tap TSMC’s 7nm EUV (N7+) process for its high-end SKUs, while funneling the remaining orders to Samsung’s aggressively priced 7nm and 8nm processes. While we don’t know exactly which 7nm process Nvidia would use from Samsung, Samsung’s 8nm LPP/LPU process largely serves as a bridge between 10nm and 7nm and doesn’t use EUV.
Furthermore, Nvidia has supposedly already booked some of TSMC’s 5nm capacity for Hopper, the architecture set to succeed Ampere. While this isn’t confirmed, Digitimes reported that TSMC was expecting strong results for the first half of 2020 due to an influx of orders for its 5nm process from AMD and Nvidia. We expect AMD’s Zen 4-based “Genoa” to be on TSMC’s 5nm, but we can’t be sure what Nvidia has planned for 5nm outside of the rumored Hopper.
It also seems that Nvidia could be trying to outbuy AMD at 5nm, as AMD seems to have become TSMC’s premier partner at 7nm, supposedly leaving Nvidia out in the cold. Rumors were circulating throughout 2019 that Nvidia could be courting Samsung for its 7nm GPUs, a move that could’ve been an attempt to bait TSMC into lowering contract prices, but it seems Nvidia ended up going with TSMC anyway. However, in the interim, AMD seemingly bought up the unclaimed 7nm capacity. AMD was also the first to lay claim to a 7nm GPU, something that no doubt irks Nvidia.
To that end, the report suggests that Nvidia has big plans to compete heavily with AMD in adopting TSMC’s EUV-based 7nm and 5nm nodes.
FCC Forced To Surrender IP Addresses For Fake Net Neutrality Comments
Back in 2017, the FCC invited public comments on its coming gutting of net neutrality. The Ajit Pai-led FCC did this under the guise of caring about what anyone had to say; in reality, it was simply a formality. Pai, Big Telecom’s Goodest Boy, wasn’t going to let net neutrality survive. So, the FCC had its dog and pony show, and things got... interesting.
The FCC was flooded with millions of comments, many in favor of upholding net neutrality regulations. However, many comments, often in favor of dismantling net neutrality, were fake; some even assumed the identities of the deceased. This immediately sparked demands for server logs to investigate the IP addresses tied to the fake comments. The FCC went as far as to claim that the public comment process suffered a massive DDoS attack, which turned out to be a flagrant lie.
The FCC moved on to roll back net neutrality with a 3-2 vote and, in its place, enshrined the Restoring Internet Freedom order. This order removed the Title II utility classification from ISPs and stripped the FCC of any power to police ISPs, instead defaulting to the FTC for regulation -- another problem entirely. The Restoring Internet Freedom framework, named in the most doublespeak way possible, also contains other spectacular horse shit, such as claims that net neutrality killed innovation and investment with heavy-handed regulations, and that the FCC’s new rules would tighten transparency requirements for ISPs.
Thus far, the FCC has dodged having to surrender server logs, but not anymore. After denying a pair of New York Times reporters access to the server logs, the reporters sued the FCC under the Freedom of Information Act -- and won. A judge has ordered that the FCC must now provide the logs. Of course, this move didn’t come without significant pushback from the FCC to keep the logs hidden.
The FCC maintained that handing over the server logs would represent an “unwarranted invasion of personal privacy.” The judge presiding over the case determined that the FCC failed to prove how such a disclosure would harm anyone, and that any hypothetical harm that could be done was outweighed by the value of public information and determining if the FCC’s decision making is “vulnerable to corruption.”
Apple’s T2 Chip Is Building The E-Waste Pile And Jailbreak Community
In 2018, Apple started shipping certain products like the MacBook Air, MacBook Pro, and Mac mini with the controversial T2 security coprocessor, which seemed like a great way to thwart right to repair. Fast forward to today, and that premonition seems to have come true.
In a report by Vice, many recyclers and refurbishers are claiming to be unable to repair or restore MacBooks due to the nature of the T2 chip inside these models in addition to software locks requiring official Apple diagnostic software.
“By default you can't get to recovery mode and wipe the machine without a user password, and you can't boot to an external drive and wipe that way because it's prohibited by default,” John Bumstead, a MacBook refurbisher, told Motherboard. “Because T2 machines have no removable hard drive, and the drive is simply chips on the board, this default setting means that a recycler (or anyone) can’t wipe or reinstall a T2 machine that has default settings unless they have the user password.”
The T2 chip validates any hardware changes on post-repair reboot, and if it detects that proprietary Apple software hasn’t been run, it renders the device inoperable. Bumstead also mentions that Apple’s Device Enrollment Program (DEP), a program that delivers software updates and proprietary company software, is a nightmare for independent repair shops. Without a way to simply wipe drives or make repairs, many MacBooks, some less than two years old, are being scrapped.
“The irony is that I’d like to do the responsible thing and wipe user data from these machines, but Apple won’t let me,” said Bumstead in a tweet. “Literally the only option is to destroy these beautiful $3,000 MacBooks and recover the $12/ea they are worth as scrap.”
This underlines Apple’s resistance to right to repair, and it has given rise to an Apple jailbreak community around tools such as Checkra1n. These tools are based on the “checkm8” exploit, an unpatchable bootrom exploit used to jailbreak iOS devices, and recent releases have been used to demote the T2 chip in Macs.
And while such workarounds are valuable, they shouldn’t be required. As the e-waste pile grows, so too does the need for comprehensive right to repair legislation.
Maybe this could be an idea for the “Cyber Up Your PC” contest -- bolt a bunch of aluminum MacBook shells to the outside of a chassis.
Ryzen Pro 4000 & New Socket AM4 Roadmap
AMD announced its newest line of commercial processors, the Ryzen Pro 4000 lineup. Unlike its Ryzen 4000 mobile chips, the Pro line targets business and enterprise deployments with specific features not present in AMD’s consumer-facing Ryzen offerings.
Ryzen Pro 4000 doubles the core/thread count over the last Pro iteration, with offerings scaling up to 8C/16T. There will be three new SKUs: The Ryzen 7 Pro 4750U (8C/16T), Ryzen 5 Pro 4650U (6C/12T), and the Ryzen 3 Pro 4450U (4C/8T). The ‘U’ suffix denotes a 15W TDP, and all SKUs come with integrated Vega graphics. Ryzen Pro comes with business oriented security features such as Memory Guard, as well as OS and OEM-specific security features like Windows Secured-core PC and Lenovo’s Self-Healing BIOS.
Additionally, AMD shared an updated socket AM4 roadmap. While heavy on marketing and light on relevant details, AMD did divulge that current X570 and B550 chipsets would support Zen 3. This will require a BIOS update, obviously. AMD also noted that it has no plans to backport Zen 3 to any chipset pre-X570 (that is, X470, B450, A320, etc.). AMD has already had issues with maintaining backwards compatibility on AM4 due to storage limitations with BIOS ROM chips on AM4 motherboards. So, not exactly a surprise.
Editorial: Eric Hamilton
Additional Reporting, Host: Steve Burke
Video: Keegan Gallick, Andrew Coleman