Crysis Can’t Run Crysis: Remastered
Crytek, which has presumably resumed paying its employees, woke its Crysis Twitter account this week to tease everyone about its Crysis Remastered update. There’s no release date yet (aside from the ominous “soon”), but the game will launch for PC, PlayStation 4, Xbox One, and even the Nintendo Switch, a first for the series.
The game’s temporarily-online webpage, which looks an awful lot like an accidental leak but was probably just orchestrated marketing, claims that the game will move to the modern CryEngine with API-agnostic ray tracing, a technique Crytek has demonstrated previously. Dedicated RT hardware on the GPU still helps, but we’re uncertain at this time how extensively Crysis Remastered will use NVIDIA’s RT hardware.
NVIDIA RTX for Minecraft, DLSS 2.0
The Minecraft RTX beta launched on the Windows Store this week, following up on one of NVIDIA’s biggest name gets since RTX launched. Minecraft ray tracing has existed for a while now, but it’s been done entirely through mods that can’t leverage RT hardware and are generally unoptimized compared to the official Minecraft RTX release. The RTX version will obviously make use of RT hardware on NVIDIA 20-series cards, and with Minecraft on board, NVIDIA has extended the reach and validity of its ray tracing push to a significantly wider audience.
Minecraft with the RTX update will introduce a new PBR (physically-based rendering) system, adding standard material support for properties like roughness. Materials will support reflections, refractions, and global illumination based on their properties and the properties of the lights around them. DXR and RTX will add volumetric effects, like crepuscular rays (often called “god rays”) and fog, the expected reflections on reflective surfaces, and new emissive lighting for lava and glowstone.
This update also brings DLSS 2.0 to Minecraft, which NVIDIA admits is the “fix” for its DLSS 1.0 blunder. DLSS 2.0 uses tensor cores on RTX GPUs and, in NVIDIA’s words, “offers image quality comparable to native resolution while rendering only one quarter to one half of the pixels.” One of the biggest problems with DLSS 1.0 was its per-game deployment, something NVIDIA is trying to fix. The company wrote the following in its blog post:
“The original DLSS required training the AI network for each new game. DLSS 2.0 trains using non-game-specific content, delivering a generalized network that works across games. This means faster game integrations, and ultimately more DLSS games.”
The new version of DLSS now has quality settings, including performance modes meant to offset the performance hit from ray tracing. The two are intended to be used together to make the game more playable with ray tracing enabled.
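The pixel math behind the “one quarter to one half of the pixels” claim is straightforward. A minimal sketch (our arithmetic, not NVIDIA’s SDK; the exact per-mode scales are NVIDIA’s to define) showing why rendering at half the output resolution on each axis yields a quarter of the pixels:

```python
def render_fraction(scale):
    # Fraction of output pixels actually rendered when the internal
    # resolution is `scale` times the output resolution on each axis.
    return scale * scale

# 4K (3840x2160) output rendered internally at 1920x1080 (scale 0.5):
quarter = render_fraction(0.5)          # 0.25, one quarter of the pixels

# Rendering at ~71% per axis produces roughly half the pixels:
half = render_fraction(0.5 ** 0.5)      # ~0.5
```

In other words, a hypothetical performance-leaning mode that halves each axis only shades 25% of the output pixels, leaving the upscaler to reconstruct the rest.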
NVIDIA is doing this by training a neural network on a DGX supercomputer. The network takes images generated by the game engine as input, with an emphasis on motion vectors, which track object movement across frames so the network can project where pixels will move next. NVIDIA writes:
“During the training process, the output image is compared to an offline rendered, ultra-high quality 16K reference image, and the difference is communicated back into the network so that it can continue to learn and improve its results. This process is repeated tens of thousands of times on the supercomputer until the network reliably outputs high quality, high resolution images.”
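The training loop NVIDIA describes (compare the network’s output against a high-quality reference, feed the error back, repeat) can be caricatured in a few lines. This is a deliberately toy sketch of the idea with a single learned gain; it is not NVIDIA’s network, which is a deep model trained on 16K reference images:

```python
def train(low_res, reference, lr=0.05, steps=300):
    """Toy 'upscaler' training: learn a gain so that a 2x nearest-neighbor
    upscale of low_res matches the high-quality reference image."""
    weight = 0.0                                     # model starts knowing nothing
    up = [p for p in low_res for _ in range(2)]      # fixed 2x upsampled input
    for _ in range(steps):
        out = [p * weight for p in up]               # current network output
        # Difference vs. the reference is fed back to improve the model
        # (a hand-rolled gradient step on mean squared error):
        grad = sum((o - r) * p for o, r, p in zip(out, reference, up)) / len(up)
        weight -= lr * grad
    return weight
```

With `low_res = [1, 2, 3]` and a reference that is exactly the input doubled in value and size, the learned gain converges to ~2.0, which is the toy analogue of the network “reliably outputting high quality, high resolution images.”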
HDD Makers Seemingly Hiding SMR Technology
It seems deploying SMR technology in HDDs without disclosing that fact to customers is all the rage with HDD makers these days. For the uninitiated, SMR (Shingled Magnetic Recording) is a recording method that adds density without adding platters: new data tracks partially overlap existing ones, layering them like roof shingles -- hence the ‘shingled’ in the name.
This allows HDD makers to economically increase density without adding platters or altering read/write heads, which matters in the ever-waning HDD market that’s been mostly relegated to cheap bulk storage. However, it comes with a serious caveat for random write performance: modifying a track means every track shingled on top of it must be read and rewritten as well. That makes SMR a poor fit for write-intensive applications. This in itself isn’t a problem; HDD makers not disclosing it to customers is.
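To make the write-amplification caveat concrete, here’s a toy model (our illustration, not real drive firmware, and real drives mitigate this with caching and zone management) where updating one track forces every track shingled on top of it to be rewritten:

```python
def tracks_rewritten(zone_size, track_index):
    """Tracks physically rewritten to update one track in an SMR zone.

    On a conventional (CMR) drive this is always 1. In a shingled zone,
    updating track i also disturbs tracks i+1 .. zone_size-1, so they
    must be rewritten too."""
    return zone_size - track_index

def avg_rewrite_cost(zone_size):
    # Average tracks rewritten for a uniformly random in-place update.
    return sum(tracks_rewritten(zone_size, i) for i in range(zone_size)) / zone_size
```

For a hypothetical 64-track zone, a worst-case update (track 0) rewrites all 64 tracks, an append at the end costs the same single track as CMR, and a random update averages 32.5 tracks -- which is why sequential and WORM workloads are fine while random writes crater.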
For instance, it became apparent (via Reddit and Tom’s Hardware) that Western Digital is currently shipping WD Red drives (aimed at NAS use) that use SMR, without identifying them as such. Users on Reddit have claimed that these drives perform poorly, can’t be configured correctly in ZFS arrays, and are problematic in RAID setups, among other issues.
On the heels of this news, it swiftly became apparent that Toshiba and Seagate engage in the same nebulous practice, thanks to reporting from Chris Mellor at Blocks & Files. Toshiba confirmed to Blocks & Files that its P300 line uses SMR, but doesn’t document that fact in any customer documentation or marketing materials. Seagate likewise confirmed that many HDDs in its Barracuda line use SMR, and again, it’s not made clear to consumers in any documentation.
SMR is fine for WORM (write once, read many) or cold data storage, or anything read-intensive. However, with all (three) HDD makers marketing certain HDDs without disclosing the use of SMR, it’s inevitable that more consumers will unknowingly end up with a drive that’s not ideal for their application. Hopefully this tactic changes, but we’re not holding our breath.
AMD Rolls-Out 2nd-Gen Epyc 7Fx2 Chips
AMD recently trotted out its latest Epyc offerings to bolster its assault on Intel’s server market share. The new Epyc 7Fx2 chips are still Epyc ‘Rome’ derivatives based on Zen 2, but they’re essentially frequency-optimized counterparts to some of AMD’s existing SKUs in the Epyc 7xx2 stack. AMD also claims the Epyc 7Fx2 CPUs represent “the world’s highest per-core performance x86 server CPU.”
The Epyc 7Fx2 line encompasses three SKUs: the Epyc 7F32 (8C/16T), Epyc 7F52 (16C/32T), and Epyc 7F72 (24C/48T). First and foremost, all SKUs leverage a ~500MHz increase to base clocks and significant uplifts in boost clocks. Secondly, AMD has endowed each SKU with a substantial amount of L3 cache: 128MB for the Epyc 7F32, 256MB for the Epyc 7F52, and 192MB for the Epyc 7F72.
Nothing is free, however. The new chips come with higher TDPs, to the tune of 180W - 240W, and with similarly higher prices, as these are essentially binned, cherry-picked chips. The Epyc 7F32, 7F52, and 7F72 will carry 1Ku prices of $2,100, $3,100, and $2,450, respectively.
AMD is positioning the chips as ideal for database, commercial high-performance computing (HPC), and hyperconverged infrastructure workloads. AMD has already seen customers within its ecosystem adopt the Epyc 7Fx2, including big names such as Dell, HPE, IBM Cloud, and Microsoft, among others.
Rumor: PS5 Manufacturing May Be Restricted for First Run
Per Bloomberg’s reporting, Sony may be dialing back production of its upcoming PS5 console, at least initially. We should note that Bloomberg has a checkered record with rumors, so we wouldn’t trust its rumors much more than, say, Digitimes’.
According to Bloomberg, citing sources close to Sony, it seems Sony is concerned that there will be some sticker shock for the PS5. Both Sony and Microsoft are making significant generational leaps in terms of hardware with their respective consoles. We’ve covered both quite a bit over the last couple months, but both consoles will leverage 8C/16T semi-custom AMD SoCs, 16GB of GDDR6 (which we expect still hasn’t reached price parity with GDDR5) and fairly dense solid state storage systems -- none of which are cheap.
Of course, both Sony and Microsoft have touted these specs (Microsoft more so than Sony) without addressing the elephant in the room: price. While nothing concrete has surfaced, it’s been suggested that $499 could be the new normal for consoles, as more potent hardware will facilitate an equally potent price tag. Furthermore, Bloomberg mentions that developers working on PS5 games expect a price point between $499 and $549. It seems Sony is wary of overproducing PS5 consoles in the first run, for fear of slower adoption due to cost. No doubt the current economic climate isn’t conducive to high spending on entertainment devices, which likely also factors into Sony’s decision-making.
More Than 500K Zoom Accounts Surfaced on Dark Web
In today’s episode of Delete Zoom, we’re going to look at how some 500,000 Zoom accounts being aggregated on the dark web underscores Zoom’s laughable security, if that’s what we’re calling it.
Zoom has risen to prominence thanks in part to the current pandemic, and also because Microsoft hasn’t decided if it would rather compete with Snapchat or Slack while letting Skype perpetually rot. According to Zoom CEO Eric S. Yuan, active daily users on Zoom exploded from 10 million in December to 200 million this past March -- highlighting that pre-pandemic, no one knew what Zoom was.
With a surge in its user base came a magnifying glass that put Zoom’s security -- or lack thereof -- under closer examination. Everything from the lack of end-to-end encryption (which Zoom promised and didn’t deliver), to seriously questionable call routing, all the way to the more prominent and repugnant “Zoom-bombing” has arisen in recent weeks. Not to mention, security experts have found multiple flaws that affect both Windows and Mac users.
To Zoom’s credit, the company has scrambled to release patches addressing some of these issues and has vowed to ratchet up security development over the next several months. Not to Zoom’s credit: Zoom-bombing is still plaguing the service. To salt the wound, earlier this week, over 500,000 Zoom accounts surfaced on the dark web, some selling for less than a penny each. Others were being given away for free, a common practice on dark web hacking forums.
Most of the credentials that surfaced were obtained via credential stuffing attacks, whereby attackers replay credentials exposed in past data breaches against other services. Obviously, this is particularly effective when users recycle passwords across sites and services. And while a certain amount of blame lies with users who refuse to use unique passwords or password managers, it isn’t doing anything for Zoom’s image at the moment.
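Credential stuffing itself is trivially simple, which is why password reuse is so dangerous. A minimal sketch with made-up example data (every address and password here is fictional):

```python
# Credentials exposed in a hypothetical old breach of some other service:
leaked = [("user@example.com", "hunter2"),
          ("admin@example.com", "letmein")]

# Accounts on a second, unrelated service:
accounts = {"user@example.com": "hunter2",    # password reused -> vulnerable
            "admin@example.com": "uniquePW"}  # unique password -> safe

def stuffing_hits(leaked_pairs, accounts):
    """Return the accounts an attacker compromises by simply replaying
    leaked email/password pairs against the second service."""
    return [email for email, pw in leaked_pairs if accounts.get(email) == pw]
```

The only account that falls is the one that reused its password, which is why unique passwords (or a password manager) blunt this entire class of attack.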
Per BleepingComputer, Zoom has “hired multiple intelligence firms to find these password dumps and the tools used to create them, as well as a firm that has shut down thousands of websites attempting to trick users into downloading malware or giving up their credentials. We continue to investigate, are locking accounts we have found to be compromised, asking users to change their passwords to something more secure, and are looking at implementing additional technology solutions to bolster our efforts.”
Folding@Home Hits 2.4 exaFLOPS
The meteoric rise of Folding@Home’s compute power continues unabated, following successful recruiting campaigns from not only PCMR and Nvidia, but many tech publications as well. Folding@Home has notched several milestones, such as reaching a compute power besting the top 7 supercomputers combined and eventually crossing the almighty exascale barrier not long after.
Now, Folding@Home is touting a combined 2.4 exaFLOPS of computational power spread across its distributed network of folders. For those counting, that’s more than the top 500 supercomputers combined -- at least in terms of sheer x86 FLOPS. Of course, supercomputers might do certain things better, like the military research for which they’re usually deployed.
However, the heightened interest in Folding@Home’s projects has led to something of a drought in terms of WUs (work units), something Folding@Home has been working to rectify. “More GPU work units are coming for @foldingathome. We've had to shift some of our effort from setting up projects to moving data off servers to make room for more data. It's amazing how quickly the data is coming in! Something like 6TB/hr,” said Greg Bowman, Folding@Home’s Director, via Twitter.
GN’s machines are capable of producing about 7 million PPD (points per day) when full of WUs, although we’ve averaged between 4M and 6M. The GN team is #234771, if you want to join, and we’ve moved into the top 100 teams of all time in short order. Linus’s team is currently ranked #4, and we’re now #81, up from the deep hundreds or thousands previously, whatever the depths were.
Rumor: Ryzen 3 3100 and Ryzen 3 3100X
Everyone’s favorite hardware leaker is back this week, with purported information regarding rumored Ryzen 3 chips. Last year’s Ryzen 3000 family notably never produced any Ryzen 3 chips for the desktop, though if information that momomo_us has gleaned is accurate, AMD could possibly have at least two quad-core SKUs in the works for a Ryzen 3 revival.
Via Twitter, momomo_us has pointed to information showing a 4C/8T Ryzen 3 3100 and Ryzen 3 3100X. Given the naming, we can assume these will be based on 7nm Zen 2 silicon. The Ryzen 3 3100 would allegedly run at 3.9 GHz, while the Ryzen 3 3100X would clock in at 4.3 GHz; the leak didn’t note whether these were base or boost speeds.
The leak further mentions a 65W TDP and a total of 18MB of cache. These chips would no doubt target Intel’s lower-rung i3 SKUs from its looming Comet Lake-S line-up. Per our usual handling of rumors, take this with all due salt.
Editorial: Eric Hamilton, Steve Burke
Video: Josh Svoboda, Keegan Gallick