Canon Prepares Nanoimprint Lithography Tool To Challenge EUV Scanners Anton Shilov

Canon has recently revealed its FPA-1200NZ2C, a nanoimprint semiconductor manufacturing tool that can be used to make advanced chips. The device uses nanoimprint lithography (NIL) technology as an alternative to photolithography, and can theoretically challenge extreme ultraviolet (EUV) and deep ultraviolet (DUV) lithography tools when it comes to resolution.

Unlike traditional DUV and EUV photolithography equipment, which transfers a circuit pattern onto a resist-coated wafer through projection, a nanoimprint tool employs a different technique. It uses a mask, embossed with the circuit pattern, that is pressed directly against the resist on the wafer. This method eliminates the need for an optical mechanism in the pattern transfer process, which promises a more accurate reproduction of intricate circuit patterns from the mask to the wafer. In theory, NIL enables the formation of complex two- or three-dimensional circuit patterns in a single step, which promises to lower costs. NIL itself is not a new technology, but it has remained in parallel development over the years, and the challenges involved in further improving photolithography have Canon believing that now is a good time for a second look.

Canon says that its FPA-1200NZ2C enables patterning with a minimum linewidth (critical dimension, CD) of 14 nm, which is good enough to 'stamp' a circa 26 nm minimum metal pitch, and therefore suitable for 5 nm-class process technologies. That would be in line with the capabilities of ASML's Twinscan NXE:3400C (and similar) EUV lithography scanners with 0.33 numerical aperture (NA) optics.

Meanwhile, Canon says that with further refinements of its technology, the tool can achieve finer resolutions, enabling 3 nm and even 2 nm-class production nodes.

Nanoimprint lithography offers several compelling advantages over photolithography. Primarily, NIL excels in resolution, enabling the creation of structures at the nanometer scale with remarkable precision without using photomasks. This technology bypasses the diffraction limits encountered in conventional photolithography, allowing for more intricate and smaller features. Additionally, NIL operates without the necessity of complex optics or high-energy radiation sources, leading to potentially lower operational costs and simpler equipment.
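
The diffraction limit mentioned here is commonly estimated with the Rayleigh criterion, CD = k1 · λ / NA. A quick sketch with illustrative parameter values (the k1 factors are assumptions for illustration, not figures from the article):

```python
# Rayleigh criterion: minimum printable feature size CD = k1 * wavelength / NA.
def critical_dimension_nm(k1: float, wavelength_nm: float, na: float) -> float:
    """Smallest resolvable feature (nm) for a projection scanner."""
    return k1 * wavelength_nm / na

# Illustrative parameters (the k1 values are assumptions, not article figures):
duv = critical_dimension_nm(0.28, 193.0, 1.35)  # ArF immersion DUV
euv = critical_dimension_nm(0.33, 13.5, 0.33)   # 0.33 NA EUV

print(f"DUV immersion: ~{duv:.0f} nm")  # ~40 nm
print(f"EUV (0.33 NA): ~{euv:.1f} nm")  # ~13.5 nm
```

Because NIL transfers the pattern mechanically, no such λ/NA term applies; resolution is instead set by how finely the mask itself can be made.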

Another advantage of NIL is its direct patterning capability, enabling the reproduction of three-dimensional nanostructures effectively. Such functionality makes NIL a potent tool in the production of photonics and other applications where three-dimensional nano-patterns are essential. The technology also facilitates better pattern fidelity and uniformity.

However, NIL also presents certain challenges and limitations. One notable issue is its susceptibility to defects due to the direct contact involved in the imprinting process. Particles or contaminants present on the substrate or the mold can lead to defects, which may affect the overall yield and reliability of the manufacturing process. This necessitates impeccable process control and cleanliness to maintain consistent output quality.

Additionally, NIL in its traditional form is a serial process, which limits its throughput and production capacity. Unlike photolithography, which can expose entire wafers or large areas in parallel, NIL often involves processing smaller areas sequentially. This poses great challenges in scaling the technology for high-volume chip manufacturing. Meanwhile, NIL can be used to create photomasks for EUV and DUV lithography, and it can theoretically be used to create patterned media for hard disk drives.

Tue, 17 Oct 2023 12:00:00 EDT
Intel Core i9-14900K, Core i7-14700K and Core i5-14600K Review: Raptor Lake Refreshed Gavin Bonshor

In what is to be the last of Intel's processor families to use the well-established Core i9, i7, i5, and i3 naming scheme, Intel has released its 14th Generation Core series of desktop processors, aptly codenamed Raptor Lake Refresh (RPL-R). Building upon the already laid foundations of the 13th Gen Core series family, Intel has announced a variety of overclockable K and KF (no iGPU) SKUs, including the flagship Core i9-14900K, the reconfigured Core i7-14700K, and the cheaper, yet capable Core i5-14600K.

The new flagship model for Intel's client desktop products is the Core i9-14900K, which looks to build upon the Core i9-13900K, although it's more comparable to the special edition Core i9-13900KS. Not only are the Core i9-14900K and Core i9-13900KS similar in specifications, but the Core i9-14900K/KF are the second and third chips consecutively to hit 6.0 GHz core frequencies out of the box.

Perhaps the most interesting chip within Intel's new 14th Gen Core series family is the Core i7-14700K, which is the only chip to receive an uplift in cores over the previous SKU, the Core i7-13700K. Intel has added four more E-cores, giving the Core i7-14700K a total of 8 P-cores and 12 E-cores (28 threads), with up to a 5.6 GHz P-core turbo and a 125 W base TDP; the same TDP across all of Intel's K and KF series 14th Gen Core chips.

Also being released and reviewed today is the Intel Core i5-14600K, which has a more modest 6P+8E/20T configuration and the same 5.3 GHz P-core boost and 3.5 GHz base frequency as the Core i5-13600K. Intel has only raised the E-core boost frequency by 100 MHz to differentiate the Core i5-14600K, which means it should perform similarly to its predecessor.

Despite these chips being a refresh of Intel's 13th Gen Raptor Lake platform, the biggest questions, aside from raw performance, are what the differences are, whether there are any nuances to speak of, and how the chips compare to 13th Gen. We aim to answer these questions and more in our review of the Intel 14th Gen Core i9-14900K, Core i7-14700K, and Core i5-14600K processors.

Tue, 17 Oct 2023 09:00:00 EDT
Intel Announces 14th Gen Core Series For Desktop: Core i9-14900K, Core i7-14700K and Core i5-14600K Gavin Bonshor

Ahead of tomorrow's full-scale launch, Intel this afternoon is pre-announcing their 14th Generation Core desktop processors. Aptly codenamed Raptor Lake Refresh, these new chips are based on Intel's existing Raptor Lake silicon – which was used in their 13th generation chips – with Intel tapping further refinements in manufacturing and binning in order to squeeze out a little more performance from the silicon. For their second iteration of Raptor Lake, Intel is also preserving their pricing for the Core i9, i7, and i5 processors, which aligns with the pricing during the launch of Intel's 13th Gen Core series last year.

Headlining the new lineup is Intel's latest flagship desktop processor, the Core i9-14900K, which can boost up to 6 GHz out of the box. This is the second Intel Raptor Lake chip to hit that clockspeed – behind their special edition Core i9-13900KS – but while that was a limited edition chip, the Core i9-14900K is Intel's first mass-produced processor that's rated to hit 6 GHz. Under the hood, the i9-14900K uses the same CPU core configuration as the previous Core i9-13900K chips, with 8 Raptor Cove performance (P) cores and 16 Gracemont-based efficiency (E) cores, for a total of 24 CPU cores capable of executing on 32 threads.

Intel 14th Gen Core, Raptor Lake-R (K/KF Series)
Pricing as of 10/16; all frequencies in MHz

AnandTech     Cores      P-Core  P-Core  E-Core  E-Core  L3    iGPU  Base     Turbo    Price
              (P+E/T)    Base    Turbo   Base    Turbo   (MB)        TDP (W)  TDP (W)
i9-14900K     8+16/32    3200    6000    2400    4400    36    770   125      253      $589
i9-14900KF    8+16/32    3200    6000    2400    4400    36    -     125      253      $564
i9-13900K     8+16/32    3000    5800    2200    4300    36    770   125      253      $537
i7-14700K     8+12/28    3400    5600    2500    4300    30    770   125      253      $409
i7-14700KF    8+12/28    3400    5600    2500    4300    30    -     125      253      $384
i7-13700K     8+8/24     3400    5400    2500    4200    30    770   125      253      $365
i5-14600K     6+8/20     3500    5300    2600    4000    24    770   125      181      $319
i5-14600KF    6+8/20     3500    5300    2600    4000    24    -     125      181      $294
i5-13600K     6+8/20     3500    5300    2600    3900    24    770   125      181      $285
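
The thread counts above follow from Intel's hybrid design: P-cores are hyper-threaded while E-cores run one thread each, so threads = 2 × P + E. A quick sketch using the table's figures (the per-thread pricing is just an illustrative derived metric, not an Intel specification):

```python
# Hybrid core math: P-cores are hyper-threaded (two threads each), E-cores
# run one thread each, so threads = 2 * P + E. Figures from the table above.
chips = {
    "i9-14900K": {"p": 8, "e": 16, "threads": 32, "price": 589},
    "i7-14700K": {"p": 8, "e": 12, "threads": 28, "price": 409},
    "i5-14600K": {"p": 6, "e": 8,  "threads": 20, "price": 319},
}

for name, c in chips.items():
    assert 2 * c["p"] + c["e"] == c["threads"], name  # sanity-check the spec
    print(f'{name}: {c["p"]}P+{c["e"]}E -> {c["threads"]} threads, '
          f'${c["price"]} (${c["price"] / c["threads"]:.2f}/thread)')
```

By this derived metric, the Core i7-14700K works out cheapest per thread of the three, thanks to its four extra E-cores.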

Moving down the stack, arguably the most interesting of the chips being announced today is the new i7-tier chip, the Core i7-14700K. Intel's decision to bolster the core count of its Core i7 is noteworthy: the i7-14700K now boasts 12 E-cores and 8 P-cores, 4 more E-cores than its 13th Gen counterpart – and only 4 behind the flagship i9. With base clock rates mirroring the previous generation's Core i7-13700K, the additional efficiency cores aim to add extra range in multitasking capabilities, designed to benefit creators and gamers.

Rounding out the 14th Gen Core collection is the i5 series. Not much has changed between the latest Core i5-14600K and the Core i5-13600K, with the only difference being a 100 MHz uptick in E-core turbo frequency. Both chips share the same 6P+8E (20T) configuration, 5.3 GHz P-core turbo, and 3.5 GHz P-core base frequency. Price-wise (at the time of writing), the Core i5-13600K is currently available at Amazon for $285, a $34 saving over the MSRP of the Core i5-14600K; that money could potentially be spent elsewhere, such as on storage or memory.

Since the Intel 14th and 13th Gen Core series are essentially the same chips with slightly faster frequencies, Intel has made no changes to the underlying core architecture. Intel does, however, include a new overclocking feature for users looking to overclock their 14th Gen Core i9 processors. Dubbed 'AI Assist,' it is delivered through Intel's Extreme Tuning Utility (XTU) overclocking software. Rather than relying on the traditional look-up tables based on set parameters, AI Assist harnesses an AI model trained on systems with various components, such as memory, motherboards, and cooling configurations; Intel claims the model is continually retrained to offer users the most comprehensive automatic overclocking settings thus far.

Of course, it should be noted that overclocking does, in fact, void Intel's warranty, so users should use this feature at their own risk.

Intel boasts up to 23% better gaming performance in its in-house testing compared to the 12th Gen Core series (Alder Lake), the first platform to bring the hybrid core architecture to Intel's desktop lineup. It must be noted that Intel hasn't compared performance directly to 13th Gen (Raptor Lake), likely due to the close similarities the two families share: same cores, same architecture, just slightly faster frequencies out of the box.

The Intel 14th Gen chips are designed for preexisting 600 and 700-series motherboards, which use the LGA 1700 socket. Motherboard vendors have already begun refreshing their Z790 offerings with more modern features, such as Wi-Fi 7 and Bluetooth 5.4, provided they decide to integrate them into their refreshed Z790 models. Official memory compatibility remains the same as 13th Gen, supporting DDR5-5600 and DDR4-3200 memory, though overclockers may find the highest-binned chips more capable than before, with Intel teasing speeds beyond DDR5-8000 for its best chips.

The Intel 14th Gen Core family of desktop processors (K and KF) is launching on October 17th at retailers and system integrators. Pricing-wise, the flagship Core i9-14900K costs $589, the Core i7-14700K will be available for $409, and the more affordable Core i5-14600K for $319.

Mon, 16 Oct 2023 13:30:00 EDT
TSMC: We Want OSATs to Expand Their Advanced Packaging Capability Anton Shilov

Almost since the inception of the foundry business model in the late 1980s, the division of labor was clear: TSMC would produce silicon, and an outsourced semiconductor assembly and test (OSAT) service provider would then package it into a ceramic or organic encasing. Things have changed in recent years with the emergence of advanced packaging methods, which require sophisticated tools and cleanrooms akin to those used for silicon production. Because TSMC was at the forefront of these innovative packaging methods, which the company aggregates under its 3DFabric brand, and because it built the appropriate capacity, it quickly emerged as a significant advanced packaging provider in its own right.

Many companies, such as Nvidia, want to send in blueprints and get back a product that is ready to ship, which is why they choose TSMC's services to package their advanced system-in-packages, such as the H100, using technologies like integrated fan-out (InFO, chip first) and chip-on-wafer-on-substrate (CoWoS, chip last) developed by the foundry. As a result, TSMC had to admit earlier this year that it could not keep up with CoWoS demand and would expand the appropriate production capacity.

Although TSMC makes plenty of money on advanced chip packaging these days, the company has no plans to steal business away from its traditional OSAT partners. Instead, it wants these companies to expand their sophisticated packaging capacity and adopt tools similar to those used by TSMC and its partners, so that they can offer packaging compatible with TSMC-made chiplets.

But it is not that simple. All leading assembly and test specialists, like ASE Group, Amkor Technology, and JCET, have advanced chip packaging technologies, many resembling those of TSMC. These OSATs already own advanced packaging fabs and can serve fabless chip designers. For example, just this week, Amkor opened its $1.6 billion advanced packaging facility in Vietnam, which is set to have cleanroom space comparable to what GlobalFoundries owns across multiple fabs.

But while packaging technologies offered by OSATs may be similar to those of TSMC in terms of pitch dimensions and bump I/O pitch dimensions, they are not the same in terms of flow, and may even have slightly different electrical specifications. Meanwhile, OSATs use the same tools as TSMC, so they can package chips that use a CoWoS interposer. So far, TSMC has certified two OSATs to perform final CoWoS assembly. However, there is still a shortage of CoWoS capacity on the market because TSMC's own capacity is the bottleneck, at least based on TSMC's comments from earlier this year.

"So, we have ASE and SPIL, we have qualified their substrates," said Dan Kochpatcharin, Head of Design Infrastructure Management at TSMC, at the OIP 2023 conference in Amsterdam. "The next step is also doing the same thing to bring them into using the automated routing of the substrate as well. So, we can have the whole [CoWoS service] stack."

TSMC's advanced packaging technologies like CoWoS and InFO are supported by electronic design automation (EDA) tools from companies like Ansys, Cadence, Siemens EDA, and Synopsys. So, TSMC needs OSATs to use the same programs and align their technical capabilities with what these tools design and TSMC produces. 

"We want them to use the same EDA tools," said Kochpatcharin. "So, let's say TSMC interposer on OSAT's substrate. So, they use 3Dblox and [appropriate] EDA tools to do analysis, then it is easier for the customer. Right? Like we qualified the two partners to [produce] substrate. So, we do CoWoS, and OSATs do substrate. So, it would be good to use the same flow, because it is easier for the customer. If you have customers who use [different EDA tools], then the multi-physics analysis [of the package] will be more difficult. It can be done, just more difficult."

To meet the demand for CoWoS and other advanced packaging methods, OSATs need to invest in appropriate capacities and tools, which are expensive. The problem is that assembly and test specialists cannot keep up with Intel, TSMC, and Samsung regarding investments in advanced packaging facilities. Last year, Intel spent $4 billion on advanced packaging plants, and TSMC's capital expenditures on advanced packaging totaled $3.6 billion. In contrast, Samsung spent around $2 billion, according to Yole Group's estimates published by EE Times. By comparison, the capital expenditures of ASE Group (with SPIL and USI) totaled $1.7 billion in 2022, whereas the spending of Amkor reached $908 million.

There are several reasons why advanced packaging technologies like TSMC's CoWoS and InFO, as well as Intel's EMIB and Foveros, are gaining importance. First up, disaggregated chip designs are getting more popular because chip manufacturing is getting more expensive, smaller chips are easier to yield, and many chips are reaching the reticle limit. At the same time, their designers want them to be bigger and more powerful. Secondly, disaggregated designs using chiplets made on different nodes are cheaper than one monolithic chip on a leading-edge node.
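
The claim that smaller chips are easier to yield follows from standard defect-density yield models. Here is a sketch using the simple Poisson model Y = e^(−A·D0), with an illustrative defect density (the specific numbers are assumptions, not data from the article):

```python
import math

def poisson_yield(area_cm2: float, d0_per_cm2: float) -> float:
    """Poisson die-yield model: Y = exp(-A * D0)."""
    return math.exp(-area_cm2 * d0_per_cm2)

D0 = 0.1  # defects per cm^2, illustrative

mono = poisson_yield(6.0, D0)   # one 600 mm^2 monolithic die
small = poisson_yield(1.5, D0)  # one 150 mm^2 chiplet

print(f"600 mm^2 monolithic die yield: {mono:.0%}")  # ~55%
print(f"150 mm^2 chiplet die yield:   {small:.0%}")  # ~86%
```

Under these assumptions, four known-good 150 mm² chiplets deliver the same silicon area at roughly 86% yield per die versus about 55% for the monolithic part, which is the economic core of the disaggregation argument.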

OSATs are poised to expand their advanced production capacities as their clients demand appropriate services. Meanwhile, they are less inclined to offer such services than foundries, simply because if something fails during the packaging steps, they have to throw away all of the expensive silicon they are packaging, yet they do not earn as much as chipmakers do. Their margins are also significantly lower. Finally, it may be unclear in many cases why a multi-chiplet package does not work, and whether the problem is with the package itself or with one of the chips. Today, all TSMC can do is optically check the wafers before dicing them, but this is not a particularly efficient way of testing.

To gain the capability to test chiplets individually, TSMC is working with makers of chip test equipment and expects to validate these tools next year.

"On the 3DFabric on the testing, we work with Advantest, Teradyne, and Synopsys to leverage the high-speed die-to-die testing," said Kochpatcharin. "When you have all these things stacked together, it is getting very difficult to test them. So, we have worked with Teradyne and Advantest to work […] [die-to-die] testing, and we will have silicon validation in 2024."

Mon, 16 Oct 2023 10:30:00 EDT
GEEKOM Mini IT13 Review: Core i9-13900H in a 4x4 Package Ganesh T S

The performance of ultra-compact form-factor (UCFF) desktops has improved significantly over the years, thanks to advancements in semiconductor fabrication and processor architecture. Thermal solutions suitable for these 4in. x 4in. machines have also been evolving simultaneously. As a result, vendors have been able to configure higher sustained power limits for the processors in these systems. With Intel and AMD allowing configurable TDPs for their notebook segment offerings, UCFF systems with regular 45W TDP processors (albeit, in cTDP-down mode) are now being introduced into the market.

GEEKOM became one of the first vendors to release a Core i9-based UCFF machine with the launch of the Mini IT13. Based on paper specifications, this high-end Raptor Lake-H (RPL-H) UCFF desktop is meant to give the mainstream RPL-P NUCs stiff competition in both performance and price. Read on for a detailed look into the performance profile and value proposition of the Mini IT13's flagship configuration, along with analysis of the tradeoffs involved in cramming a 45W TDP processor into a 4x4 machine.

Mon, 16 Oct 2023 08:00:00 EDT
TSMC: Ecosystem for 2nm Chip Development Is Nearing Completion Anton Shilov

Speaking to partners last week as part of their annual Open Innovation Platform forum in Europe, a big portion of TSMC's roadshow was dedicated to the next generation of the company's foundry technology. TSMC's 2 nm-class N2, N2P, and N2X process technologies are set to introduce multiple innovations, including nanosheet gate-all-around (GAA) transistors, backside power delivery, and super-high-performance metal-insulator-metal (SHPMIM) capacitors over the next few years. But in order to take advantage of these innovations, TSMC warns, chip designers will need to use all-new electronic design automation (EDA), simulation, and verification tools as well as IP. And while making such a big shift is never an easy task, TSMC is bringing some good news to chip designers early on: even with N2 still a couple of years out, many of the major EDA tools, verification tools, foundation IP, and even analog IP for N2 are already available for use.

"For N2 we could be working with them two years in advance already because nanosheet is different," said Dan Kochpatcharin, Head of Design Infrastructure Management at TSMC, at the OIP 2023 conference in Amsterdam. "[EDA] tools have to be ready, so what the OIP did is to work with them early. We have a huge engineering team to work with the EDA partners, IP partners, [and other] partners."

Advertised PPA Improvements of New Process Technologies
Data announced during conference calls, events, press briefings and press releases

Power          -30%      -25-30%   -34%         -25-30%
Performance    +15%      +10-15%   +18%         +10-15%
Chip Density*  ?         ?         ~1.3X        >1.15X
HVM Start      Q2 2022   H2 2022   Q2/Q3 2023   H2 2025

*Chip density published by TSMC reflects 'mixed' chip density consisting of 50% logic, 30% SRAM, and 20% analog. 
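
Figures like these are quoted relative to a baseline node, so gains across successive generations compound multiplicatively rather than adding. A sketch of the arithmetic, chaining a hypothetical -34% step with a following -25% to -30% step (an illustrative composition, not a TSMC claim):

```python
def compound_power(reductions):
    """Compose successive fractional power reductions multiplicatively."""
    remaining = 1.0
    for r in reductions:
        remaining *= (1.0 - r)
    return 1.0 - remaining  # total reduction vs. the starting node

# Illustrative: a -34% generation followed by a -25%..-30% generation
low  = compound_power([0.34, 0.25])  # ~50.5% total reduction
high = compound_power([0.34, 0.30])  # ~53.8% total reduction
print(f"Compound power reduction: {low:.1%} to {high:.1%}")
```

Note that the compound figure (~50-54%) is well short of the 59-64% a naive addition of the two percentages would suggest.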

Preparations for the start of N2 chip production, scheduled for sometime in the second half of 2025, began long ago. Nanosheet GAA transistors behave differently than familiar FinFETs, so EDA and other tool and IP makers had to build their products from scratch. This is where TSMC's Open Innovation Platform (OIP) demonstrated its prowess and enabled TSMC's partners to start working on their products well in advance.

By now, major EDA tools from Cadence and Synopsys as well as many tools from Ansys and Siemens EDA have been certified by TSMC, so chip developers can already use them to design chips. Also, EDA software programs from Cadence and Synopsys are ready for analog design migration. Furthermore, Cadence's EDA tools already support N2P's backside power delivery network.

With pre-built IP designs, things are taking a bit longer. TSMC's foundation libraries and IP, including standard cells, GPIO/ESD, PLL, SRAM, and ROM, are ready both for mobile and high-performance computing applications. Meanwhile, some PLLs exist in pre-silicon development kits, whereas others are silicon proven. Finally, blocks such as non-volatile memory, interface IP, and even chiplet IP are not yet available - bottlenecking some chip designs - but these blocks are in active development or planned for development by companies like Alphawave, Cadence, Credo, eMemory, GUC, and Synopsys, according to a TSMC slide. Ultimately, the ecosystem of tools and libraries for designing 2 nm chips is coming together, but it's not all there quite yet.

"[Developing IP featuring nanosheet transistors] is not harder, but it does take more cycle time, cycle time is a bit longer," said Kochpatcharin. "Some of these IP vendors also need to be trained [because] it is just different. To go from planar [transistor] to FinFET, is not harder, you just need to know how to do the FinFET. [It is] same thing, you just need to know how to do [this]. So, it does take some [time] to be trained, but [when you are trained], it is easy. So that is why we started early."

Although many of the major building blocks for chips are N2-ready, a lot of work still has to be done by many companies before TSMC's 2 nm-class process technologies go into mass production. Large companies, which tend to design (or co-design) IP and development tools themselves are already working on their 2 nm chips, and should be ready with their products by the time mass production starts in 2H 2025. Other players can also fire up their design engines because 2 nm preps are well underway at TSMC and its partners.

Thu, 12 Oct 2023 17:00:00 EDT
Samsung Lines Up First Server Customer For 3nm Fabs Anton Shilov

Although Samsung Foundry was the first contract fab to formally start mass production of chips on a 3 nm-class process, so far the company's latest process has largely been relegated to producing tiny cryptocurrency mining chips. But it looks like things will start picking up for Samsung's foundry business soon, as this week it was announced that the company has landed a more substantial order, which will see Samsung make a server-grade system-in-package (SiP) with HBM memory for an unknown client.

Per this week's press releases, Samsung Foundry is set to produce a server-grade processor with HBM memory that is set to be designed by ADTechnology, a contract chip developer from South Korea, for an American company. For now, details on the chip are light, so all we know about the 3 nm-based datacenter product is that it will use 2.5D packaging in conjunction with HBM memory. All of which points to a high-end system-on-chip (SoC) – or rather a system-in-package (SiP).

"This 3nm project will be one of the largest semiconductor products in the industry," said Park Joon-Gyu, chief executive of AD Technology. "This 3nm and 2.5D design experience will be a significant differentiation factor between other companies and AD Technology. We will do our utmost to deliver the best design results to our customers."

Meanwhile, it is unclear which of Samsung Foundry's 3 nm-class process technologies the company is set to use for the project. Currently the company is producing cryptocurrency mining ASICs using its SF3E process technology, which is the initial version of Samsung's gate-all-around (GAA) manufacturing tech.

The company is set to roll out an enhanced SF3 process technology next year. This version of the node provides additional design flexibility, which is enabled by varying nanosheet channel widths of the GAA device within the same cell type. All of this will, in turn, improve the performance, power, and area characteristics of SF3 compared to SF3E, making it more suitable for server designs. Yet the company is also prepping its SF3P technology with performance enhancements for 2025, which is likely to be even better suited for server-grade silicon.

"We are pleased to announce our 3nm design collaboration with AD Technology," said Jung Ki-Bong, Vice President of Samsung Electronics Foundry Business Development team. "This project will set a good precedent in the collaboration program between Samsung Electronics Foundry Division and our ecosystem partners, and Samsung Electronics Foundry Division will strengthen our cooperation with partners to provide the best quality to our customers."

Sources: ADTechnology, Pulse

Thu, 12 Oct 2023 13:00:00 EDT
HBM4 in Development, Organizers Eyeing Even Wider 2048-Bit Interface Anton Shilov

High-bandwidth memory has been around for about a decade, and throughout its continued development it has steadily increased in speed, starting at a data transfer rate of 1 GT/s (the original HBM) and reaching upwards of 9 GT/s with the forthcoming HBM3E. This has made for an impressive jump in bandwidth in less than 10 years, making HBM an important cornerstone for whole new classes of HPC accelerators that have since hit the market. But it's also a pace that's getting harder to sustain as memory transfer rates increase, especially as the underlying physics of DRAM cells have not changed. As a result, for HBM4 the major memory manufacturers behind the spec are planning on making a more substantial change to the high-bandwidth memory technology, starting with an even wider 2048-bit memory interface.

Designed as a wide-but-slow memory technology that utilizes an ultra-wide interface running at a relatively modest clockspeed, HBM's current 1024-bit memory interface has been a defining characteristic of the technology. Meanwhile, its modest clockspeeds have become increasingly less modest in order to keep improving memory bandwidth. This has worked thus far, but as clockspeeds increase, the highly parallel memory risks running into the same signal integrity and energy efficiency issues that challenge GDDR and other highly serial memory technologies.

Consequently, for the next generation of the technology, organizers are looking at going wider once more, expanding the width of the HBM memory interface even further to 2048-bits. And, equally as important for multiple technical reasons, they intend to do this without increasing the footprint of HBM memory stacks, essentially doubling the interconnection density for the next-generation HBM memory. The net result would be a memory technology with an even wider memory bus than HBM today, giving memory and device vendors room to further improve bandwidth without further increasing clock speeds.
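
Per-stack bandwidth is simply bus width times transfer rate, which is why doubling the interface doubles bandwidth without touching clocks. A minimal sketch using the rates mentioned above (9 GT/s being the upper end cited for HBM3E):

```python
def stack_bandwidth_gb_s(bus_width_bits: int, transfer_rate_gt_s: float) -> float:
    """Peak per-stack bandwidth in GB/s: (bits per transfer / 8) * GT/s."""
    return bus_width_bits / 8 * transfer_rate_gt_s

hbm3e = stack_bandwidth_gb_s(1024, 9.0)  # today's interface at top speed
hbm4  = stack_bandwidth_gb_s(2048, 9.0)  # doubled width, same data rate

print(f"1024-bit @ 9 GT/s: {hbm3e:.0f} GB/s per stack")  # 1152 GB/s
print(f"2048-bit @ 9 GT/s: {hbm4:.0f} GB/s per stack")   # 2304 GB/s
```

The doubled-width figure of roughly 2.3 TB/s per stack lands in the same range as the 2 TB/s-plus per-stack numbers Micron has floated for its 'HBMNext' memory.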

As planned, this would make HBM4 a major technical leap forward on multiple levels. On the DRAM stacking side of matters, a 2048-bit memory interface is going to require a significant increase in the number of through-silicon vias routed through a memory stack. Meanwhile the external chip interface will require shrinking the bump pitch to well below 55 um, all the while increasing the total number of micro bumps significantly from the current count of (around) 3982 bumps for HBM3.
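
The bump-pitch requirement follows from simple geometry: roughly doubling the bump count within a fixed footprint shrinks the pitch of a (here assumed square) bump grid by about 1/√2. A rough, illustrative estimate:

```python
import math

def scaled_pitch_um(pitch_um: float, bump_count_scale: float) -> float:
    """For a square bump grid in a fixed footprint, pitch scales as 1/sqrt(count)."""
    return pitch_um / math.sqrt(bump_count_scale)

# Roughly doubling the bump count within the same stack footprint:
print(f"{scaled_pitch_um(55.0, 2.0):.1f} um")  # ~38.9 um
```

Which is consistent with the need to push bump pitch well below the current 55 um figure.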

Adding some additional complexity to the technology, memory makers have indicated that they are also going to stack up to 16 memory dies in one module; so-called 16-Hi stacking. (HBM3 technically supports 16-Hi stacks as well, but so far no manufacturer is actually using it.) This will allow memory vendors to significantly increase the capacity of their HBM stacks, but it brings new complexity in wiring up an even larger number of DRAM dies without defects, and then keeping the resulting HBM stack suitably and consistently short.

All of this, in turn, will require even closer collaboration between chip makers, memory makers, and chip packaging firms in order to make everything come together smoothly.

Speaking at TSMC's OIP 2023 conference in Amsterdam, Dan Kochpatcharin, TSMC's Head of Design Infrastructure Management had this to say: "Because instead of doubling the speed, they doubled the [interface] pins [with HBM4]. That is why we are pushing to make sure that we work with all three partners to qualify their HBM4 [with our advanced packaging methods] and also make sure that either RDL or interposer or whatever in between can support the layout and the speed [of HBM4]. So, [we work with] Samsung, SK Hynix, and Micron."

Since system-in-package (SiP) designs are getting larger, and the number of HBM stacks supported by advanced chip packages is increasing (e.g. 6x reticle-size interposers and chips with 12 HBM stacks on-package), chip packages are getting more complex. To ensure that everything continues to work together, TSMC is pushing chip and memory designers to embrace Design Technology Co-Optimization (DTCO). This is a big part of the reason why the world's largest foundry recently organized the 3DFabric Memory Alliance, a program designed to enable close collaboration between DRAM makers and TSMC in a bid to enable next-generation solutions that pack huge numbers of logic transistors alongside advanced memory.

Among other things, TSMC's 3DFabric Memory Alliance is currently working on ensuring that HBM3E/HBM3 Gen2 memory works with CoWoS packaging, 12-Hi HBM3/HBM3E packages are compatible with advanced packages, UCIe for HBM PHY, and buffer-less HBM (a technology spearheaded by Samsung).

Overall, TSMC's comments last week give us our best look yet at the next generation of high-bandwidth memory. Still, additional technical details about HBM4 remain rather scarce for the moment. Micron said earlier this year that 'HBMNext' memory set to arrive around 2026 will offer capacities between 36 GB and 64 GB per stack and peak bandwidth of 2 TB/s per stack or higher. All of which indicates that memory makers won't be backing off on memory interface clockspeeds for HBM4, even with the move to a wider memory bus.

Thu, 12 Oct 2023 10:00:00 EDT
TSMC: Importance of Open Innovation Platform Is Growing, Collaboration Needed for Next-Gen Chips Anton Shilov

This year TSMC is commemorating 15 years of its Open Innovation Platform, a multi-faceted program that brings together the foundry's suppliers, partners, and customers to help TSMC's customers build innovative chips in an efficient and timely manner. The OIP program has grown over the years and now involves dozens of companies and over 70,000 IP solutions for a variety of applications. It continues to grow, and its importance will be greater than ever as next-generation technologies, such as 2 nm and advanced packaging methods, become mainstream in the coming years.

"This is not a marketing program, it is actually an engineering program to enable the industry," said Dan Kochpatcharin, Head of Design Infrastructure Management at TSMC, at the OIP 2023 conference in Amsterdam, the Netherlands. "We have a huge engineering team behind to work with the EDA partners, IP partners, and design partners."

Shrinking Time-to-Market

Speeding up time-to-market is one of the cornerstones of TSMC's OIP program. Before the emergence of the OIP program in 2008, TSMC would develop a process technology and its process design kits (PDKs) in about 18 months, then hand over the PDKs and design rules to its partners among electronic design automation (EDA) software and IP developers. The latter would spend another 12 months creating EDA tools and building IP blocks before supplying programs and IP solutions to actual chip designers. It would then take chip developers another 12 months to build actual chips.

With OIP, TSMC's EDA tool and IP design partners can start development of their products a few months after TSMC begins development of its new production node. And, by the time TSMC finalizes its process technology, EDA tools and IP are ready for chip designers, the foundry claims. This speeds up time-to-market by about 15 months, TSMC says. Meanwhile, as development times for new nodes stretch out, and chip development times stretch with them, the value of early collaboration between TSMC and the EDA and IP providers is only increasing.

For example, TSMC has been working with its partners on N2 (2nm-class) EDA and IP readiness for two years now, with TSMC aiming to have tools and common IP ready for chip designers in H2 2025. 

Quality Matters

An avid reader might wonder why, even with the success of the program, OIP has only grown to 39 IP members in 15 years. As it turns out, TSMC is extremely picky about the companies that join the program, according to Dan Kochpatcharin. TSMC needs members of the OIP program to genuinely contribute to it and make the joint effort something bigger than the sum of its parts. Because TSMC clients use the IP, software, and services offered by participants of the OIP program, the latter have to be truly good in their fields to be a part of OIP.

In fact, TSMC even has its TSMC9000 program (the name mimics the ISO 9000 quality standard) that sets quality requirements for IP designs. IP collaborators undergo TSMC9000 evaluations, with results available on TSMC-Online, guiding customers on IP reliability and risks.

"We do a lot of qualifications for IP, before the test shuttles they do tape outs, and then they have TSMC 9000 checklists, […] customer can see [all] the results on TSMC-Online," explained Kochpatcharin. "So, they can see okay, this IP got silicon introductions, so, they have more confidence in the IP. [They also see] how many customers adopted [this IP], how many tape outs, and how many productions. For the lack of a better term, Consumer Report for IP."

Alliance members list their IP in TSMC's premier catalog, which features thousands of IP options from 39 contributors. Customers can search for IP using the 'IP Center' on the TSMC-Online Design Portal. Each IP in the catalog is developed, sold, and supported by its originating partner. Meanwhile, chip developers can even check out how popular a given IP is, which can give them some additional confidence in their choice. Confidence is important today and will be even more important for 3 nm, 2 nm, and future nodes as tape-outs get more expensive.

Six Alliances

But speeding up time-to-market and ensuring quality are not the only purposes of the OIP program. It is meant to simplify development, production, testing, and packaging of chips. TSMC's OIP involves a variety of members and is organized into six programs or alliances, each responsible for a separate line of work: 

  • IP Alliance, which is focused on providing silicon-verified, production-proven, and foundry-specific intellectual property (IP) that TSMC customers can choose from.
  • EDA Alliance, which includes companies offering electronic design automation (EDA) software that is compliant with TSMC technology requirements and supports the foundry's production nodes.
  • Design Center Alliance, which comprises contract chip designers as well as companies offering system-level design solution enablement.
  • Cloud Alliance, which combines EDA toolmakers and cloud service providers, enabling TSMC's customers to develop and simulate their chips in the cloud to reduce in-house compute needs.
  • 3DFabric Alliance, which unites the companies responsible for advanced packaging as well as the development of multi-chiplet processors; this essentially includes all of the abovementioned companies, as well as makers of memory (including Micron, Samsung, and SK Hynix), substrates, OSATs, and makers of test equipment.
  • Value Chain Alliance, which resembles the Design Center Alliance but is meant to offer a broader range of contract chip design services and IP offerings to cater to the needs of a broad range of customers, spanning from startups and OEMs to ASIC designers.

The 3DFabric Alliance program was introduced late last year, making it the newest addition to OIP. Meanwhile, the 3DFabric Alliance looks to be expanding fast with new members, and for good reason.

Multi-Chiplet Designs Become New Standard

Process technologies are getting more complex, and this is not going to change. Chip design workflows might get a little easier going forward as EDA makers like Ansys, Cadence, Siemens EDA, and Synopsys incorporate artificial intelligence capabilities into their tools. But because High-NA EUV lithography scanners halve the reticle size from 858 mm² to 429 mm², it looks like the majority of AI and high-performance computing (HPC) processors are going to adopt multi-tile designs in the coming years, which will drive the need for software that assists in the creation of multi-tile solutions, advanced packaging, HBM-type memory, and all-new methods of testing. This will again increase the importance of industry-wide collaboration, and with it the importance of TSMC's OIP.
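The reticle math above can be sketched quickly. Note that the ~600 mm² die size below is a hypothetical HPC chip chosen for illustration, not a figure from the article:

```python
import math

def min_tiles(die_area_mm2: float, reticle_mm2: float) -> int:
    """Lower bound on tile count if no single die may exceed the reticle field."""
    return math.ceil(die_area_mm2 / reticle_mm2)

FULL_FIELD, HALF_FIELD = 858, 429   # mm^2: today's EUV field vs High-NA EUV

print(min_tiles(600, FULL_FIELD))   # 1 -> fits as a monolithic die today
print(min_tiles(600, HALF_FIELD))   # 2 -> must be split into tiles under High-NA
```

In other words, any design larger than 429 mm² that is built on a High-NA node has no monolithic option at all, which is why multi-tile packaging becomes the default rather than the exception.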

"[We have offered InFO_PoP] and InFO_oS 3D IC since 2016, [3D ICs have been] in production for years already, [but] back then it was still a niche [market]," said Kochpatcharin. "The customer had to know what they were doing […] and only a few people could do a 3D IC [back then]. [In] 2021 we launched the 3DFabric activity, we wanted to make it generic for everybody because with AI and HPC coming [from multiple companies], [these] cannot be niche things anymore. So, everybody has to be able to use 3D IC. [For example], automotive is a wonderful [application for] 3D IC, there is a [huge] market out there."

Meanwhile, to enable next-generation connectivity between chips and between chiplets, TSMC envisions silicon photonics will be needed, so the company is actively working in this direction within its OIP program. 

"If you go to N2 and the next one coming up is silicon photonics," said Kochpatcharin. "This is where we launched a process needed to have [design service partners] to be able to support the customer."

Thu, 12 Oct 2023 08:00:00 EDT
Intel Launches Arc A580: A $179 Graphics Card for 1080p Gaming Anton Shilov

When Intel unveiled its range of Arc A-series desktop graphics cards last year, it introduced four models: the Arc A770, Arc A750, Arc A580, and Arc A380. However, the Arc A580, which uses a cut-down ACM-G10 GPU, never reached the market for reasons that remain unclear. On Tuesday Intel finally fleshed out the Arc desktop lineup with a 500 series card, formally and immediately launching the Arc A580 graphics card.

Intel's Arc A580 is based on the Alchemist ACM-G10 graphics processor with 3072 stream processors, paired with 8 GB of memory on a 256-bit interface. While the cut-down GPU has fewer SPs than its higher-performing counterparts, it retains all of the features that the Alchemist architecture has to offer, including world-class media playback capabilities such as hardware-accelerated decoding and encoding in the AV1, H.264, and H.265 formats.

The card sits under the Arc A770 and Arc A750 in terms of performance, but above the Arc A380, thus targeting gamers on a budget. Intel itself positions its Arc A580 for 1080p gaming against AMD's Radeon RX 6600 and NVIDIA's GeForce RTX 3050, graphics cards that have been available on the market for about two-and-a-half years.

When compared to its rivals, the Arc A580 has higher compute performance (10.445 FP32 TFLOPS vs. Radeon RX 6600's 9 FP32 TFLOPS and GeForce RTX 3050's 8 FP32 TFLOPS) as well as dramatically higher memory bandwidth (512 GB/s vs. 224 GB/s). Though as FLOPS are not everything, we'll have to see how benchmarks play out. The biggest advantage for Intel right now is going to be memory bandwidth, as Intel is shipping a card with a far wider memory bus than anything else in this class – something that AMD and NVIDIA shied away from after multiple cryptocurrency rushes and crashes.
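Those spec-sheet numbers are easy to sanity-check. In the sketch below, the 16 GT/s GDDR6 data rate is my assumption (the article quotes only the 256-bit bus and 512 GB/s total bandwidth); everything else follows from the stated figures:

```python
def tflops_fp32(shaders: int, clock_ghz: float) -> float:
    """Peak FP32 throughput: 2 ops (one FMA) per shader per clock."""
    return 2 * shaders * clock_ghz / 1000

def bandwidth_gbs(bus_bits: int, data_rate_gtps: float) -> float:
    """Peak memory bandwidth in GB/s from bus width and per-pin data rate."""
    return bus_bits / 8 * data_rate_gtps

# Working back from 10.445 TFLOPS and 3072 shaders implies a ~1.7 GHz clock
implied_clock = 10.445 * 1000 / (2 * 3072)
print(round(implied_clock, 2))       # 1.7

# 512 GB/s over a 256-bit bus implies 16 GT/s memory (assumed GDDR6)
print(bandwidth_gbs(256, 16.0))      # 512.0
```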

But Intel's Arc A580 is more power hungry than its rivals: as this part is based on Intel's top-tier ACM-G10 GPU, it has the power consumption to match, with a total graphics power rating of 185W. Conversely, AMD's Radeon RX 6600 and NVIDIA's GeForce RTX 3050 are rated for 132W and 130W, respectively.

Graphics cards based on the Intel Arc A580 GPU are set to be offered by ASRock, Gunnir, and Sparkle, starting at $179. At that price, the boards are cheaper than AMD's Radeon RX 6600 ($199) and NVIDIA's GeForce RTX 3050 ($199), which makes the A580 quite a competitive offering. Meanwhile, Intel's higher-performing Arc A750 can now be obtained for $189 - $199, which somewhat reduces the appeal of the new board – though it remains to be seen whether those A750 prices will last.

Tue, 10 Oct 2023 16:45:00 EDT
ASRock Industrial NUC BOX-N97 and GMKtec NucBox G2 Review: Contrasting Compact ADL-N Options Ganesh T S Intel has been maintaining a low-power / low-cost x86 microarchitecture since the introduction of the Silverthorne Atom processors in 2008. Its latest iteration, Gracemont, made its debut in the Alder Lake lineup. The hybrid processors in that family teamed up the Gracemont efficiency cores with the Golden Cove performance cores. Eventually, Intel released a new line of processors under the 'Alder Lake-N' (ADL-N) tag comprising only the Gracemont cores. As a replacement for the Tremont-based Jasper Lake SoCs, ADL-N has found its way into a variety of entry-level computing systems including notebooks and compact desktops.

ASRock Industrial's lineup of ultra-compact form-factor (UCFF) systems - the Intel-based NUC BOX series and AMD-based 4X4 BOX series - has enjoyed significant market success. At the same time, the expanding market for compact computing systems has also brought many Asian manufacturers such as ACEMAGIC, Beelink, GMKtec, and MinisForum into play. As ADL-N ramps up, we are seeing a flood of systems based on it from these vendors. We took advantage of this opportunity to source two contrasting ADL-N mini-PCs - the ASRock Industrial NUC BOX-N97 and the GMKtec NucBox G2. Though both systems utilize a quad-core ADL-N SoC, the feature set and target markets are very different. Read on for a detailed analysis of the system features, build, performance profile, and value proposition of the NUC BOX-N97 and the NucBox G2.

Fri, 06 Oct 2023 09:45:00 EDT
Intel to Spin-off Programmable Solutions Group as Standalone Business, Eyeing IPO in 2-3 Years Ryan Smith

Intel this afternoon has announced that the company will be spinning off its programmable solutions group (PSG), to operate as a standalone business. The business unit, responsible for developing Intel’s Agilex, Stratix, and other FPGA products, will become a standalone entity under Intel’s corporate umbrella starting in Q1 of 2024, with the long-term goal of eventually selling off part of the group in an IPO in two to three years’ time.

The reorganization announced today will see Intel’s PSG transition to operating as a standalone business unit at the start of 2024, with Intel EVP Sandra Rivera heading up PSG as its new CEO. Rivera is currently the general manager of Intel’s Data Center and AI Group (DCAI), which is where PSG is currently housed, so she has significant familiarity with the group. In the interim, Rivera will also continue serving in her role in DCAI until Intel can find a replacement, with the company looking for candidates both externally and internally.

The separating of PSG is the latest move from Intel to reorganize the company’s multi-faceted business in an effort to focus on its core competencies of silicon photolithography and chip design. Since bringing on current CEO Pat Gelsinger two years ago, Intel has sold or spun off several business units, including its SSD business, NUC mini-PC business, Mobileye ADAS unit, and others, all the while making significant new investments in Intel’s Foundry Services (IFS) fab division. Though, unlike some of Intel's other divestments, it's notable that the company isn't separating from PSG because the business unit is underperforming or is in a commoditized, low-margin market – rather, Intel thinks PSG could perform even better without the immense business and bureaucratic weight of Intel hanging over it.

For the standalone PSG business unit, Intel is eyeing a very similar track to how they’ve handled Mobileye, which will see Intel maintaining majority ownership while still freeing up the business unit to operate more independently. This strategy has played out very well for Mobileye, with the company enjoying continued commercial growth while successfully IPOing last year, and which Intel is hoping they can achieve again with a standalone PSG.

This business unit separation comes as Intel, by its own admission, has mismanaged PSG. While PSG has enjoyed a string of record quarters financially, Intel believes that PSG has been underserving the true high-growth, high-profitability markets for FPGAs, such as industrial, automotive, defense, and aerospace. Since being acquired by Intel in 2015 – and especially in the last few years as a formal part of DCAI – Intel's PSG has been focused on datacenter solutions, to the detriment of other business segments.

Reforming PSG as a standalone business unit, in turn, is intended to improve the agility of the business unit. While PSG will remain under the ownership of Intel both now and in the future, Intel’s control over the group will be largely reduced to that of an investor. This will leave Sandra Rivera and her leadership team free to adjust the company’s product portfolio and positioning as to best serve the wider FPGA market, and not just Intel’s datacenter-centric ambitions. Meanwhile, if all goes well, over the long-haul Intel gets to pocket the profits of a successful IPO while having one less business unit to manage, allowing Intel to funnel its money and time into its own higher priority ventures such as fabs.

Keeping in mind that the PSG was an acquisition for Intel in the first place, in some respects this is an unwinding of that acquisition. In 2015 Intel paid $16.7 billion for what was then Altera, which under Intel became the PSG as we know it today. And while Intel’s eventual IPO plans for PSG have them retaining a stake in the business unit – and a majority stake, at that – this very much re-separates PSG/Altera in terms of operations.

Still, PSG/Altera has a very long history with Intel, going all the way back to 1984, and even as a standalone business unit, PSG will still be tied closely to Intel. Altera will be free to use whatever contract fab it would like, but as the company has been under Intel’s umbrella all this time, it is no surprise that many of the company’s upcoming products are slated to be built at Intel’s fabs, where PSG is expecting to leverage Intel’s advanced packaging techniques. And over the longer term, as Intel lays the groundwork to become the top contract fab in the world, it’s Intel’s hope that they’ll be able to keep PSG’s business.

At the same time, however, PSG will need to win back the business it has lost in the last several years due to its datacenter focus under Intel. The FPGA space is highly competitive, with arch-rival AMD having announced its acquisition of Xilinx in 2020, and now starting to reap some of the first benefits of that acquisition and integration. Meanwhile, in the low-power FPGA space, fellow Oregon firm Lattice Semiconductor is not to be underestimated. Intel believes the FPGA market is primed for significant growth – on the order of a "high single digit" compound annual growth rate – so it's not just a matter of winning back existing dollars from PSG's rivals; they'll have to win back mindshare as well, a task that may take a significant amount of time, as the FPGA market moves much slower and offers much longer-lived products than the CPU market.

But first, PSG must get ready to stand on its own two feet. PSG will transition to operating as a standalone business unit at the start of 2024, and it will be reported as such on Intel’s financial statements. Meanwhile, Intel is looking to bring on an initial external investor in 2024, to act as an outside resource to help prepare the group for an eventual IPO. According to Intel, PSG will need two to three years to develop the financial history and leadership stability for a successful IPO, which is why Intel is focusing on making the business unit standalone now, while eyeing an IPO a few years down the line.

Finally, for now it remains to be seen what the standalone PSG will be calling itself. As “programmable solutions group” is arguably unsuitable as a business name, expect to see PSG renamed. Whether that means resurrecting the Altera name or coming up with a new name entirely, as part of standing up on its own two feet, Intel’s FPGA business will need an identity of its own to become a business of its own.

Tue, 03 Oct 2023 18:45:00 EDT
Seagate Releases Game Drive PCIe 4.0 SSDs for PlayStation 5 Zhiye Liu

Western Digital's WD_Black SN850P was the first officially PlayStation 5-licensed SSD to hit the market. Seagate wants a piece of that and has hopped on the PlayStation 5 train with the new Game Drive PCIe 4.0 NVMe SSD series, officially licensed for Sony's current-generation gaming console.

Unlike Microsoft, which uses a proprietary SSD expansion card for the Xbox Series X and Xbox Series S, Sony opted to employ a standard M.2 slot for storage expansion on the PlayStation 5. The Japanese console maker's decision gives gamers more storage options, since there are many M.2 SSDs on the market. The M.2 slot has also paved the way for SSD manufacturers to partner with Sony to release licensed drives, which have been tested and approved for the PlayStation 5. As a result, you don't have to worry about whether the SSD's heatsink can keep the drive cool, or whether the Game Drive will fit inside the PlayStation 5.

Seagate's Game Drive SSDs, like the WD_Black SN850P, stick to the PCIe 4.0 x4 interface. That's the same interface the PlayStation 5 uses, so it makes little sense for vendors to tailor faster drives toward the gaming console. The Game Drive SSDs utilize Phison's PS5018-E18 PCIe 4.0 SSD controller, capable of hitting write and read speeds of over 7 GB/s. Built on TSMC's 12nm process node, the E18 is a popular, high-end controller for mainstream PCIe 4.0 SSDs. It comes equipped with three 32-bit Arm Cortex-R5 CPU cores and an eight-channel design to support NAND flash speeds of up to 1,600 MT/s and capacities of up to 8 TB. Seagate pairs the E18 controller with unspecified 3D TLC NAND in the company's Game Drive SSDs.

Seagate Game Drive Specifications
                               1 TB            2 TB            4 TB
Part Number                    ZP1000GP304001  ZP2000GP304001  ZP4000GP304001
Seq Reads (MB/s)               7,300           7,300           7,250
Seq Writes (MB/s)              6,000           6,900           6,900
Random Reads (K IOPS)          800             1,000           1,000
Random Writes (K IOPS)         1,000           1,000           1,000
Endurance (TBW)                1,275           2,550           5,100
Active Power, Average (W)      6.3             7.8             8.6
Idle Power PS3, Average (mW)   20              25              30
Low Power L1.2 mode (mW)       <5              <5              <5

Seagate offers the Game Drive SSDs in 1 TB, 2 TB, and 4 TB variants. Sony recently deployed a software update for the PlayStation 5 to support 8 TB SSDs, so it's a shame that Seagate doesn't offer an 8 TB variant of the Game Drive, as competing brands, including Corsair, Sabrent, PNY, Addlink, and Inland, all have 8 TB drives in their arsenals.

Seagate's Game Drive series delivers sequential read and write speeds of up to 7,300 MB/s and 6,900 MB/s, respectively. Random performance scales up to 1,000,000 IOPS for both reads and writes. However, sequential and random performance vary by capacity; the 2 TB model is the only SKU to hit the maximum quoted figures. The Game Drive series' sequential read performance is on par with the WD_Black SN850P, and its sequential write performance is somewhat faster. However, the WD_Black SN850P flaunts better random performance than the Game Drive.

Endurance doubles with each step up in capacity. The 1 TB model is rated for 1,275 TBW (terabytes written), while the 2 TB and 4 TB drives are rated for 2,550 TBW and 5,100 TBW, respectively. That's one aspect where the Game Drive is substantially better than the WD_Black SN850P: for comparison, the SN850P's endurance ratings for the 1 TB, 2 TB, and 4 TB drives are 600 TBW, 1,200 TBW, and 2,400 TBW, respectively, making Seagate's SSDs over 2X more durable than the Western Digital drives.
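The "over 2X" claim is simple arithmetic, using only the TBW ratings quoted above:

```python
# TBW (terabytes written) ratings from the article
game_drive = {"1 TB": 1275, "2 TB": 2550, "4 TB": 5100}
sn850p     = {"1 TB": 600,  "2 TB": 1200, "4 TB": 2400}

# Endurance advantage at each capacity point
ratios = {cap: game_drive[cap] / sn850p[cap] for cap in game_drive}
print(ratios)   # every capacity works out to 2.125x, i.e. "over 2X"
```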

The Game Drive 1 TB and 2 TB models are shipping now and have $99.99 and $159.99 MSRPs, respectively. However, they're selling for $104 and $174 on Amazon. Meanwhile, the Game Drive 4 TB will set you back $449. The 1 TB and 2 TB drives are $10 cheaper than the WD_Black SN850P. The WD_Black SN850P 4 TB is up to $70 less expensive. Due to the licensing fees, Seagate's Game Drive series is significantly more costly than the non-licensed SSDs. For comparison, the WD_Black SN850X, which is commonly regarded as one of the best SSDs for the PlayStation 5, is available at $69 for the 1 TB model, $129 for the 2 TB model, and $282 for the 4 TB model.

Tue, 03 Oct 2023 16:00:00 EDT
Tenstorrent to Use Samsung’s SF4X for Quasar Low-Cost AI Chiplet Anton Shilov

Tenstorrent this week announced that it has chosen Samsung's SF4X (4 nm-class) process technology for its upcoming low-cost, low-power machine learning chiplet, codenamed Quasar. The chiplets will be made at Samsung's new fab near Taylor, Texas, once it becomes operational in 2024.

Tenstorrent's Quasar chiplet is a new addition to the company's roadmap. Based on an image provided by the company, the chiplet is set to pack at least 80 Tensix cores based on the RISC-V instruction set architecture and tailored to run artificial intelligence workloads in a variety of formats, such as BF4, BF8, INT8, FP16, and BF16. Tenstorrent's Quasar chiplets are designed to operate in groups and to be paired with the company's CPU chiplets, so they are equipped with non-blocking die-to-die interfaces.

Samsung's SF4X is a process technology designed for high-performance computing applications. It is tailored for high clocks and high voltages to ensure maximum performance.

Tenstorrent does not disclose the estimated performance of its Quasar chiplet. But assuming that it has 80 Tensix cores, the same number as the Wormhole chiplet, which is rated at 328 FP8 TOPS, we can probably expect performance in a similar range or slightly higher, considering that Quasar is made using a performance-enhanced process technology.

Tenstorrent officially positions its Quasar chiplets as a low-power, low-cost solution for machine learning, so we can only wonder whether the company will try to squeeze every last bit of performance out of them or choose a different power strategy.

"Samsung Foundry is expanding in the U.S., and we are committed to serving our customers with the best available semiconductor technology," said Marco Chisari, head of Samsung's U.S. Foundry business. "Samsung's advanced silicon manufacturing nodes will accelerate Tenstorrent's innovations in RISC-V and AI for data center and automotive solutions. We look forward to working together and serving as Tenstorrent's foundry partner."

One interesting wrinkle in Tenstorrent's relationship with Samsung is that the former recently secured $100 million in financing in a round co-led by Hyundai Motor Group and the Samsung Catalyst Fund. Hyundai and Samsung need AI processors in one form or another, so it is not surprising that their funds have invested in Tenstorrent. Meanwhile, and I am speculating here, Samsung may also be interested in producing chips for Tenstorrent for strategic reasons.

Tue, 03 Oct 2023 10:30:00 EDT
Samsung T9 Portable SSD Review: A 20 Gbps PSSD for Prosumer Workloads Ganesh T S Samsung's portable SSD lineup has enjoyed significant market success since the launch of the T1 back in 2015. Despite the release of the Thunderbolt-capable X5 PSSD in 2018, the company has been focusing on the mainstream market. Multiple T series drives have made it to the market over the last 8 years. The product line made the transition to NVMe and USB 3.2 Gen 2 only in 2020 with the launch of the T7 Touch. Today, the company unveiled its first USB 3.2 Gen 2x2 (20 Gbps) PSSD - the Samsung T9. Read on for an in-depth investigation into the design and performance profile of the T9's 4 TB version.

Tue, 03 Oct 2023 10:00:00 EDT
Asus Formally Completes Acquisition of Intel's NUC Business Anton Shilov

ASUS has formally completed its acquisition of Intel's Next Unit of Computing (NUC) product line, covering NUC products based on Intel's 10th to 13th Generation Core processors. Asus is set to continue building and supporting Intel's existing NUCs and will, over time, roll out its own compact NUC systems for office, entertainment, gaming, and many other applications.

"I am confident that this collaboration will enhance and accelerate our vision for the mini PC," said Jackie Hsu, Asus senior vice president and co-head of OP & AIoT business groups, at the signing ceremony. "Adding the Intel NUC product line to our portfolio will extend ASUS's AI and IoT R&D capabilities and technology solutions, especially in three key markets – industrial, commercial, and prosumer."

Asus held a formal handover ceremony in Taipei and took control of the NUC product lines that span from business applications to gaming. With the acquisition, Asus instantly commenced business processes for the NUC range, ensuring a hassle-free transition for existing customers. Under the terms of the agreement, Asus obtained licenses for both Intel's hardware designs and software. This move widens Asus's operational scope in R&D and extends its reach in logistics, tech support, and numerous application areas. 

Asus envisions broadening its NUC product line and distribution channels. The focus will remain on offering high-quality compact PCs with robust security and advanced technologies, which NUC is known for. ASUS also aims to produce eco-friendly NUC products while emphasizing impeccable service for its customer base.

"This is an exciting time for both Intel and Asus as we move forward with the next chapter in NUC's story," said Michelle Johnston Holthaus, Executive Vice President and General Manager of the Client Computing Group at Intel, who also attended the event. "Today's signing ceremony signifies more than just a business deal. It signifies ASUS' dedication to enhancing the lives of NUC customers and partners around the world. I look forward to seeing NUC thrive as part of the ASUS family."

It should be noted that Asus's Intel NUC license is not exclusive, so Intel may eventually enable other PC makers to build its NUCs. Though at this point, Asus remains the only licensee.

Mon, 02 Oct 2023 14:15:00 EDT
Micron to Ship HBM3E Memory to NVIDIA in Early 2024 Anton Shilov

Micron has reaffirmed plans to start shipments of its HBM3E memory in high volume in early 2024, while also revealing that NVIDIA is one of its primary customers for the new RAM. Meanwhile, the company stressed that its new product has been received with great interest by the industry at large, hinting that NVIDIA will likely not be the only customer to end up using Micron's HBM3E.

"The introduction of our HBM3E product offering has been met with strong customer interest and enthusiasm," said Sanjay Mehrotra, president and chief executive of Micron, at the company's earnings call.

Introducing HBM3E, which the company also calls HBM3 Gen2, ahead of its rivals Samsung and SK Hynix is a big deal for Micron, which is an underdog in the HBM market with roughly a 10% market share. The company obviously pins a lot of hope on its HBM3E, since being first will likely enable it to offer a premium product ahead of its rivals, both to drive up its revenue and margins and to win market share.

Typically, memory makers tend not to reveal names of their customers, but this time around Micron emphasized that its HBM3E is a part of its customer's roadmap, and specifically mentioned NVIDIA as its ally. Meanwhile, the only HBM3E-supporting product that NVIDIA has announced so far is its Grace Hopper GH200 compute platform, which features an H100 compute GPU and a Grace CPU.

"We have been working closely with our customers throughout the development process and are becoming a closely integrated partner in their AI roadmaps," said Mehrotra. "Micron HBM3E is currently in qualification for NVIDIA compute products, which will drive HBM3E-powered AI solutions."

Micron's 24 GB HBM3E stacks are based on eight stacked 24 Gbit memory dies made using the company's 1β (1-beta) fabrication process. These stacks can hit data rates as high as 9.2 GT/s, enabling a peak bandwidth of 1.2 TB/s per stack, a 44% increase over the fastest HBM3 modules available. Meanwhile, the company is not going to stop with its 8-Hi, 24 Gbit-based HBM3E assemblies: it has announced plans to launch higher-capacity 36 GB 12-Hi HBM3E stacks in 2024, after it initiates mass production of the 8-Hi 24 GB stacks.
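The capacity and bandwidth figures above check out against the stated data rates. In the sketch below, the 1024-bit per-stack interface is the standard HBM interface width rather than a number from the article:

```python
def stack_bandwidth_gbs(bus_bits: int, data_rate_gtps: float) -> float:
    """Peak bandwidth of one HBM stack in GB/s."""
    return bus_bits / 8 * data_rate_gtps

# 8-Hi stack of 24 Gbit dies -> 24 GB
capacity_gb = 8 * 24 / 8
print(capacity_gb)                       # 24.0

# 1024-bit interface at 9.2 GT/s -> ~1.2 TB/s per stack
hbm3e = stack_bandwidth_gbs(1024, 9.2)   # 1177.6 GB/s
# The fastest standard HBM3 runs at 6.4 GT/s
hbm3 = stack_bandwidth_gbs(1024, 6.4)    # 819.2 GB/s
print(round(hbm3e / hbm3 - 1, 4))        # 0.4375, i.e. the quoted ~44% uplift
```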

"We expect to begin the production ramp of HBM3E in early calendar 2024 and to achieve meaningful revenues in fiscal 2024," Mehrotra added.

Thu, 28 Sep 2023 20:00:00 EDT
Micron Samples 128 GB Modules Based on 32 Gb DDR5 ICs Anton Shilov

Micron is sampling 128 GB DDR5 memory modules, the company said at its earnings call this week. The modules are based on the company's latest single die, non-stacked 32 Gb DDR5 memory devices, which the company announced earlier this summer and which will eventually open doors for 1 TB memory modules for servers.

"We expanded our high-capacity D5 DRAM module portfolio with a monolithic die-based 128 GB module, and we have started shipping samples to customers to help support their AI application needs," said Sanjay Mehrotra, president and chief executive of Micron. "We expect revenue from this product in Q2 of calendar 2024."

Micron's 32 Gb DDR5 dies are made on the company's 1β (1-beta) manufacturing process, which is the last production node that relies solely on multi-patterning with deep ultraviolet (DUV) lithography and does not use extreme ultraviolet (EUV) lithography tools. That is about all we know about Micron's 32 Gb DDR5 ICs at this point: the company does not disclose their maximum speed bin, though we can expect a drop in power consumption compared to two 16 Gb DDR5 ICs operating at the same voltage and data transfer rate.

Micron's new 32 Gb memory chips pave the way for a standard 32 GB module for personal computers built from just eight individual memory chips, as well as a server-oriented 128 GB module based on 32 such ICs. Moreover, these chips make it feasible to produce memory modules with a 1 TB capacity, something deemed unattainable today. Such 1 TB modules might seem excessive for now, but they will benefit fields like artificial intelligence, Big Data, and server databases, enabling servers to support up to 12 TB of DDR5 memory per socket (in the case of a 12-channel memory subsystem).
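The capacity math works out as follows. Note that the 8-high die stacking used for the hypothetical 1 TB module below is my assumption; the article only says that 32 Gbit dies make such modules feasible, not how they would be built:

```python
DIE_GBIT = 32   # Micron's new monolithic DDR5 die density

def module_capacity_gb(packages: int, dies_per_package: int = 1) -> int:
    """Module capacity in GB, ignoring any extra ECC devices."""
    return packages * dies_per_package * DIE_GBIT // 8

print(module_capacity_gb(8))      # 32   -> client 32 GB module from 8 chips
print(module_capacity_gb(32))     # 128  -> server 128 GB module from 32 chips
print(module_capacity_gb(32, 8))  # 1024 -> 1 TB with assumed 8-high stacked packages

# Twelve channels with one such 1 TB module each gives the quoted 12 TB/socket
print(12 * module_capacity_gb(32, 8) // 1024)   # 12
```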

Speaking of DDR5 memory in general, it is noteworthy that the company expects that its bit production of DDR5 will exceed that of DDR4 in early 2024, placing it a bit ahead of the industry.

"Micron also has a strong position in the industry transition to D5," said Mehrotra. "We expect Micron D5 volume to cross over D4 in early calendar 2024, ahead of the industry."

]]> Thu, 28 Sep 2023 10:00:00 EDT,21077:news
Intel Meteor Lake SoC is NOT Coming to Desktops: Well, Not Technically Gavin Bonshor

Over the last couple of days, numerous reports have revealed that Intel's recently announced Meteor Lake SoC, primarily a mobile platform, would be coming to desktop PCs. Intel has further clarified that while their Meteor Lake processors will be featured in desktop systems next year, they won't power traditional socketed desktop PCs. Instead, these CPUs, primarily crafted for laptops, will be packaged in ball grid array (BGA) formats, making them suitable for compact desktops and all-in-one (AIO) devices.

Intel's statement, as reported by ComputerBase, emphasizes, "Meteor Lake is a power efficient architecture that will power innovative mobile and desktop designs, including desktop form factors such as All-in-One (AIO). We will have more product details to share in the future."

A senior Intel official recently mentioned that Meteor Lake processors are slated for desktop release in 2024. However, they won't be available in Intel's LGA1851 form factor, which caters to gaming rigs, client workstations, and conventional desktop systems. The practice of integrating laptop CPUs into compact PCs, such as NUCs and all-in-one PCs, isn't a novel one. Manufacturers have been doing this for years, and the intriguing aspect will be observing the performance and efficiency metrics of these high-end Meteor Lake laptop CPUs, especially when juxtaposed against the existing Raptor Lake processors designed for both desktops and laptops.

The rationale behind Intel's decision to exclude Meteor Lake processors from socketed desktops remains ambiguous. The CPU employs a multi-tile structure, with its compute tile being developed on the Intel 4 process technology. This technology marks Intel's inaugural use of extreme ultraviolet lithography (EUV), while the graphics tile and SoC leverage TSMC's fabrication methods. Both production techniques are poised to deliver commendable performance and efficiency, but Meteor Lake is not designed as a pure desktop product.

Current indications suggest that the Arrow Lake-S series will be aimed at LGA1851 motherboards, but this is anticipated for the latter half of 2024. While Q3/Q4 of 2024 is still a while away, Intel's motherboard partners, such as GIGABYTE and MSI, have been readying refreshed Z790 motherboards, with features such as Wi-Fi 7 set to come to Intel's impending Raptor Lake refresh platform, which is due sometime before the end of the year.

Source: ComputerBase

]]> Thu, 28 Sep 2023 09:00:00 EDT,21076:news
eMMC Destined to Live a Bit Longer: KIOXIA Releases New Generation of eMMC Modules Anton Shilov

While the tech industry as a whole is well in the middle of transitioning to UFS and NVMe storage for portable devices, it would seem that the sun won't be setting on eMMC quite yet. This week Kioxia has introduced a new generation of eMMC 5.1 modules, based around a newer generation of their 3D NAND and aimed at those final, low-budget devices that are still using the older storage technology.

Kioxia's new storage modules are compliant with the eMMC 5.1 standard, offering sequential read performance that tops out at 250 MB/s – the best that this technology can provide. But the internal upgrade to a newer generation of Kioxia's 3D NAND can still provide some benefits over older modules, including 2.5x higher sequential and random write performance as well as 2.7x higher random read performance. The new eMMC modules are also spec'd to be more durable, with up to 3.3x the TBW rating of their predecessors.

"e-MMC remains a popular embedded memory solution for a wide range of applications," said said Maitry Dholakia, vice president, Memory Business Unit, for Kioxia America. “Kioxia remains steadfast in its commitment to delivering the latest in flash technology for these applications. Our new generation brings new performance features which address end user demands – and create a better user experience."

Given that the remaining devices using eMMC storage fall into the simplistic and inexpensive category, the new lineup of Kioxia's eMMC modules only includes packages offering 64GB and 128GB of storage. Which, in the big picture, is a small amount of storage – but it's suitable for budget devices, as well as for electronics with limited storage needs, such as drones, digital signage, smart speakers, and TVs.

But the main idea behind Kioxia's new eMMC modules is perhaps not to improve performance and user experience, but rather to move the product line onto newer, cheaper 3D NAND. This lets Kioxia address inexpensive applications more cost-efficiently, which ensures the company can keep serving them going forward.

Kioxia expects to start mass production of its new 64 GB and 128 GB eMMC 5.1 storage modules in 2024. The company is sampling the new devices with its partners at present.

]]> Wed, 27 Sep 2023 20:00:00 EDT,21074:news
Crucial Unveils X9 Portable SSD: QLC for the Cost-Conscious Consumer Ganesh T S

Crucial entered the portable SSD market relatively late, with their X6 and X8 PSSDs being the mainstay for many years. Based on QLC NAND, they were marketed for read-intensive use-cases, though the generous amount of SLC cache ended up delivering good write performance too for mainstream consumers - particularly in the X8. Recently, the company also started focusing on the prosumer / power users market with the launch of the X9 Pro and X10 Pro. Based on Micron's 176L 3D TLC NAND, these drives came with guaranteed write speeds.

Earlier this week, the company launched a successor to the Crucial X8 in the same form-factor as that of the recently launched X9 Pro and X10 Pro. The new USB 3.2 Gen 2 Crucial X9 PSSD takes on the same 65 x 50mm dimensions, but opts for an ABS plastic enclosure instead of the metal one used in the Pro units. Similar to the X8 that is being replaced, the X9 also doesn't advertise write speeds and there is no hardware encryption available. The lanyard hole is retained from the Pro design, but the LED indicator has been dropped. While the X9 is drop-proof up to 2m, the water- and dust-resistance features are not included.

The Crucial X9 PSSD utilizes Micron's 176L 3D QLC NAND and retains the Phison U17 native flash controller. The 1TB, 2TB, and 4TB capacity points are being introduced at $80, $120, and $250, though Amazon currently lists them at $90, $140, and $280 respectively. It is no secret that there is a glut in the flash market currently, resulting in very attractive (P)SSD price points for consumers. However, it is also well-known that it is a cyclic trend. Industry observers expect prices to go up sometime next year, and based on inventory levels of various models with different retailers, we might see strange pricing swings.

The Crucial X9 PSSD is a much-needed upgrade to the aging X8, and we are glad that Crucial has decided to release a new model instead of silently updating the NAND in the older version. The new form-factor and design for this product class is also a welcome change. Crucial's expanded product lineup ensures that it is competitive against established players like Samsung and Western Digital across all high-volume PSSD market segments. The only missing part is a Thunderbolt / USB4 model, and we hope Crucial will address that in the near future.

]]> Wed, 27 Sep 2023 08:00:00 EDT,21072:news
Corsair's Dominator Titanium Memory Now Available, Unveils Plans for Beyond 8000 MT/s Anton Shilov

Corsair has started sales of its Dominator Titanium memory modules that were formally introduced this May. The new modules bring together a luxurious look, a customizable design, and extreme data transfer rates of up to 8000 MT/s. Speaking of performance, the company has implied that it intends to introduce Dominator Titanium speed bins beyond DDR5-8000 once the right platform arrives.

Corsair's Dominator Titanium family is based around 16 GB, 24 GB, 32 GB, and 48 GB memory modules that come in kits ranging from 32 GB (2 x 16 GB) up to 192 GB (4 x 48 GB). As for performance, the lineup listed at the company's website includes DDR5-6000 CL30, DDR5-6400 CL32, DDR5-6600 CL32, DDR5-7000 CL34, DDR5-7000 CL36, DDR5-7200 CL34, and DDR5-7200 CL36, with voltages of 1.40 V – 1.45 V.

Although Corsair says that Dominator Titanium modules with data transfer speeds beyond 8000 MT/s are coming, note that they will require next-generation platforms from AMD and Intel. For now, the company only offers 500 First Edition Dominator Titanium kits rated for DDR5-8266 operation for its loyal fans.

To address demand from different types of users, Corsair offers Dominator Titanium modules with XMP 3.0 SPD settings and black or white heat spreaders for Intel's 12th and 13th Generation Core CPUs, as well as with AMD EXPO SPD profiles and grey-finished heat spreaders for AMD's Ryzen processors.

In terms of heat spreader design, Corsair has remained true to its aesthetics. The modules are equipped with 11 customizable Capellix RGB LEDs, offering users a personalized touch; these can be easily adjusted using Corsair's proprietary software. For enthusiasts who lean towards a more traditional aesthetic, Corsair provides an alternative finned design, reminiscent of its classic memory modules.

Speaking of heat spreaders, note that despite the modules' name, they do not come with titanium radiators and keep using aluminum. That is a good thing: titanium has a rather low thermal conductivity of 11.4 W/mK, so it would insulate the memory chips rather than draw heat away from them. Traditionally, Corsair's Dominator memory modules use cherry-picked DRAM chips and the company's proprietary printed circuit boards, enhanced with internal cooling planes and external thermal pads to improve cooling.
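The aluminum-vs-titanium difference can be illustrated with one-dimensional Fourier conduction, dT = P·t / (k·A). The power and geometry figures below are illustrative assumptions, not measurements of Corsair's modules; only the two conductivity values carry the point:

```python
def delta_t(power_w: float, thickness_m: float, k_w_mk: float, area_m2: float) -> float:
    """Temperature drop across a slab conducting heat: dT = P * t / (k * A)."""
    return power_w * thickness_m / (k_w_mk * area_m2)

P = 5.0    # watts dissipated by the DRAM chips (assumed)
t = 0.002  # 2 mm heat spreader wall (assumed)
A = 0.001  # 10 cm^2 contact area (assumed)

K_ALUMINUM = 237.0  # W/m-K, typical for aluminum
K_TITANIUM = 11.4   # W/m-K, the figure cited above

print(round(delta_t(P, t, K_ALUMINUM, A), 3))  # ~0.042 K across aluminum
print(round(delta_t(P, t, K_TITANIUM, A), 3))  # ~0.877 K across titanium, ~20x worse
```

Whatever the exact geometry, the drop scales with 1/k, so titanium would always be roughly 20x worse at moving the same heat.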

Corsair's Dominator Titanium memory products are now available both directly from the company and from its resellers. The cheapest Dominator Titanium DDR5-6000 CL30 32 GB kit (2 x 16 GB) costs $175, whereas faster and higher-capacity kits are priced higher.

]]> Tue, 26 Sep 2023 19:00:00 EDT,21071:news
GlobalFoundries Applies for CHIPS Money to Expand U.S. Fabs Anton Shilov

Update 9/30: Correcting the number of companies interested in receiving support from the CHIPS fund.

GlobalFoundries has applied for financial support from the U.S. CHIPS and Science Act to expand its American manufacturing sites, the company said this week. The company intends to get federal grants and investment tax credits to upgrade facilities used to build chips for various applications, including automotive, aerospace, defense, and many other industries.

GlobalFoundries' initiative is in line with the provisions of the U.S. CHIPS and Science Act, which aims to strengthen the nation's semiconductor production capabilities. The act sets aside a substantial $52.7 billion to support semiconductor research, production, and workforce development. Additionally, it offers a 25% investment tax credit for the construction of chip plants, estimated to be worth around $24 billion, as Reuters notes.

This expansion is beneficial for the company and essential for enhancing the U.S.'s economic stability, supply chain robustness, and defense mechanisms, the company said.

"As the leading manufacturer of essential semiconductors for the U.S. government, and a vital supplier to the automotive, aerospace and defense, IoT and other markets, GF has submitted our applications to the CHIPS Program Office to participate in the federal grants and investment tax credits enabled by the U.S. CHIPS and Science Act," said Steven Grasso, senior director of global government affairs at GF. "This federal support is critical for GF to continue growing its U.S. manufacturing footprint, strengthening U.S economic security, supply chain resiliency, and national defense."

GlobalFoundries is not alone in seeking money from the CHIPS fund. The U.S. Department of Commerce recently said that over 500 companies from 42 states had expressed interest in these semiconductor subsidies as of August. The subsidies aim to foster innovation and ensure the U.S. remains at the forefront of semiconductor technology.

Sources: GlobalFoundries, Reuters

]]> Tue, 26 Sep 2023 12:00:00 EDT,21070:news
Modular LPDDR Memory Becomes A Reality: Samsung Introduces LPCAMM Memory Modules Ryan Smith Although Low Power DDR (LPDDR) memory has played a pivotal role in reducing PC laptop power usage, the drawback to the mobile-focused memory has always been its tight signaling and power delivery requirements. Designed to be placed close to its host CPU in order to minimize power expenditures and maximize clockspeeds, LPDDR memory is unsuitable for use in traditional DIMMs and SO-DIMMs – instead requiring that it be soldered down on a device in advance. But it looks like the days of soldered-down LPDDR memory are soon at an end, as this evening Samsung is announcing a new standard for removable and modular LPDDR memory: LPCAMM.

]]> Mon, 25 Sep 2023 22:00:00 EDT,21069:news
Solidigm Introduces D7-P5810: 144L SLC NVMe Drive for Write-Intensive Workloads Ganesh T S

Solidigm's datacenter SSD offerings have been clearly delineated into different categories - the D3 series SATA offerings for legacy servers, the D5 series QLC-based offerings (with different models offering different tradeoffs between cost and endurance), and the D7 series NVMe drives for the best performance and endurance ratings. The company has been using TLC NAND in the D7 drives so far. Last week, the company introduced a new member in their D7 lineup for extremely write-intensive workloads - the D7-P5810, using their mature 144L SLC 3D NAND.

Storage-class memory (SCM) options such as Optane have been used by hyperscalers for a variety of use-cases such as write-caching, HPC applications, journaling, online transaction processing (OLTP), etc. With the winding down of the Optane product line, many opportunities have opened up for SSD vendors to bring near-SCM type products into the market. We saw Micron introducing their XTR NVMe SSDs earlier this year using their 176L 3D NAND in SLC mode. The company had optimized the firmware on the drives and drawn up specifications for near-Optane performance in Microsoft SQL Server analytics workloads. Solidigm is taking a similar approach with the D7-P5810, albeit with optimizations for a different use-case.

Solidigm took a look at the requirements satisfied by Optane drives in Alibaba's (Optane + QLC) cloud server local-disk deployment, and concluded that the Optane drives were greatly over-engineered for the job. For example, Alibaba's workload demanded only 37 DWPD, while Optane provided 100. Likewise, the 4K random write requirement was only 8K IOPS per tenant, while Alibaba's configuration had the Optane drive providing 20K IOPS per tenant. Solidigm has optimized the firmware of the D7-P5810 to meet these requirements, providing 50 DWPD worst-case endurance and 10K IOPS per tenant at capacities similar to the Optane drives used by Alibaba.
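To put those DWPD figures in perspective, drive-writes-per-day translate into lifetime terabytes written as capacity x DWPD x days. A minimal sketch, assuming the industry-standard 5-year enterprise warranty period (Solidigm has not confirmed the warranty length here):

```python
def lifetime_tbw(capacity_tb: float, dwpd: float, warranty_years: float = 5) -> float:
    """Total terabytes written over the warranty: capacity x DWPD x 365 x years."""
    return capacity_tb * dwpd * 365 * warranty_years

# 800 GB D7-P5810 at its 50 DWPD rating (5-year warranty assumed):
print(lifetime_tbw(0.8, 50))   # 73000.0 TB, i.e. ~73 PB of write endurance

# Alibaba's stated 37 DWPD requirement vs. the 100 DWPD Optane delivered:
print(lifetime_tbw(0.8, 37))   # ~54020 TB actually needed
print(lifetime_tbw(0.8, 100))  # ~146000 TB provided by an equivalent Optane drive
```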

The specifications of the D7-P5810 are summarized below. The 800 GB version of the drive is in mass production, while the 1.6 TB version is expected to make an appearance in the first half of 2024.

The company had a few interesting presentations at Storage Field Day 26, and in one of them it put out a slide comparing the D7-P5810 against the competition.

It is not difficult to figure out that Competitor A in the above slide is Micron's XTR NVMe SSD, while Competitor B is Kioxia's FL6 Series. Different enterprise SSD use-cases have different requirements in terms of sequential speeds, random access IOPS, endurance, and power consumption. As a result, we are starting to see vendors offer specialized drives with firmware optimized for a particular use-case. The differences in the above comparison can be attributed to the vendor optimizing for different use-cases. Fundamental differences in the flash packages apart (176L in Micron's XTR vs. 96L BiCS in Kioxia's FL6 vs. 144L in Solidigm's D7-P5810), it is likely that these vendors can achieve different tradeoffs with their drive's firmware if required.

Solidigm acquired Intel's Cloud Storage Acceleration Layer (CSAL) team earlier this year. At Intel, the group (which had open-sourced its work in the Storage Performance Development Kit) had been working on Optane as an accompanying drive for other, slower media. Since joining Solidigm, the group has shifted its focus from Optane to SLC, with the aim of using drives such as the D7-P5810 as a complement to Solidigm's high-density QLC drives.

Other than the above use-case in deployment at Alibaba, the D7-P5810 can also be used in a wide variety of scenarios such as metadata storage, caching, and data placement based on service-level agreement (SLA) requirements.

With the inclusion of the D7-P5810, the Solidigm enterprise SSD product line has a product portfolio encompassing a wide range of endurance ratings with suitability for different applications and use-cases.

Optane may be winding down soon, but it is heartening to see vendors like Micron and Solidigm stepping up to provide SLC-based alternatives. By not over-engineering for the specific use-cases currently served by Optane drives, these vendors are also able to offer enterprise SSD users a cost-effective solution.

]]> Mon, 25 Sep 2023 11:00:00 EDT,20069:news
Sabrent Ships 8TB SSD for PlayStation 5: High Capacity for a High Price Anton Shilov

Although Sony's PlayStation 5 game console fully supports off-the-shelf PCIe 4.0 solid-state drives, Sony initially limited the maximum capacity to 4 TB. Recently the company removed that cap as part of the PS5 8.00 firmware update, and now the system can support drives with up to 8 TB. Sabrent, in turn, is among the first SSD makers to offer an 8 TB drive specifically marketed for the PS5.

"PC and PS5 enthusiasts have long anticipated the expansion of internal storage capacity, and now, this dream has become a reality with the introduction of the Sabrent 8TB Rocket 4 Plus SSD," a statement by Sabrent reads.

Sabrent's Rocket 4 Plus 8 TB is based on a Phison platform and is actually a bit faster than the rest of the drives in the series. The manufacturer says that the SSD offers an up to 7,100 MB/s sequential read and up to 6,000 MB/s sequential write speeds. In order to keep the drive properly cooled under high loads, the drive comes equipped with a PS5-compatible aluminum heatsink that also doubles as a replacement for the drive bay's metal cover plate.

Sabrent's 8 TB Rocket 4 Plus drive (SB-RKT4P-PSHS-8TB) can now be purchased from Amazon for $1,009.99, which is twice the price of Sony's PlayStation 5 console, and a $10 premium over a bare 8TB Rocket 4 Plus.

This is of course a huge investment, but the PS5's 825 GB of internal storage – only part of which is available to end users – is a fraction of what modern SSDs can provide three years after the console's launch, and that limited capacity is quickly consumed by modern, high-end games. For example, Call of Duty: Black Ops Cold War takes up over 300 GB and Gran Turismo 7 nears 200 GB.
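The arithmetic is stark even before accounting for the OS reserving part of the stock drive (a rough sketch using the raw capacities and the install sizes cited above):

```python
def installs_that_fit(drive_gb: int, install_gb: int) -> int:
    """How many installs of a given size fit on a drive (raw capacity, integer division)."""
    return drive_gb // install_gb

COD_GB = 300  # Call of Duty: Black Ops Cold War, per the figure above

print(installs_that_fit(825, COD_GB))   # 2  -> stock PS5 storage
print(installs_that_fit(8000, COD_GB))  # 26 -> an 8 TB Rocket 4 Plus
```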

Now that Sony's PlayStation 5 supports 8 TB SSDs, the console gets yet another advantage over Microsoft's Xbox Series X|S consoles, which only support proprietary drives of up to 2 TB. Since these drives are essentially M.2-2230 SSDs encapsulated in a plastic case, it remains to be seen when an 8 TB drive will come to the latest generation of Xbox consoles.

]]> Fri, 22 Sep 2023 17:30:00 EDT,20068:news
ECS LIVA Q3D and ACEMAGIC T8 Plus micro-PCs Review: Jasper Lake and Alder Lake-N in a Smaller-than-UCFF Package Ganesh T S Compact computing systems have gained significant market share over the last decade. Improvements in the performance per watt metric of processors have enabled the replacement of bulky desktop PCs by ultra-compact form-factor (UCFF) machines with a 4 in. x 4 in. footprint. Motivated by IoT applications at the edge, some companies started creating x86 systems in sub-4x4 form-factors using Intel's Apollo Lake processors. ECS was one of the first mainstream vendors to pay attention to this segment with their LIVA Q Series using Intel's Atom series and AMD's first-generation Ryzen Embedded SoCs. With the introduction of more power-efficient platforms, Asian manufacturers such as ACEMAGIC, GMKtec, and MinisForum have also entered this micro-PC market with a wider range of processor choices.

Intel introduced the Alder Lake-N (ADL-N) product family to take over Jasper Lake's role in the cost-conscious low-power PC market. As ADL-N ramps up and Jasper Lake winds down, we are seeing products based on both families being actively sold in the market. We took advantage of this opportunity to source two micro-PCs - the LIVA Q3D from ECS, and the T8 Plus from ACEMAGIC - and put them through our evaluation routine to study the benefits of ADL-N's Gracemont microarchitecture over Jasper Lake's Tremont. Read on for a detailed look at the results along with a discussion of the tradeoffs involved in pursuing a smaller-than-UCFF footprint.

]]> Thu, 21 Sep 2023 09:10:00 EDT,20056:news
Asus Launches ROG Matrix GeForce RTX 4090: All a 4090 Can Be, For $3200 Anton Shilov

When Asus teased its ROG Matrix GeForce RTX 4090 graphics card back at Computex, it was clear that the company's ambitions were to develop no less than the world's fastest graphics card. The company meticulously described the card's advanced printed circuit board design, voltage regulating module, and cooling system, but it never revealed two important details: actual clocks and price. This week it disclosed both: the board will clock the GPU at 2.70 GHz out-of-box and will cost $3,199, twice the price of a reference GeForce RTX 4090.

An Overclocker's Dream Comes True

Asus proudly states that the ROG Matrix GeForce RTX 4090 is ideal for overclocking enthusiasts. The board uses an AD102 GPU equipped with 16,384 CUDA cores and runs it at a peak frequency of 2700 MHz, surpassing NVIDIA's reference boost clock of 2520 MHz. In a physically unmodified (but LN2-cooled) state, an extreme-overclocked ROG Matrix GeForce RTX 4090 surpassed the 4 GHz GPU clock threshold earlier this year, an achievement that underscores its overclocking potential.

Since its debut at Computex, the card has secured three World Records and five top spots, totaling seven overclocking achievements in various benchmarks, Asus says.

NVIDIA has dozens of add-in-board (AIB) partners producing factory overclocked graphics cards. But with EVGA and its Kingpin-edition graphics cards gone, there are not many brands left that cater to the demands of extreme enthusiasts. Asus is certainly one of them, and with its range-topping ROG Matrix RTX 4090, the company has gone above and beyond the reference design.

Through Hardware and Software

The card employs a custom circuit board featuring a 24-phase VRM and a 12VHPWR connector, ensuring up to 600W of power for the GPU. The board is equipped with multiple sensors to oversee the temperatures of various components (and even build a temperature map), as well as to measure currents on the card's 12VHPWR connector (more on this later).

The ROG Matrix GeForce RTX 4090 comes with a comprehensive closed-loop hybrid liquid cooling solution with a 360-mm radiator, magnetically connected fans, and RGB illumination. In a bid to improve the cooler's efficiency, Asus used a liquid metal thermal compound – a material it already employs in its gaming laptops, but one that is particularly hard to apply to desktop PC components (marking a first for Asus on a graphics card), since those components tend to be mounted at varying angles.

The ROG Matrix RTX 4090's strengths are not solely in its hardware though. Asus has enhanced its GPU Tweak III software, adding more monitoring and overclocking capabilities that leverage the card's advanced features and sensors. Users can customize various settings, including power targets, GPU voltage, and fan speed. The software also offers real-time temperature insights and tracks the card's performance at varying power settings.

Another notable aspect is the card's Power Detector+ feature. This function monitors the 12VHPWR connector, watching currents across all of its power rails to identify any irregularities, and recommends that users reseat the notorious plug if needed.

A Niche Product

Meanwhile, the ROG Matrix RTX 4090's performance comes at a cost, as the product is priced at double a standard GeForce RTX 4090. That greatly devalues it in the eyes of average buyers. But the Asus ROG Matrix RTX 4090 is a niche product: it targets hardcore overclocking enthusiasts eager to maximize their hardware's performance. This card is for those who relish fine-tuning their systems for minor benchmarking improvements, making it a trophy piece for tech enthusiasts.

]]> Wed, 20 Sep 2023 20:15:00 EDT,20067:news
Intel High-NA Lithography Update: Dev Work On Intel 18A, Production On Future Node Ryan Smith

As part of Intel's suite of hardware announcements at this year's Intel Innovation 2023 conference, the company offered a brief update on its plans for High-NA EUV machines, which will become a cornerstone of future Intel process nodes. Following some changes in Intel's process roadmap – in particular, Intel 18A being pulled in because it was ahead of schedule – Intel has revised its plans for the next-generation EUV machines. Intel will now only be using the machines with its 18A node as part of development and validation work on the new machines; production use of High-NA machines will now come on Intel's post-18A node.

High Numerical Aperture (High-NA) machines are the next generation of EUV photolithography tools. These massive scanners incorporate 0.55 numerical aperture optics, significantly larger than the 0.33 NA optics used in first-generation production EUV machines, which will ultimately allow finer lines to be drawn. High-NA machines are going to be a critical component in enabling nodes below 2nm/20 angstroms.
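The resolution benefit of the larger optics follows from the Rayleigh criterion, CD = k1 · λ / NA. A quick sketch with EUV's 13.5 nm wavelength (the k1 process factor below is an assumed illustrative value; real values vary with resist chemistry and illumination scheme):

```python
def min_feature_nm(wavelength_nm: float, na: float, k1: float = 0.33) -> float:
    """Rayleigh criterion: smallest printable feature, CD = k1 * lambda / NA."""
    return k1 * wavelength_nm / na

EUV_WAVELENGTH_NM = 13.5

print(round(min_feature_nm(EUV_WAVELENGTH_NM, 0.33), 1))  # ~13.5 nm, today's 0.33 NA optics
print(round(min_feature_nm(EUV_WAVELENGTH_NM, 0.55), 1))  # ~8.1 nm with High-NA 0.55 optics
```

Holding k1 constant, moving from 0.33 to 0.55 NA shrinks the minimum printable feature by the ratio of the apertures, i.e. roughly 40%.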

At the time that Intel laid out their “5 nodes in 4 years” roadmap in 2021, the company announced that they were going to be the lead customer for ASML’s High-NA machines, and would be receiving the first production machine. High-NA, in turn, was slated to be a major part of Intel’s 18A node.

Size Comparison: ASML Normal & High NA EUV Machines

But since 2021, plans have changed for Intel, seemingly in a good way. Progress on 18A has been ahead of schedule, such that, in 2022, Intel announced they were pulling in 18A manufacturing from 2025 to H2’2024. Given that the release date of ASML’s High-NA machines has not changed, however, that announcement from Intel left open some questions about how High-NA would fit into their 18A node. And now we finally have some clarification on the matter from Intel.

High-NA machines are no longer a part of Intel's production plans for 18A. With the node now arriving before production-grade High-NA machines, Intel will be producing 18A with the tools they have, such as ASML's NXE 3000-series EUV scanners. Instead, the intersection between 18A and High-NA will be Intel using the 18A line to develop and validate the use of High-NA scanners for future production. After that, Intel will finally use High-NA machines as part of the production process for its next-generation, post-18A node, which is simply being called "Intel Next" for now.

As for the first High-NA development machine, Intel also confirmed this week that their schedule for development remains on track. Intel is slated to receive their first High-NA machine late this year – which as Pat Gelsinger put it in his keynote, is his Christmas present to Dr. Ann Kelleher, Intel’s EVP and GM of technology development.

Finally, back on the subject of the Intel 18A process, Intel says that they are progressing well on their second-generation angstrom node. The 0.9 PDK, which should be the final pre-production PDK, is nearly done, and should enable Intel's teams to ramp up designing chips for the process. Intel, for its part, intends to start 18A silicon fab work in Q1'2024. Based on Intel's roadmaps thus far, that is most likely going to be the first revision of one of the dies on Panther Lake, Intel's first 18A client platform.

]]> Wed, 20 Sep 2023 19:20:00 EDT,20066:news
Intel Announces Panther Lake Client Platform, Built on Intel 18A For 2025 Gavin Bonshor

While the primary focus has been on Intel's impending Meteor Lake SoC due by the end of the year, Intel CEO Pat Gelsinger unveiled more about their current client processor roadmap. Aside from a demo showing off a 'Lunar Lake' test box, Pat Gelsinger also announced Panther Lake, a new Intel client platform that is on track for a release sometime in 2025.

Intel's updated roadmap has given the industry a glimpse into what lies ahead. Following the much-anticipated Lunar Lake processors set for a 2024-2025 timeframe, Panther Lake is set to bring all the technological advancements of Intel's 18A node to the party.

As mentioned, Intel demoed Lunar Lake's AI capabilities live at Intel Innovation 2023. This included a pair of demos: one running Riffusion, an AI plugin for the Audacity software that can generate music; the second running Stable Diffusion with a text-to-image generation model – it produced a giraffe in a cowboy hat, for reference. This was all done using a working Lunar Lake test box, which looked to run the two demos with ease.

Intel Client Processor Roadmap
Name           P-Core uArch    E-Core uArch   Process Node (Compute Tile)   Release Year
Meteor Lake    Redwood Cove    Crestmont      Intel 4                       2023 (December)
Arrow Lake     Lion Cove?      Crestmont?     Intel 20A                     2024
Lunar Lake     Lion Cove?      Skymont?       Intel 20A?                    2024?
Panther Lake   ?               ?              Intel 18A                     2025

Pivoting to Panther Lake: Intel, via CEO Pat Gelsinger at Intel Innovation 2023, said that it's on track for release in 2025; we also know that Intel is sending it to fabs in Q1 of 2024. This means we're getting Meteor, Arrow, Lunar, and then Panther Lake (in that order) by the end of 2025. Panther Lake aims to build on Lunar Lake, with all of its tiles fabricated on the advanced 18A node. Details are understandably thin, and we don't yet know what P-core or E-core architectures Panther Lake will use.

Intel's Innovation 2023 event was a starting point for Intel CEO Pat Gelsinger to elaborate on a comprehensive processor roadmap beyond the much-anticipated Meteor Lake SoC, with the first Ultra SKU set to launch on December 14th; that just about counts as a launch this year, barring any unexpected foibles. With Panther Lake on track for a 2025 release and set to go to fabs in Q1 of 2024, Intel's ambitious "5 nodes in 4 years" strategy is in full swing. While Lunar Lake paves the way with advanced on-chip AI capabilities on the 20A node, Panther Lake aims to build upon this foundation using the more advanced 18A node.

Although specific architectural details remain scant, the sequential release of Meteor, Arrow, Lunar, and Panther Lake by the end of 2025 underscores Intel's aggressive push to redefine the client processor landscape.

]]> Wed, 20 Sep 2023 14:30:00 EDT,20065:news