

🚀 Unlock insane Gen4 NVMe RAID speeds and leave bottlenecks in the dust!
The ASUS Hyper M.2 X16 Gen 4 Expansion Card supports up to four NVMe M.2 SSDs simultaneously, delivering a combined throughput of up to 256Gbps via a PCIe 4.0 x16 interface. Designed for high-end AMD Ryzen sTRX40, AM4, and Intel VROC platforms, it features server-grade PCB materials and a robust cooling system with a heatsink and blower fan to prevent thermal throttling. Ideal for professionals seeking ultra-fast RAID storage solutions with future-ready power support for 14W SSDs.
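As a sanity check on the headline number, the bandwidth follows from the PCIe 4.0 signalling rate; the short Python sketch below is purely illustrative and shows the arithmetic, including the 128b/130b encoding overhead that the 256 Gbps marketing figure ignores.

```python
# Rough PCIe 4.0 bandwidth arithmetic for a passive x16 quad-M.2 card (illustrative only).
GT_PER_LANE = 16          # PCIe 4.0 signalling rate: 16 GT/s per lane
ENCODING = 128 / 130      # 128b/130b line encoding: ~1.5% overhead

lanes_total = 16          # the card occupies a full x16 slot
lanes_per_drive = 4       # each M.2 socket gets a dedicated x4 link under 4x4x4x4 bifurcation

raw_gbps = GT_PER_LANE * lanes_total                                # 256 Gbit/s raw (the headline figure)
usable_gb_per_s = raw_gbps * ENCODING / 8                           # ~31.5 GB/s usable across the card
per_drive_gb_per_s = GT_PER_LANE * lanes_per_drive * ENCODING / 8   # ~7.9 GB/s per SSD

print(f"raw x16 link:     {raw_gbps:.0f} Gbit/s")
print(f"usable x16 link:  {usable_gb_per_s:.1f} GB/s")
print(f"usable per drive: {per_drive_gb_per_s:.2f} GB/s")
```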


| Specification | Detail |
| --- | --- |
| ASIN | B084HMHGSP |
| Best Sellers Rank | #72 in Computer Motherboards; #635 in Data Storage |
| Brand | ASUS |
| Color | Gen 4 (PCIe 4.0) |
| Customer Reviews | 4.2 out of 5 stars (745 ratings) |
| Date First Available | February 4, 2020 |
| Graphics Coprocessor | AMD 3rd Gen Ryzen (sTRX40) |
| Item Dimensions LxWxH | 10.63 x 4.8 x 0.6 inches |
| Item Weight | 1.95 pounds |
| Item model number | HYPER M.2 X16 GEN 4 Card |
| Manufacturer | ASUS |
| Operating System | Windows, macOS, Linux |
| Product Dimensions | 10.63 x 4.8 x 0.6 inches |
| Series | HYPER M.2 X16 GEN 4 CARD |
C**N
Absolutely Scorching Speeds If Setup Properly
Absolutely rock-solid integration with the TRX40 chipset on an ASUS motherboard that has the PCIe bifurcation option (PCIe RAID mode), which you can activate on a lot of their boards via the BIOS. Disassembly and re-assembly were a breeze, the correct number of screws was included, each slot has a full-size thermal pad already in place, and the fan seems to work wonders even with four SN850X drives running in RAID 0.

It's setting up the AMD RAID drivers properly that is the wild card, and it can be a complete buzzkill if you do one step wrong. For AMD processors, you have to navigate to the support page specific to your CHIPSET, not the processor, and download whatever RAID software package they have there for your AM4 or TRX40 board. You have to enable both SATA and NVMe RAID modes in the BIOS before you can even install the RAID package you just grabbed, and once that is installed properly you will see some unrecognized storage controllers in Device Manager. From there you right-click each unrecognized item, point it toward the folder of drivers that came with that package (or that you grabbed separately), and manually update each of them one by one. Then go back into the BIOS, delete the legacy array housing your single NVMe drives, and create whatever RAID array you want. If all is well, it should show up as one single drive in Windows that you can then format as a Simple Volume and manage through AMD RAIDXpert. At each step you might need a few restarts or a full shutdown/power-down for the changes to fully take. Once you clear this hurdle, though, it never reverts and is insanely easy to manage as long as you don't flip back to non-RAID mode in the BIOS. I highly recommend sticking with the 256K allocation size recommended in the BIOS when creating the array and the Windows default value when formatting it; any other tweaks yielded lesser or spottier performance.

As other reviewers have stated, you must make sure that you have a free x8 or x16 slot for two or four SSDs respectively, you must be able to split the x16 into four dedicated x4 links (one per drive) in the BIOS, and you must make sure that whatever slot you're using isn't sharing lanes with another M.2 slot or other devices on your mobo. Threadripper platforms are worry-free in this department because of the sheer number of PCIe lanes they have; as long as you're not running two 5090s in parallel while maxing out every RAM and NVMe slot with some SATA drives thrown in, this should not be a concern.

Even I didn't expect quite the eye-watering results I got with four of the fastest 1TB Gen4 drives on the market running in a striped RAID 0 setup. The screenshots speak for themselves. That's basically brushing right up against the theoretical limits of Gen4 NVMe 1.4 drives and what it should look like when those drives are striped and running free of bottlenecks. For $50 (Used - Like New... came in perfect condition) plus less than $100 each for the drives, this is an absolute no-brainer in terms of bang-for-buck value and will give you performance exceeding that of current Gen5 SSDs by another 8 GB/sec. For reference, this is a hair shy of the 2400 MT/s base, non-XMP clock speed of my 3600 MHz RAM. This is madness. If you do video editing, host off your main rig, or are looking to trim any other possible system bottlenecks to max out a current-gen graphics card, what are you doing still reading??
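For anyone trying to sanity-check numbers like the ones this reviewer reports, here is a minimal back-of-the-envelope sketch of what a four-drive Gen4 RAID 0 stripe can reach; the ~7.3 GB/s per-drive sequential figure is an assumed spec-sheet-class value, not a measurement taken from this review.

```python
# Back-of-the-envelope RAID 0 sequential estimate (assumed numbers, not measured data).
per_drive_seq_gb_s = 7.3        # assumed sequential read of a fast Gen4 drive (spec-sheet class figure)
drives = 4                      # four M.2 sockets populated, striped RAID 0
x16_ceiling_gb_s = 31.5         # usable PCIe 4.0 x16 bandwidth after 128b/130b encoding

ideal_stripe = per_drive_seq_gb_s * drives               # ~29.2 GB/s if striping scaled perfectly
expected_best_case = min(ideal_stripe, x16_ceiling_gb_s)  # whichever limit is hit first

print(f"ideal 4-drive stripe:  {ideal_stripe:.1f} GB/s")
print(f"x16 interface ceiling: {x16_ceiling_gb_s:.1f} GB/s")
print(f"expected best case:    {expected_best_case:.1f} GB/s")
```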
D**.
Solves AER issues on AMD boards with Gen4 NVMe drives
Preface: This card is NOT intended to just be dropped into a commodity PC motherboard. This is a card you install in a server or high-end workstation that has more x16 slots than the number of graphics cards you have. Reviewers complaining that their BIOS doesn't recognize all 4 slots probably plugged it into an x4 or x8 slot on their motherboard. Make sure to check your motherboard specifications before you buy!

Background on why I got this: I have an ASRock Rack ROMED8-2T with an AMD EPYC 7443P processor and 128GB of ECC PC4-25600. I have a pair of Samsung 980 PRO 500GB NVMe drives installed, which are used for the root filesystem (RAID1), ZFS L2ARC and the ZFS secondary log (SLOG/ZIL) device. Since completing the initial build, I noticed frequent correctable PCIe bus errors in dmesg (see 1st and 2nd screenshots). Communication with the NVMe drive would shut down for a second or two after each AER event, which got *really* bad when I enabled the ZIL on them. Approximately once every 3 months I'd also get a hard lockup necessitating a hard reset of the server over IPMI, which I strongly suspect was related to this. The hard lockups became less frequent with the 6.1 LTS kernel and P3.50 firmware, but I was still able to upset it under heavy write workloads. The PCI device implicated in these messages was "AMD Starship/Matisse GPP Bridge" at 0000:40:01.1, the only child device of which was the NVMe drive at 41:00.0. The other bridge device and NVMe drive did not experience the issue.

Research online indicated that an NVMe riser card such as this one would resolve the issue. There are numerous alphabet-soup-brand cards for a half or a third of the price here on Amazon, but to me $80 is cheap insurance to protect my NVMe drives and the data on them by using a card with a hefty heatsink, a high-quality fan and a warranty.

I had two hiccups during installation:
- This card is quite long and will interfere with the fan connectors on the ROMED8-2T in PCIE slots 1-4. It will fit in slots 5-7, although 5 and 6 bring the card quite close to the SAS connectors.
- The NVMe drives were not detected until I manually changed the link speed on the slot to "x4x4x4x4" in the BIOS setup.

After setting everything correctly I confirmed that both drives are running at the full Gen4 speed of 16 GT/s (see 3rd screenshot).
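If you want to script the same link check on Linux rather than eyeball lspci output, a minimal sketch follows; it assumes the standard sysfs attributes current_link_speed and current_link_width that the kernel exposes for PCIe devices, and the device paths will naturally differ from system to system.

```python
#!/usr/bin/env python3
"""Print the negotiated PCIe link speed and width of each NVMe controller (Linux-only sketch)."""
from pathlib import Path

def read_attr(dev: Path, name: str) -> str:
    """Read a sysfs attribute, returning 'unknown' if it is missing or unreadable."""
    try:
        return (dev / name).read_text().strip()
    except OSError:
        return "unknown"

# Each /sys/class/nvme/nvmeN entry links back to the controller's PCI device node.
for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    pci_dev = (ctrl / "device").resolve()                     # e.g. .../0000:41:00.0
    speed = read_attr(pci_dev, "current_link_speed")          # "16.0 GT/s PCIe" on a Gen4 link
    width = read_attr(pci_dev, "current_link_width")          # "4" for an x4 link
    print(f"{ctrl.name} ({pci_dev.name}): {speed}, x{width}")
```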
C**T
Not all mobos will support this. Make sure you get one that supports Gen 4. Also, you will need to reconfigure your PCIe lanes in the BIOS to x4/x4/x4/x4; if you leave them at the default x16, this won't work.
S**U
On-time delivery of the product.
G**E
I almost didn't buy this product due to negative ratings, and that would have been a mistake. After use and re-reading, it is clear all the negative reviews on this product simply don't understand the technical limitations of the environment, to wit: each NVMe requires x4 PCIe lanes, and many motherboards have a single x16 slot (which furthermore requires firmware support for 4x4x4x4 bifurcation). Simply check the support and know that to use all four slots on this card you likely will need to move your graphics card to an x4/x8 slot and/or update the BIOS and/or make configuration changes. These options might not be called "bifurcation" and may be "PCIe RAID"... these firmware and hardware inconsistencies are not ASUS' fault. Just because your PCIe slot looks like a full-sized x16 slot does not mean this product will work. No, this is not an active RAID controller.

That said, for those with the technical aptitude to understand the limitations and requirements, this product is fantastic. There are power-filtering capacitors on the board, and other passive components populated to protect our expensive PCIe 4.0-era NVMe drives. There is a huge solid block of machined aluminium and the correct riser rubbers/thermal pads/standoffs/screws to mount four single- or double-sided NVMe drives. The fan isn't super useful in a system with above-average cooling, and you can simply turn it off. My NVMe drives went from circa 90 °C (on the motherboard with no heatsinks, in the random-access R/W work I use them for) to barely above ambient.

The maximum theoretical throughput of x4 PCIe 4.0 is approaching 8 GB/sec, and I see random-access R/W speeds approaching 16 GB/sec across 4 drives. I don't know if sequential R/W reaches full x16 speed, but with the 4x 980 Pros I use I see ever-so-slightly more than double the speed of 2x 980 Pros, so it definitely scales in a linear fashion IF YOU USE x16 IN 4x4x4x4 BIFURCATION!
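A quick way to test the reviewer's point about linear scaling is to compare measured throughput against the per-drive x4 link limit; the sketch below uses placeholder benchmark numbers that you would replace with your own results.

```python
# Scaling sanity check: does measured throughput look like N independent x4 links?
X4_LINK_LIMIT_GB_S = 7.9        # approx. usable PCIe 4.0 x4 bandwidth per drive

# drives in the array -> measured aggregate throughput in GB/s (placeholder values, not real data)
measured = {2: 8.1, 4: 16.3}

for drives, total in measured.items():
    per_drive = total / drives
    utilisation = per_drive / X4_LINK_LIMIT_GB_S
    print(f"{drives} drives: {total:.1f} GB/s total, "
          f"{per_drive:.1f} GB/s per drive ({utilisation:.0%} of an x4 link)")
```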
T**G
The expansion card is well made. I added 4 more NVMe SSDs. Now trying to work out how to configure it as RAID 10.
T**F
So far, after a few months, I can say this card works great with my Gigabyte X570 Aorus Pro WiFi motherboard and two PCIe Gen4 NVMe SSDs. No problems at all so far. That said, I have a couple of complaints:

1. I feel it's a little overpriced, as it's essentially a bare PCB with a fan and heatsink and no active PCIe bridge/switch, so it relies on your motherboard and CPU to properly support PCIe bifurcation (splitting a single PCIe x16 slot into multiple separate smaller links). It's kind of a crapshoot what any given motherboard will allow. On mine, for instance, IIRC it groups the two main PCIe slots such that the bifurcation is limited to 16/0, 8/8, 8/4/4 or 4/4/4/4. That means if you use the second slot in any way, you cut your primary slot down to at most x8 (and potentially x4), so your GPU will only get an x8 link at best. Not the card's fault, and more of a CPU limitation, but something you need to keep in mind (the sketch below walks through the lane math). What it does mean is that if I want to use all 4 slots on this card, I'd have to set my BIOS/UEFI to the full 4/4/4/4 split. If this card had an active bridge/switch, I could assign 8 PCIe lanes to it and it could dynamically allocate the bandwidth between all 4 NVMe slots. As it is, I'm stuck with "just" 2 usable slots, since I want to keep 8 PCIe lanes for my GPU.

2. I'm not super happy about receiving an obvious return unit sold as brand new. But there was nothing physically wrong with it, and I didn't want to go through a return, so I just kept it.
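To make the lane trade-off in complaint 1 concrete, here is a small sketch of the arithmetic. Because the card has no PCIe switch, each M.2 socket is wired to a fixed group of four lanes, so the number of usable sockets is simply the lanes routed to the slot divided by four. The slot widths below are examples, not an exhaustive list of what any particular board offers.

```python
# Passive-card lane arithmetic: with no switch, each M.2 socket needs its own x4 group.
def usable_m2_sockets(lanes_to_card_slot: int) -> int:
    """Sockets that can train a link when the slot is bifurcated into x4 groups."""
    return lanes_to_card_slot // 4

# Example slot widths (what your BIOS actually allows varies by board and CPU).
for lanes in (16, 8, 4):
    print(f"x{lanes:<2d} routed to the card -> {usable_m2_sockets(lanes)} M.2 socket(s) usable")
```

On a 16-lane desktop CPU, giving the card the x8 needed for two sockets is exactly what drops the GPU slot to x8, which is the trade-off this reviewer describes.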