Native Z170 Hyper M.2 vs PCIe3 M.2


#1

We’ve just posted the following review: Native Z170 Hyper M.2 vs PCIe3 M.2

PCIe3 straight from the CPU is a known quantity for connecting SSDs: it provides a high-bandwidth, low-latency connection.

NVMe SSDs are now making their way into the consumer market segment in the form of M.2 NVMe SSDs. How will these SSDs perform when they are connected to a ‘native’ M.2 socket managed by a chipset solution such as the Intel Z170 hyper M.2 socket?

Let’s find out how they compare in this article.

Read the full article here: [http://www.myce.com/review/native-z170-hyper-m-2-vs-pcie3-m-2-77791/](http://www.myce.com/review/native-z170-hyper-m-2-vs-pcie3-m-2-77791/)

Please note that the reactions from the complete site will be synced below.

#2

Thank you for doing this comparison. Highly useful.


#3

Great test! I look forward to revisiting this when 3D XPoint SSDs are released. Since their latency is much lower, I would assume the interface becomes much more important.


#4

The study was detailed, thanks for the effort.
I was trying to figure out which connection puts the greater load on the CPU:
M.2 via the chipset, or PCIe3 direct from the CPU. One of them should have the greater CPU load.
In that case, the one with the lower CPU load would be the better way to go.
If possible, please provide benchmarks on that issue.
Thanks…


#5

Welcome to the forum.

The CPU utilisation numbers appear in the IOMeter test results (screenshots) on page 2 of the article. As you can see there, any difference in CPU load between the two connections is very small.
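If you want to get a rough number yourself outside of IOMeter, here is a minimal sketch (my own illustration, not what the article used) that samples overall CPU utilisation with Python’s psutil while a sequential write runs. The file path and sizes are placeholder values.

```python
import os
import threading
import psutil  # pip install psutil

TARGET = "testfile.bin"   # placeholder path; put this on the SSD under test
BLOCK = 1024 * 1024       # 1 MiB writes
TOTAL = 2 * 1024**3       # write 2 GiB in total

def write_workload():
    """Sequential write workload, loosely standing in for an IOMeter run."""
    buf = os.urandom(BLOCK)
    with open(TARGET, "wb", buffering=0) as f:
        for _ in range(TOTAL // BLOCK):
            f.write(buf)
        os.fsync(f.fileno())

worker = threading.Thread(target=write_workload)
worker.start()

# Sample overall CPU utilisation once per second while the workload runs.
samples = []
while worker.is_alive():
    samples.append(psutil.cpu_percent(interval=1))

worker.join()
os.remove(TARGET)
avg = sum(samples) / max(len(samples), 1)
print(f"Average CPU load during the write: {avg:.1f}%")
```

Running it with the test file placed on each drive in turn gives a crude like-for-like comparison, though a proper tool such as IOMeter controls queue depth and block size far better.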


#6

Thanks Dee

Very interesting review…I have been wondering how the two connectivity methods would stack up against each other…

And, now we have the answer…


#7

Great article!
I’m trying to understand the slot requirements for M.2 RAID.
The ASUS website says that the hyper M.2/PCIe3 adapter card can be installed in any of the PCIe x16 slots, and the ASUS photo in the article shows the card installed in PCIe x16_2.

In the “limitations” section, you indicate that M.2 RAID requires the hyper M.2/PCIe3 adapter card to be installed in PCIe x16_3. So, is the placement flexible without RAID, and PCIe x16_3 only required when setting up M.2 RAID? If so, I suppose this is because the on-board M.2 socket uses the PCIe x16_2 lanes, and the second M.2 needs a separate set of lanes?


#8

Welcome to the forum.
The 3rd X16 PCIe3 socket (the one furthest from the CPU) is connected to the PCH (platform controller hub). The other two X16 PCIe3 sockets are connected directly to the CPU.

The 3rd X16 PCIe socket is electrically X4 and shares bandwidth with the native M.2 socket, and everything on the PCH connects to the CPU via DMI3, which runs at 8 GT/s (gigatransfers per second) per lane. In practice the maximum data rate of DMI is around 4GB/s, so it is quite easily saturated by two high-performance NVMe SSDs.
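To put a rough number on that, treating DMI3 as the equivalent of a PCIe 3.0 x4 link (8 GT/s per lane with 128b/130b encoding, which is my assumption here), a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope DMI3 bandwidth, assuming it behaves like a PCIe 3.0 x4 link.
lanes = 4
gt_per_s = 8e9          # 8 GT/s per lane
encoding = 128 / 130    # PCIe 3.0 line-encoding overhead (128b/130b)
bytes_per_s = gt_per_s * lanes * encoding / 8  # 8 bits per byte

print(f"Raw DMI3 ceiling: {bytes_per_s / 1e9:.2f} GB/s")  # ~3.94 GB/s
```

Two NVMe SSDs that can each read at well over 2GB/s would together exceed that ~3.9GB/s ceiling, and real-world protocol overhead lowers the usable figure further.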

I haven’t tried a RAID array with two NVMe SSDs, as I personally think that RAIDing two NVMe SSDs is a pointless exercise on a consumer platform, as you are extremely unlikely to generate anywhere near the amount of traffic to the SSDs required to get the best out of the RAID array.

Having said that, I don’t see any reason why you couldn’t connect one of the SSDs to the native M.2 slot, and the second SSD in the NVMe RAID array to either of the X16 PCIe3 sockets directly connected to the CPU.
You can also connect a single NVMe M.2 SSD, via the hyper card, to either of the two main X16 PCIe3 sockets which connect directly to the CPU.

Keep in mind that doing this would rule out an SLI graphics solution.
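As an aside, if you are on Linux and want to confirm where a drive actually landed, here is a small sketch (my own illustration, nothing from the article, and the device name is hypothetical) that resolves an NVMe drive’s sysfs entry to its PCIe ancestry:

```python
import os

# Hypothetical device name; adjust to match your drive.
dev = "nvme0n1"

# /sys/block entries are symlinks into the PCI device tree, so the real path
# shows the full chain of PCIe devices the drive hangs off.
path = os.path.realpath(f"/sys/block/{dev}")
print(path)

# Example shape of the output (addresses will differ per system):
#   /sys/devices/pci0000:00/0000:00:1d.0/0000:03:00.0/nvme/nvme0/nvme0n1
# The 0000:00:xx.0 entry is the root port; cross-reference it with `lspci -t`
# to see whether it belongs to the CPU's PCIe lanes or to the Z170 PCH.
```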