The very first sentence of this article confuses terabytes and petabytes. I used to dismiss an entire article as poor quality on seeing a mistake like this, but these days it also feels like an indicator that the article was written by a human and might actually have something interesting to say.
Sadly not in this case though - the Kioxia drives are interesting, but the fact that Dell has put some in a box is much less so.
All it really means is that big corps that already have sales relationships with Dell will be purchasing them in the next fiscal year. Anyone else who needed this level of storage density has already built their own boxes.
There's been a lot of talk about orbital DCs lately, but with these levels of density, orbital CDNs might be a more obvious use case. It would be interesting to see if something like Starlink could use something like this to cache media content and reduce the overall data moving through the constellation. It could even be worth it to have some satellites in higher orbits (even GEO, if the ground hardware can reach it) dedicated to streaming media content. You can tolerate higher RTT for content that doesn't need to be real time.
no, absolutely not. orbital datacenters are never going to happen, it doesn't matter whether you try to frame them as compute or storage or whatever else.
the extreme density of these SSDs is actually an anti-feature in the context of spacecraft hardware.
the RAD750 CPU [0] for example uses a 150nm process node. its successor the RAD5500 [1] is down to 45nm. that's an order of magnitude larger than chips currently made for terrestrial uses.
radiation-hardening involves a lot of things, but in general the more tightly packed the transistors are, the more susceptible the chip is to damage. sending these SSDs to space would be an absurd waste of money because of how quickly they would degrade.
and then there's the power consumption & heat dissipation. one of these drives draws 25W [2] and Dell is bragging about cramming 40 of them into one server. that's a full kilowatt of power - essentially a space heater in a 2U form factor.
0: https://en.wikipedia.org/wiki/RAD750
1: https://en.wikipedia.org/wiki/RAD5500
2: https://americas.kioxia.com/content/dam/kioxia/en-us/busines...
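Rough power math, for what it's worth (the 25W figure and the 40-drive count are from the comment above; the servers-per-rack fill is just an assumption for illustration):

    # Back-of-envelope power draw for the drives alone (Python).
    drive_power_w = 25        # per-drive figure cited above
    drives_per_server = 40    # the Dell 2U chassis being discussed
    servers_per_rack = 20     # assumption: 40U of a 42U rack filled with these 2U boxes

    per_server_w = drive_power_w * drives_per_server      # 1000 W per 2U, drives only
    per_rack_kw = per_server_w * servers_per_rack / 1000   # 20 kW per rack, drives only
    print(per_server_w, per_rack_kw)                       # 1000 20.0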
AFAICT[1] the latest generation of SpaceX Starlink satellites use AMD Versal XQR SoCs, which are built on a 7nm process with components like the main processor (dual-core ARM Cortex-A72) and memory (DDR4) clocked in the gigahertz, not megahertz, range.[2] At least some of these SoC models (presumably the lower-clocked ones) are certified for geosynchronous orbits, not just low-earth orbits.
[1] https://www.pcmag.com/news/amd-chips-are-powering-newest-sta...
[2] https://docs.amd.com/r/en-US/ds955-xqr-versal-ai-edge/Genera...
The RAD750 is like 20 years old and is the absolute king of high reliability in the most extreme radiation environments. LEO is much more forgiving, and there are plenty of examples of commercial gear operating in it. You could definitely put this much storage into LEO along with some EDAC and be fine for a few years.
In the limit, packing transistors tighter should mean more radiation resistance, not less, because you can shield them with a smaller mass of water or lead or whatever.
It is much worse than that. Even taking the node names at face value[1], that is just one dimension; there are two or three[2] dimensions to consider, so the difference would be more like 100x.
Nehalem (2008) was built on a 45nm node at roughly 3 MTr/mm2; today's 3nm nodes from TSMC (N3E/P/X/C, 2023-24) are at about 220 MTr/mm2.
Of course that is just one metric, transistor density; there are many other improvements to consider over the last two decades.
[1] Processor node names haven't been tied to physical scale for 30 years: https://www.eejournal.com/article/no-more-nanometers
[2] The HBM that modern GPUs use already leverages 3D ICs.
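Quick check on those figures (the densities are the ones quoted above; the linear ratio is just what the node names naively suggest):

    # Node-name scaling vs. actual published transistor density (Python).
    old_node_nm, new_node_nm = 45, 3
    old_density, new_density = 3, 220        # MTr/mm^2: Nehalem-era vs. TSMC N3-class

    linear_ratio = old_node_nm / new_node_nm   # 15x if "45nm -> 3nm" were taken literally
    areal_ratio = new_density / old_density    # ~73x in transistors per unit area
    print(f"linear: {linear_ratio:.0f}x, areal: {areal_ratio:.0f}x")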
For the sake of the generations that come after us, we really should not dump valuable material into space. I somehow doubt the electronics in space would be recovered and recycled properly.
Nothing is recycled properly. Recycling was a story told to ease consumers' minds so they keep on consuming. The stuff you throw away ends up in a landfill, in the sea, or on a ship to someplace else where it gets burned and then buried. Sending it to space makes absolutely no difference.
If I correctly understand what you're suggesting, then that could save on uplink bandwidth. Sending one copy into space, and then sending it back down over and over again sounds nice.
But does it solve a problem that we actually have? Is uplink bandwidth a pressing limitation?
At current enterprise NVMe prices, the drives alone for this must easily push past the $500k to $1M mark. It's fascinating to see this level of density, but it’s strictly going to be hyperscaler or high-end defense/research budget territory for a long time.
My understanding is no one actually pays Dell sticker price. They list that price on the website but if you talk to whatever they are calling their sales reps you get the real price.
I know, I'm just not sure how big a discount you are going to get on this kind of system. My understanding is that the discounts normally tend to be something like 30-40%, so it would still be within the same order of magnitude ($10M+).
This is one of the cases limited by PCIe speed: the lanes are shared with the SSDs, so the network side can only do 5x 400Gbps. This is on PCIe 5.0; luckily the 7.0 spec is ready and 8.0 is already at the 0.5 draft stage.
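Rough lane budget (PCIe 5.0 runs ~32 GT/s per lane, call it ~4 GB/s usable; the per-device lane counts below are assumptions, not from the article):

    # Approximate PCIe 5.0 lane demand for this box (Python; lane counts are assumptions).
    gb_per_lane = 4.0            # GB/s per Gen5 lane (~32 GT/s, 128b/130b), before overhead
    gb_per_nic = 400 / 8         # one 400 Gbps port ~= 50 GB/s
    lanes_for_nics = 5 * 16      # five 400G NICs, each wanting a Gen5 x16 slot (~64 GB/s)
    lanes_for_drives = 40 * 4    # assuming x4 per E3.L drive
    print(gb_per_nic / gb_per_lane)            # ~12.5 lanes of raw bandwidth per 400G port
    print(lanes_for_nics + lanes_for_drives)   # 240 lanes wanted before anything else in the box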
If we could somehow increase the density further by 5x, we would be able to store 1EB in a single rack.
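How the rack math pencils out (drive capacity and drives-per-server are from the thread; the rack fill is an assumption):

    # Capacity per rack at current density, then with a further 5x bump (Python).
    tb_per_drive = 256            # the ~256TB-class E3.L drives discussed here
    drives_per_server = 40
    servers_per_rack = 20         # assumption: 40U of a 42U rack as these 2U servers
    pb_per_server = tb_per_drive * drives_per_server / 1000    # ~10.2 PB per 2U
    pb_per_rack = pb_per_server * servers_per_rack              # ~205 PB per rack
    print(pb_per_rack * 5 / 1000)                               # ~1.0 EB with a 5x density bump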
The most interesting part to me is the last sentence.
>Scality tells us it’s working on supporting a future nearline-class SSD from Samsung, viewed as an HDD killer, with similar or even larger capacity and a roadmap out to a 1 PB drive.
Finally, an HDD killer. Maybe in another 5-10 years' time. The day of everyone having an SSD NAS / AI Cloud at home will come.
10PB is probably the amount of data a medium-sized country can collect about all its citizens (basic details, work history, all taxes, all financial records, all medical records, all police records, all biometric records and more) over their lifetimes.
I think developments like this might get many public-sector-focused firms sweating.
Those records are going to be pretty negligible in terms of storage. It is only a couple of new records per day. Even if you add things like detailed mobile and tracking telemetry, it is a few MB per person per day.
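For scale, a rough sketch (the population, record size, and lifespan here are illustrative assumptions, not from either comment):

    # How far 10 PB goes for plain per-citizen records (Python; inputs illustrative).
    population = 50_000_000          # a medium-sized country
    kb_per_person_per_day = 5        # a couple of small records per day
    years = 80
    total_pb = population * kb_per_person_per_day * 365 * years / 1e12   # KB -> PB
    print(round(total_pb, 1))        # ~7.3 PB, so the 10 PB ballpark holds for basic records
    # At "a few MB per person per day" (telemetry-level), multiply by roughly 1000x.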
Some wealthy techbro from /r/datahoarders is going to purchase this to store all episodes of Doctor Who in uncompressed 10-bit 4:2:2 FFV1 Matroska remuxes with redundant PAR2 recovery archives.
E3.L is just fancy-shaped PCIe, is it not? What's stopping the standard off-the-shelf NVMe-to-USB converter chips from being used?
Given this disk is going to cost something like $40k, what's another $500 for having a Chinese hw eng throw one of those chips together with an E3 connector on a PCB for you, and 3D printing a neat housing?
I've been waiting with bated breath for a SATA 3.5" SSD with high capacity.
I might be waiting forever, because clearly there's nothing coming. Though I'm not sure if it's because it's technically difficult (high power consumption to keep the flash lit?) or something else.
I'm aware that it leaves performance on the table for the chips, and the unit economics probably mean that, for the yield, OEMs would rather make high-performance drives which sell for more.
But a 4-bay NAS with 3.5" SSDs would be silent and theoretically sip power, and there's so much space for chips that you could space them nicely and get 10+TiB in a drive...
I don't need to touch every cell, I just want something silent and stateless and less power intensive for my time-capsule backups and linux ISOs.
There's just no benefit to using SATA for it. Even a PCIe Gen3 x1 link provides significantly higher performance - and those aren't exactly rare these days. Why invest a huge amount of time and money into building a controller chip which is significantly worse than its competitors in every way? Even if you're very interested in backwards compatibility it would make more sense to go for PCIe-based U.2 instead. And 3.5" is just a waste: look at 2.5" SATA SSD teardowns, they are mostly empty space.
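The bandwidth gap in rough numbers (line rates after encoding overhead only, ignoring protocol overhead):

    # SATA III vs. a single PCIe Gen3 lane, usable line rate in MB/s (Python).
    sata3 = 6e9 * (8 / 10) / 8 / 1e6         # 6 Gb/s with 8b/10b encoding -> ~600 MB/s
    pcie3_x1 = 8e9 * (128 / 130) / 8 / 1e6   # 8 GT/s with 128b/130b encoding -> ~985 MB/s
    print(round(sata3), round(pcie3_x1))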
If you really want a classic hotswap bay form factor, something like the Aoostar WRT Max allows installing a bunch of M.2 SSDs in a 3.5" bay. The QNAP TBS-h574 gives you five swap bays for M.2 SSDs - albeit in a cute custom form factor. Just want a whole bunch of storage? The Asustor Flashstor Gen 2 has up to 12(!) screw-down M.2 slots. At 8TB per slot that's 96TB of storage - significantly more than the 40TB 4-bay NAS you are proposing.
Or wait for someone to build a NAS which accepts EDSFF SSDs. But don't count on it, because there's no market for prosumer-level EDSFF drives (nobody has a bay for it, and M.2 is a far more attractive option for most people), so there's no market for EDSFF-compatible NASes either. Unless you plan on shipping them with M.2-to-EDSFF adapters - but at that point why not save a whole lot of space by going directly for M.2 instead?
It's a bit like asking for a Mini-ITX motherboard with an SP5 socket so you can plug in the latest Epyc CPU, and wanting it to have DDR3 memory slots: even if it's technically possible at all, what market is it supposed to serve?
There are already a ton of different adapters between the edsff connector used for e3 / e2 / e1 drives and everything else pcie (pcie, m.2, u.2). For example this pcie card. (Good luck tweaking your equalizer settings jumpers by hand though, whew!!) https://www.microsatacables.com/pcie-x8-gen4-with-redriver-t...
Drop that in one of the many usb4 to pcie docks and you should be good to go. Pretty fugly, but it ought to just work! I think there are some cheaper models under $90 still available, but here's a listing. https://www.dfrobot.com/product-2835.html
I believe a more focused, dedicated usb<->NVMe chip might also work if attached to an edsff connector. I didn't look hard and haven't seen any such products yet, but it's mostly mechanical/packaging plus some signal integrity checks; in the end it generally wouldn't be much different from an NVMe adapter. Seems very doable.
Build it! Someone could sell (to quote The Daily Show) literally dozens of said adapter! (Eventually probably many, many more, but there's not a huge second-hand market for edsff atm.)
I've been wanting to update my (100TB) NAS for over five years, but I haven't yet found anything that I feel is worth upgrading to. One of these with a QSFP56 interface would be nice, but I would need to sell one of my houses to pay for it, so I'll be waiting a little longer...
I work in the refurb department of an e-waste recycling company. In my n=1 data point, some server drives are shredded/destroyed, some aren't (maybe half) before they reach my team. Of the ones that aren't, most are too small to sell, or have bad reads or reallocated sectors. Maybe 10% are fit to resell, not zero.
NVMe SSDs are consumable items, more so than HDDs are.
These drives will arrive in the secondary market to be snapped up by businesses lower in the food chain. By the time you can find them, they will have been ridden hard and put away wet to the point that you probably won't want them.
I work in the refurb department of an e-waste recycling company. Some SSD brands are more durable than others. In my experience, a greater proportion of Intel and Micron SSDs have failed (or are failing) than any other brand. It's as if sysadmins are like "Intel is a good brand, let's use these SSDs to cache our HDD storage array", then throw them out when they turn read-only.
All the increases in density are impressive, but they come with downsides for repairability and recycling. I hope we can still repair this when parts of it break, or at least recycle it properly. No matter how high-tech it is, eventually this will break.
These drives all use standard enterprise storage interconnects, and the server chassis is like other Dell server chassis. Not using ATX or EATX, but it's status quo for Dell, and many old Dell servers wind down their old age in homelabs.
Hopefully one of these 10 PB monsters will be under $2,000 someday, at which point I will pop it in my homelab :)
Some error rate is acceptable for uses which aren't "mission-critical".
doesn't mean i'm correct. [2]
Or, even better, not yeeting it into an environment where it's cooked/cooled every 90 minutes.
Or, even better, where it's not absolutely pelted by cosmic rays, enough to obliterate a good GB of data a day.
Or space data centre.
Now, how heavy a discount you can get, I don't know.
Satan’s NAS!
So, yes.
There’s probably bulk pricing, but if you bought 40 drives separately that's $2,000,000 in storage alone.
The 64TB ones are $25k, up from $6k a year ago. I have to imagine the 128TB or 256TB ones are at least $500/TB.
So a petabyte will be $600-800k alone, plus a server with enough high-speed PCIe lanes to serve the 40+ drives, definitely $1M+.
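Back-of-envelope with those figures (the $/TB number is the guess above, not a quoted price):

    # What the guessed floor price implies (Python; $/TB figure is the guess above).
    usd_per_tb = 500
    tb_per_pb = 1000
    print(usd_per_tb * tb_per_pb)          # $500k per PB at that floor
    print(usd_per_tb * tb_per_pb * 10)     # ~$5M in flash for the ~10 PB box, before discounts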
Could you ever buy it?
Concretely: a 64TB was like 6000 bucks last year. You could get them easily. It's now 25,000 for the same SSD.
I feel like we’re in that season.
The interesting thing here is ~256TB in a single drive, but it's in E3.L form factor.
I have about 160TB on hard drives that I'm waiting to offload onto a single SSD.
But that needs to come with a connector that has adapters to USB-C, so I can attach it to my Macbook Neo.
Hopefully they get it a bit more dense soon and into the 2.5" NVMe form.
Alas.