Having worked specifically in storage for the last 15 years, I've seen that the significance of big data has typically come down to how much storage is required to house the data; more often than not, storage tends to be the biggest bottleneck in accessing that data.
We know that analysis of big data can be separated into three basic resources:
CPU to process
Memory to speed up access
Storage to house the dataset
With those considerations in mind, the architecture for working with big data is changing significantly.
I read an excellent write-up forwarded by a partner, Streaming Big Data, which walks through much of the structure and architecture of big data, especially in streaming scenarios, and how to process that data to get the most value for the business. In that article, the concept of edge computing comes up often. Edge computing (like cloud and many other IT terms) has many definitions, but the end goal is to architect the hardware and software to be 1) where they are needed most, and 2) when they are needed most. Edge computing can be thought of as producing results where the data sits, be that at the edge of the cloud or the edge of the data center. The point, in my opinion, is that the architecture needs to be isolated from other production functions, and it must be able to house and process that data quickly enough for the business to capitalize on the results.
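To make the edge idea above concrete, here is a minimal sketch of one common pattern: an edge node keeps only a bounded rolling window of a raw stream in local memory and forwards compact summaries upstream, instead of shipping every raw reading to a central system. All class and method names here are illustrative assumptions, not from any real product API.

```python
from collections import deque

class EdgeAggregator:
    """Hypothetical edge node: buffer a small window locally,
    send only summary records upstream."""

    def __init__(self, window_size=5):
        self.window = deque(maxlen=window_size)  # bounded local buffer
        self.summaries = []                      # what would leave the edge

    def ingest(self, reading):
        self.window.append(reading)
        # Once the window is full, emit a summary for each new reading.
        if len(self.window) == self.window.maxlen:
            self.summaries.append(self.summarize())

    def summarize(self):
        vals = list(self.window)
        return {
            "count": len(vals),
            "min": min(vals),
            "max": max(vals),
            "mean": sum(vals) / len(vals),
        }

# Feed six raw readings through a 4-reading window.
agg = EdgeAggregator(window_size=4)
for r in [10, 12, 11, 13, 9, 14]:
    agg.ingest(r)

# Only three compact summaries leave the edge, not six raw readings.
print(len(agg.summaries))  # → 3
```

The design choice this illustrates is the one the article's edge discussion implies: the bandwidth and storage cost of moving data is paid once, locally, and only business-ready results travel onward.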
With that in mind, X-IO believes that the development work it did with Intel over more than three years has resulted in an ideal implementation of NVMe (fast, dense storage), dense CPU, large memory capacity, and an extremely fast PCIe fabric (FabricXpress), all in an extremely small footprint.
The product is being called Axellio FabricXpress™; it is in beta now and due for release later in 2017.
Keep an eye on this beast as it can change the world of big data!