In the beginning, the workloads, tools, and requirements for big data were simple because big data wasn’t really all that big. When we hit 5TB of data, however, things got complicated. Large data sets weren’t well suited to traditional storage like NAS, and large sequential reads of terabytes of data didn’t perform well on traditional shared storage. As big data evolved, analytics graduated from custom code built on MapReduce, Hive, and Pig to tools like Spark, Python, and TensorFlow, which made analysis easier. With these newer tools came additional requirements that traditional big data storage couldn’t handle, including millions of…