Saturday, September 30, 2017

Hadoop is obsolete, cloud and data lake are the next

An interesting article. Refer to the link here:

Hadoop is not dying, but it is obsolete and no longer pioneering. It's just a marker of the ten-year tech cycle. Key takeaways:
1. The workloads that need optimizing will run mostly on emerging cloud architectures, such as getting Spark and HDFS working on Kubernetes.
2. Hadoop’s main architectural concept – that data should be centralized and that application workloads should be moved to the data — is still strong.
3. Cloud + data lake is the next step, and schema on read is the answer to heterogeneous and dynamic data.
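To illustrate the schema-on-read idea from point 3: the data lake keeps raw, heterogeneous records as-is, and a schema is applied only when the data is read. Below is a minimal Python sketch (the record contents, field names, and `read_with_schema` helper are all hypothetical examples, not from the article):

```python
import json

# Raw, heterogeneous records land in the lake untouched (no schema on write).
raw_records = [
    '{"user": "alice", "clicks": 3}',
    '{"user": "bob", "clicks": "7", "country": "DE"}',  # clicks as string, extra field
    '{"user": "carol"}',                                # missing field
]

def read_with_schema(lines, schema):
    """Apply a schema at read time: cast known fields, default missing ones."""
    rows = []
    for line in lines:
        rec = json.loads(line)
        row = {}
        for field, (cast, default) in schema.items():
            value = rec.get(field, default)
            row[field] = cast(value) if value is not None else None
        rows.append(row)
    return rows

# The same raw data can serve different consumers, each with its own read schema.
clicks_schema = {"user": (str, None), "clicks": (int, 0)}
rows = read_with_schema(raw_records, clicks_schema)
print(rows)
# → [{'user': 'alice', 'clicks': 3}, {'user': 'bob', 'clicks': 7}, {'user': 'carol', 'clicks': 0}]
```

The point is that conflicting or evolving record shapes never block ingestion; each reader decides how to interpret the raw data.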

