Sunday, January 26, 2020

Breaking - Clustered Computing


What is Clustered Computing in Big data?


To handle big data, individual computers are often inadequate for the storage and compute requirements. A node is a single computer; a cluster is a group of nodes working together.

Clustering combines the resources of many smaller machines and provides:

1. Resource pooling: Storage, CPU & Memory (see the sketch after this list)
2. High availability: Varying levels of fault tolerance, so the cluster keeps working even when individual nodes fail
3. Easy scalability: Grow capacity by adding machines horizontally
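
To make "resource pooling" concrete, here is a toy Python sketch (the node names and capacities are invented for illustration, not taken from any real cluster): the cluster's usable capacity is, roughly, the sum of what each node contributes.

# Toy illustration of resource pooling: the cluster's capacity is
# (roughly) the sum of what its individual nodes contribute.
# Node names and numbers below are invented for illustration.

nodes = [
    {"name": "node-1", "cpu_cores": 8,  "memory_gb": 32, "storage_tb": 4},
    {"name": "node-2", "cpu_cores": 16, "memory_gb": 64, "storage_tb": 8},
    {"name": "node-3", "cpu_cores": 8,  "memory_gb": 32, "storage_tb": 4},
]

cluster_capacity = {
    "cpu_cores":  sum(n["cpu_cores"] for n in nodes),
    "memory_gb":  sum(n["memory_gb"] for n in nodes),
    "storage_tb": sum(n["storage_tb"] for n in nodes),
}

print(cluster_capacity)
# {'cpu_cores': 32, 'memory_gb': 128, 'storage_tb': 16}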

Examples of manual and automatic clustering solutions include Veritas, Linux native clusters, and IBM AIX-based clusters.

One also needs software to manage cluster membership, share resources, and schedule work on individual nodes; Hadoop's YARN and Apache Mesos are two widely used options.
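
As a small, hedged sketch of what running work on such a cluster can look like, here is a PySpark job submitted to a YARN-managed cluster. It assumes a working Hadoop/YARN installation with the pyspark package available; the application name is just a placeholder.

# Minimal PySpark sketch, assuming a working Hadoop/YARN setup
# and the pyspark package installed. The app name is a placeholder.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("yarn")            # ask YARN to schedule the work across the cluster
    .appName("cluster-demo")   # placeholder application name
    .getOrCreate()
)

# The data is split into partitions; YARN decides which nodes process them.
total = spark.sparkContext.parallelize(range(1_000_000)).sum()
print(total)  # 499999500000

spark.stop()

Notice that the code never names an individual machine; the cluster manager decides where the work actually runs.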

The cluster acts as a foundation layer for other processing software.  

