Thursday, November 04, 2010

Shared Nothing Architecture

A shared nothing system greatly reduces resource contention for memory, locks, and processors. As DeWitt et al. point out, among the three widely used approaches, shared memory is the least scalable, shared disk is intermediate, and shared nothing is the most scalable. A shared nothing system can scale almost linearly, and in principle without limit, simply by adding more inexpensive nodes. Shared nothing is now prevalent in the data warehousing space because of this scaling potential.

Teradata is one of the earliest implementations. Each AMP virtual processor (vproc) manages its own dedicated portion of the system's disk space (a vdisk, which can span multiple disk array ranks). Rows are distributed to the AMPs according to the hash of the primary index (PI). For NoPI tables, supported since TD 13.0, a row is assigned to its home AMP either by hashing on the Query ID or by a different assignment algorithm. This unconditional parallelism and linear expandability underpin Teradata's leading position in enterprise data warehousing.
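To make the idea concrete, here is a minimal sketch of hash distribution. It is not Teradata's actual hash function; NUM_AMPS and home_amp are illustrative names, and real systems map the row hash through a hash map or bucket table rather than a plain modulo.

import hashlib

NUM_AMPS = 8  # hypothetical number of AMP vprocs

def home_amp(primary_index_value: str) -> int:
    """Map a primary-index value to its home AMP by hashing it."""
    digest = hashlib.md5(primary_index_value.encode()).digest()
    row_hash = int.from_bytes(digest[:4], "big")
    return row_hash % NUM_AMPS  # stand-in for the real hash-bucket lookup

for pi in ["cust_1001", "cust_1002", "cust_1003"]:
    print(pi, "-> AMP", home_amp(pi))

Because the hash depends only on the PI value, every node can compute a row's home AMP independently, with no central coordinator, which is what lets inserts and PI lookups parallelize so well.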

Nowadays the shared nothing architecture is adopted by most high-performance scalable DBMSs, including Teradata, Netezza, Greenplum, DB2, and Vertica. It is also used by most large-scale web and e-commerce platforms, including Amazon, Yahoo, Google, and Facebook.

In DB2 UDB Enterprise-Extended Edition (EEE), a partitioning key of one or more columns is chosen, and the hash of that key determines which node or nodegroup a row is sent to.
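A rough sketch of this two-level scheme, assuming an EEE-style partitioning map where the key hashes into a fixed number of buckets and a map assigns each bucket to a node in the nodegroup. The node list, map size, and function names here are illustrative, not DB2's internals.

import hashlib

NODES = [0, 1, 2, 3]                                   # hypothetical nodegroup
MAP_SIZE = 4096                                        # assumed map size
partition_map = [NODES[i % len(NODES)] for i in range(MAP_SIZE)]

def target_node(*key_columns) -> int:
    """Hash a (possibly multi-column) partitioning key to a node."""
    key = "|".join(str(c) for c in key_columns)
    bucket = int.from_bytes(hashlib.md5(key.encode()).digest()[:2], "big") % MAP_SIZE
    return partition_map[bucket]

print(target_node("order_42", "2010-11-04"))

The indirection through the map is the interesting design choice: when a node is added, buckets can be reassigned in the map and their rows moved, without changing the hash function or rehashing every row.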

Oracle, by contrast, takes a shared-disk approach; in Oracle, shared nothing exists only at the logical level. Once the degree of parallelism is chosen as a power of 2, the number of partitions is fixed and the partitions are generated by range-hash composite partitioning.
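A minimal sketch of range-hash composite partitioning, assuming range on a date column followed by hashing into 4 subpartitions (a power-of-2 degree of parallelism). The boundaries and column names are made up for illustration.

import hashlib

RANGE_BOUNDS = ["2010-04-01", "2010-07-01", "2010-10-01"]  # quarter cutoffs
HASH_SUBPARTS = 4  # power-of-2 degree of parallelism

def composite_partition(order_date: str, customer_id: str) -> tuple:
    """Return (range_partition, hash_subpartition) for a row."""
    r = sum(order_date >= b for b in RANGE_BOUNDS)            # range step
    h = int.from_bytes(hashlib.md5(customer_id.encode()).digest()[:2],
                       "big") % HASH_SUBPARTS                 # hash step
    return r, h

print(composite_partition("2010-08-15", "cust_1001"))  # -> (2, ...)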

Cons: a shared nothing architecture takes longer to respond to queries that join large data sets from different partitions. In Teradata, for example, OLTP is not efficient: CPU cycles are spread across several AMPs and PEs, and the PEs can easily become congested under a heavy load of OLTP requests.
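The sketch below shows why such joins are expensive, assuming two tables hash-distributed on different keys. To join orders to customers on customer_id, every orders row whose home node differs from its matching customer's node must be re-hashed and shipped across the interconnect before the local joins can run. All names and data are illustrative.

import hashlib

NUM_NODES = 4  # hypothetical node count

def node_of(key: str) -> int:
    """Deterministic hash of a key to a node number."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % NUM_NODES

# orders is distributed by order_id; customers by customer_id
orders = [("o1", "cust_A"), ("o2", "cust_B"), ("o3", "cust_A")]
redistribution = {}
for order_id, cust_id in orders:
    src = node_of(order_id)   # node where the orders row lives now
    dst = node_of(cust_id)    # node holding the matching customer row
    if src != dst:
        redistribution.setdefault((src, dst), []).append(order_id)

print(redistribution)  # rows that must move before the join can proceed

On large tables this redistribution step dominates the join cost, which is why shared nothing warehouses push so hard to co-locate joined tables on the same distribution key.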
