The Leganet System


The Leganet system: Freshness-aware transaction routing in a database cluster



Abstract

We address the use of a database cluster for Application Service Providers (ASPs). In the ASP context, applications and databases can be update-intensive and must remain autonomous. In this paper, we describe the Leganet system, which performs freshness-aware transaction routing in a database cluster. We use multi-master replication and relaxed replica freshness to increase load balancing. Our transaction routing takes into account the freshness requirements of queries at the relation level and uses a cost function that considers both the cluster load and the cost of refreshing replicas to the required level. We implemented the Leganet prototype on an 11-node Linux cluster running Oracle 8i. Using experimentation and emulation up to 128 nodes, our validation based on the TPC-C benchmark demonstrates the performance benefits of our approach.
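As a rough illustration of the routing idea summarized above (not the authors' actual implementation), the Python sketch below routes a transaction to the replica node that minimizes an assumed cost function combining current load with the estimated cost of refreshing that replica to the query's required freshness level. The node model, the linear refresh-cost estimate, and the weights alpha and beta are all hypothetical.

```python
# Hypothetical sketch of freshness-aware, cost-based routing (not the
# actual Leganet implementation): choose the node that minimizes a
# weighted sum of its current load and the cost of refreshing its
# replica to the freshness level the query requires.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    load: float       # current load estimate (e.g., active transactions)
    staleness: float  # how far this replica lags behind, in freshness units

def refresh_cost(node: Node, required_freshness: float) -> float:
    """Assumed linear model: cost of applying the missed updates needed
    for the replica to meet the query's freshness requirement."""
    return max(node.staleness - required_freshness, 0.0)

def route(nodes, required_freshness, alpha=1.0, beta=1.0):
    """Pick the cheapest node under the combined cost function."""
    return min(nodes, key=lambda n: alpha * n.load
                                    + beta * refresh_cost(n, required_freshness))

cluster = [Node("n1", load=0.9, staleness=0.0),
           Node("n2", load=0.2, staleness=5.0),
           Node("n3", load=0.4, staleness=1.0)]

# A query tolerating staleness up to 2.0 units goes to n3: n1 is fresh
# but heavily loaded, and n2 is idle but too expensive to refresh.
print(route(cluster, required_freshness=2.0).name)  # -> n3
```

The point of the cost function is the trade-off it captures: a perfectly fresh node may be overloaded, while an idle node may be too stale to use cheaply, so the router balances the two.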


The article I selected for this research paper is "The Leganet system: Freshness-aware transaction routing in a database cluster". It was published in Information Systems, Volume 32, Issue 2, April 2007, Pages 320-343. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.65.791&rep=rep1&type=pdf

Introduction

Database clusters now provide a cost-effective alternative to parallel database systems. A database cluster is a cluster of PC servers, each running an off-the-shelf DBMS. A major difference from parallel database systems implemented on PC clusters, for example Oracle Real Application Cluster, is the use of a "black-box" DBMS at each node, which avoids costly data migration. However, since the DBMS source code is not necessarily available and cannot be altered or extended to be "cluster-aware", additional capabilities like parallel query processing must be implemented by middleware. Database clusters make new businesses like Application Service Providers (ASPs) economically viable. In the ASP model, customers' applications and databases (including data and DBMS) are hosted at the provider's site and must be accessible, typically through the Internet, as efficiently as if they were local to the customer's site. Thus, the challenge for a provider is to fully exploit the cluster's parallelism and load balancing capabilities to obtain a good cost/performance ratio (Pacitti 2003).

The usual solution for good load balancing in a database cluster is to replicate applications and data at different nodes so that users can be served by any node depending on the current load. This also provides high availability since, in the event of a node failure, other nodes can still handle the work. This solution has been used successfully by Web search engines running high-volume server farms (e.g., Google). However, Web search engines are typically read-intensive, which makes it easier to exploit parallelism (Amza 2003).
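To make this concrete, here is a minimal sketch (all names hypothetical, not taken from the paper) of the replicate-everywhere scheme just described: reads go to the least-loaded replica, while every update must be applied at all replicas, which is why read-intensive workloads balance so much more easily than update-intensive ones.

```python
# Minimal sketch of full replication with load-based routing
# (hypothetical, for illustration only).

class ReplicatedCluster:
    def __init__(self, num_nodes: int):
        self.loads = [0] * num_nodes  # outstanding requests per node

    def route_read(self) -> int:
        """A read can be served by any replica: pick the least-loaded one."""
        node = min(range(len(self.loads)), key=self.loads.__getitem__)
        self.loads[node] += 1
        return node

    def route_write(self) -> list:
        """With full replication, an update must reach every replica,
        so writes add load everywhere and cannot be balanced away."""
        for n in range(len(self.loads)):
            self.loads[n] += 1
        return list(range(len(self.loads)))

cluster = ReplicatedCluster(4)
print(cluster.route_read())  # -> 0 (all nodes equally loaded, first wins)
print(cluster.route_read())  # -> 1 (node 0 is now busier)
cluster.route_write()        # every node's load increases by one
```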

In the ASP context, the problem is far more difficult. First, applications and databases must remain autonomous, i.e., remain unchanged when moved to the provider site's cluster and remain under the control of their clients as if they were local, using the same DBMS. Preserving autonomy is critical to avoid the high costs and problems associated with code modification. Second, applications can be update-intensive and the use of replication can create consistency problems ...