Cloud Data Management



[Name of the Institute]


Chapter 1: Introduction and Objectives

Introduction

Research Problem

Objectives

Project Beneficiaries

Background of the Study

Purpose of the Study

Rationale of the Study

Case Study

Challenge

The Enterprise Impact of Cloud Computing

Pricing Transparency

Concerns

Conclusion

Work Plan and Structure of the Report

Purpose

Design/methodology/approach

Findings

References


Chapter 1: Introduction and Objectives

Introduction

Advances in technology for exchanging, computing, and storing data have spawned huge collections of data, and this is imposing new demands on data centre infrastructure. Meeting the ever-increasing demands of current business conditions has led organizations to rethink the responsiveness of their data centres, and to achieve it, IT is moving towards virtualization. Cloud Computing, a remarkable development in the computing paradigm, is a natural extension of this move towards virtualization. Many solutions have been formulated to manage data using the concepts of Cloud Computing. In the last year or so, the buzz around cloud and virtual computing has grown almost exponentially. Not surprisingly, confusion and cynicism have grown in direct proportion to the seemingly unending hype generated by these two concepts.

Industry enterprise applications are highly complex, especially those consisting of a large number of software components. Each software component (e.g. Oracle, WebSphere, DB2) generally produces numerous log files containing information on system-level events, and the volume of log data produced by a typical enterprise application can be extremely large. Administrators therefore often need logging facilities to help analyze the log data. One problem with such facilities is that most are tied to a particular application or vendor, so adopting several of them usually requires administrators to be trained in new skills. Without a common facility, another problem arises: the format and level of detail in the log files can differ between applications. These differing layouts of event data make correlation and analysis problematic and time-consuming. In addition, administrators often need such correlated data to understand the overall behaviour of the system and to identify the actual problem when issues arise.
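To make the format problem concrete, the following Python sketch (the log line formats, source names, and regular expressions are illustrative assumptions, not taken from any particular product) normalizes lines from two differently formatted log sources into a single event schema that later correlation steps can share.

```python
import re
from datetime import datetime
from typing import NamedTuple, Optional

class Event(NamedTuple):
    """Normalized log event shared by all sources."""
    timestamp: datetime
    source: str   # which application produced the line
    level: str    # severity, upper-cased for consistency
    message: str

# Hypothetical layouts: a WebSphere-style line and a DB2-style line.
WEBSPHERE_RE = re.compile(
    r"\[(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\] (?P<level>\w+) (?P<msg>.*)")
DB2_RE = re.compile(
    r"(?P<ts>\d{4}/\d{2}/\d{2}-\d{2}:\d{2}:\d{2})\s+(?P<level>\w+):\s+(?P<msg>.*)")

def parse_line(source: str, line: str) -> Optional[Event]:
    """Map one raw log line onto the shared Event schema, or None if unparsable."""
    if source == "websphere":
        m, fmt = WEBSPHERE_RE.match(line), "%Y-%m-%d %H:%M:%S"
    elif source == "db2":
        m, fmt = DB2_RE.match(line), "%Y/%m/%d-%H:%M:%S"
    else:
        return None
    if not m:
        return None
    return Event(datetime.strptime(m.group("ts"), fmt),
                 source, m.group("level").upper(), m.group("msg"))

if __name__ == "__main__":
    print(parse_line("websphere", "[2024-01-05 10:15:02] WARN pool exhausted"))
    print(parse_line("db2", "2024/01/05-10:15:03 error: lock timeout"))
```

Once every source maps onto the same Event record, correlation reduces to operating on one stream of uniform records, regardless of which product produced each line.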

The task of aggregating events from different log files can be challenging. For manual testing teams it is almost impossible to correlate and analyze each log at run time, because applications often update the log files too quickly; testing teams generally need to wait until the applications stop running. This is particularly problematic for timely root cause analysis, which requires correlated event data at run time (a minimal merge sketch follows below).

Traditionally, organizations have implemented applications by installing the application software on one or more physical servers. In many cases, multiple servers are used when an application services a high transaction volume, or when there needs to be a high level of assurance that the failure of a single server will not cause the application to fail completely. A classic example of this type of application is a university's student registration system. During peak registration periods, transaction levels will be ...
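Returning to the run-time aggregation problem noted above, the following sketch shows one way to merge several log files into a single timestamp-ordered stream. The file names are hypothetical, and the assumption that every line begins with an ISO-style timestamp is made purely for illustration; real formats would first pass through a normalizer like the one sketched earlier.

```python
import heapq
from datetime import datetime

def events_from(path: str, source: str):
    """Yield (timestamp, source, raw_line) for each line of one log file.
    Assumes every line starts with a 19-character ISO timestamp."""
    with open(path) as fh:
        for line in fh:
            ts = datetime.fromisoformat(line[:19])  # e.g. '2024-01-05 10:15:02'
            yield ts, source, line.rstrip("\n")

def merged_timeline(paths_by_source: dict[str, str]):
    """Merge per-file event streams into one timestamp-ordered stream.
    heapq.merge keeps memory use flat: only one pending event per file.
    Each input file is assumed to be internally ordered by timestamp."""
    streams = [events_from(path, src) for src, path in paths_by_source.items()]
    return heapq.merge(*streams)  # tuples compare element-wise: timestamp first

if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    for ts, src, line in merged_timeline(
            {"websphere": "websphere.log", "db2": "db2.log"}):
        print(f"{ts}  [{src}]  {line}")
```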