Scaling Study

Introduction

People live in a three-dimensional space. They know what is up, down, left, right, near, and far, and they can tell when something is approaching or moving away. Traditional personal computers, however, could only present a two-dimensional space because of the limited capabilities of earlier video cards. With the new technology introduced to the video card industry in recent years, video cards can now render 3D graphics. Most PC games today are three-dimensional, and some web sites also make use of three-dimensional space: they are no longer flat homepages but virtual worlds. With that added dimension, they look more realistic and attractive. Three-dimensional interfaces do not yet exist in most business programs, but it can be expected that they are not far away.

Discussion

CPU scaling is a measure of how much workload can be driven as CPU resources are increased. The workload increases when the total number of transactions or the transaction rate goes up. For this workload, the workload submission rate (the rate at which work is submitted to the J2EE middleware layer) has to be increased. However, an increased workload in this study also requires a larger database, which means that not only the workload but the whole environment must be scaled. Scaling the whole environment can affect performance in ways beyond simply doing more work against the same data.
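To make the idea of a workload submission rate concrete, the following sketch shows a minimal fixed-rate workload driver in Java. It is not the study's actual load-generation tooling; the SUBMISSION_RATE value and the submitTransaction() placeholder are assumptions used only to illustrate how work might be pushed to the middleware layer at a controlled rate.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/**
 * Minimal sketch of a fixed-rate workload driver. submitTransaction()
 * is a placeholder for whatever request the client sends to the
 * J2EE middleware layer; it is not part of the study's tooling.
 */
public class WorkloadDriver {

    // Hypothetical submission rate: transactions submitted per second.
    private static final int SUBMISSION_RATE = 600;

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        long periodMicros = 1_000_000L / SUBMISSION_RATE;

        // Submit one unit of work every period; raising SUBMISSION_RATE
        // increases the load placed on the WebSphere system.
        scheduler.scheduleAtFixedRate(
                WorkloadDriver::submitTransaction,
                0, periodMicros, TimeUnit.MICROSECONDS);
    }

    private static void submitTransaction() {
        // Placeholder: send one request to the middleware layer.
    }
}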

Maximizing CPU utilization

To determine the performance characteristics of the workload, measurements are taken using one, two, four, and eight dedicated CPUs on the WebSphere® system. A workload entry rate is chosen that is high enough to drive the CPUs to near full utilization. The results give a better understanding of the scalability of the workload and provide a way to measure performance differences between the same workload on 64-bit WebSphere and on 31-bit WebSphere.
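As an illustration only, the sketch below shows how per-configuration throughput numbers from such a series of runs can be summarized as scaling factors and efficiencies relative to the one-CPU case. The throughput values are invented placeholders, not measurements from this study.

/**
 * Sketch of how scalability can be summarized from per-configuration
 * throughput measurements. The throughput numbers below are invented
 * placeholders, not results from the study.
 */
public class ScalingSummary {

    public static void main(String[] args) {
        int[] cpus = {1, 2, 4, 8};
        double[] throughput = {100.0, 195.0, 380.0, 720.0}; // hypothetical tx/sec

        double base = throughput[0];
        for (int i = 0; i < cpus.length; i++) {
            double scalingFactor = throughput[i] / base; // speed-up vs. 1 CPU
            double efficiency = scalingFactor / cpus[i]; // 1.0 = perfectly linear
            System.out.printf("%d CPU(s): factor %.2f, efficiency %.2f%n",
                    cpus[i], scalingFactor, efficiency);
        }
    }
}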

In all 64-bit WebSphere measurements, the JVM heap is set to 75% of the 8 GB of available memory. This is the optimum percentage derived from the study Heapsize for the 64-bit Java Virtual Machine. That works out to 64-bit WebSphere JVM heap settings of -Xms6144m -Xmx6144m. A memory size of 8 GB is also configured for the DB2® LPAR, which runs with four configured CPUs for all of the tests.
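The heap values follow directly from that 75% rule; the small sketch below merely reproduces the arithmetic (75% of 8 GB = 6144 MB) and prints the resulting JVM options. It adds nothing beyond the numbers already stated above.

/**
 * Arithmetic behind the heap settings: 75% of the 8 GB configured for
 * the WebSphere system. The 75% figure comes from the referenced
 * heap-size study; this only derives the -Xms/-Xmx values from it.
 */
public class HeapSizing {

    public static void main(String[] args) {
        int availableMemoryMb = 8 * 1024;              // 8 GB of memory
        int heapMb = (int) (availableMemoryMb * 0.75); // 75% rule -> 6144 MB

        // Identical minimum and maximum avoid heap resizing during the run.
        System.out.printf("-Xms%dm -Xmx%dm%n", heapMb, heapMb);
    }
}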

10 Gb Ethernet chosen for highest workload

The workload submission rate of 600 was found to exceed the capacity of the 1 Gb Ethernet network. This caused network saturation, which dampened throughput and created additional error-handling work. The tests at a submission rate of 600 are therefore run over 10 Gb Ethernet to remove the effects of a network bottleneck from the results. A submission rate higher than 600 would have required a larger restructuring of the environment, because of the higher resource usage from the clients to WebSphere and up to the ...
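As a rough, hypothetical illustration of why a fixed submission rate can saturate a network link, the sketch below compares estimated traffic demand against 1 Gb Ethernet capacity. The bytes-per-transaction figure is an assumption made only for this example; the study does not report it.

/**
 * Back-of-the-envelope check of whether a given submission rate
 * saturates a network link. The per-transaction byte count is a
 * made-up assumption; the study does not report that figure.
 */
public class NetworkSaturation {

    public static void main(String[] args) {
        double submissionRate = 600;            // transactions per second
        double bytesPerTransaction = 250_000;   // hypothetical request+response bytes
        double linkCapacityBytes = 1_000_000_000 / 8.0; // 1 Gb Ethernet ~ 125 MB/s

        double demand = submissionRate * bytesPerTransaction;
        System.out.printf("Demand: %.0f MB/s of %.0f MB/s capacity (%.0f%%)%n",
                demand / 1e6, linkCapacityBytes / 1e6,
                100 * demand / linkCapacityBytes);
    }
}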