Computer Science-Memory Management

Abstract

This study explores the significance and implementation of memory management, together with its processes and threads. A comprehensive and detailed review was conducted to establish how memory management is used and what functions it serves. Because memory management is a complicated field, several of its fundamental ideas are also described in this research.

Computer Science-Memory Management

Introduction

Memory management is significant for every programming language. It may be automatic or, in some cases, manual. Automatic memory management is also termed garbage collection: a technique that deallocates memory automatically once the program can no longer reach or use it.
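To make the distinction concrete, the sketch below contrasts manual deallocation with automatic, reference-counted deallocation in C++. It uses only the standard library; the Buffer type is a placeholder introduced purely for illustration. (Reference counting is one simple form of automatic reclamation; languages such as Java instead use a tracing collector.)

    #include <memory>

    struct Buffer { int data[256]; };    // placeholder type for illustration

    int main() {
        // Manual memory management: every new must be paired with a delete.
        Buffer* raw = new Buffer();
        // ... use raw ...
        delete raw;                      // forgetting this line leaks the allocation

        // Automatic deallocation: the Buffer is freed as soon as the last
        // shared_ptr that refers to it is destroyed.
        std::shared_ptr<Buffer> managed = std::make_shared<Buffer>();
        // ... use managed ...
        return 0;                        // no explicit delete needed
    }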

Discussion

The Memory Model

The C++ memory model lets programmers reason about well-synchronized programs as if they executed in a sequentially consistent manner, which also makes their performance easier to reason about. Programs that contain data races are not well synchronized, and to define precisely what a data race is, the model introduces the happens-before and synchronizes-with relations.
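A minimal sketch of a data race, assuming only the C++11 standard library: two threads increment a plain int with no synchronization, so neither thread's accesses happen-before the other's.

    #include <thread>

    int counter = 0;                      // plain, non-atomic shared variable

    void work() {
        for (int i = 0; i < 100000; ++i)
            ++counter;                    // conflicting, unsynchronized accesses
    }

    int main() {
        std::thread t1(work), t2(work);   // neither thread's writes happen-before the other's
        t1.join();
        t2.join();
        // This program has a data race, so its behavior is undefined under the
        // C++ memory model; in practice the final value of counter is unpredictable.
        return 0;
    }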

In the memory model, a program execution is sequentially consistent when the program is data-race free; otherwise its behavior is undefined. The atomic library provides equivalents of Java's volatile. The model makes reasoning about multithreaded programs possible, tells the compiler which optimizations are not acceptable, allows reasoning that is independent of the processor, and gives multiprocessor designers a set of requirements. Without it, multithreaded programs are not portable between compilers and processors, if they work at all, because the effective memory model is whatever a specific vendor implements.
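As a sketch of how the atomic library removes the race above (assuming C++11), a release store and an acquire load establish a synchronizes-with edge, which in turn gives the happens-before ordering that protects the ordinary data:

    #include <atomic>
    #include <cassert>
    #include <thread>

    int payload = 0;                       // ordinary data, ordered by the flag below
    std::atomic<bool> ready(false);        // plays the role of a Java volatile

    void producer() {
        payload = 42;                      // (1) sequenced before the release store
        ready.store(true, std::memory_order_release);
    }

    void consumer() {
        while (!ready.load(std::memory_order_acquire))
            ;                              // acquire load synchronizes-with the release store
        assert(payload == 42);             // (1) happens-before this read, so there is no race
    }

    int main() {
        std::thread t1(producer), t2(consumer);
        t1.join();
        t2.join();
        return 0;
    }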

A great deal of multithreading is nonetheless done in C++. Before the standard memory model, a multithreaded program had no defined meaning, so code often fell back on the volatile keyword; some compilers "cheated" by giving it stronger guarantees than the language required, and such code worked only on a few processors. C++0x therefore provides a memory model under which race-free programs appear to execute sequentially and can be relied upon (Graham, 2001).
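A sketch of the difference, again assuming C++11: the old volatile idiom is shown only in a comment, because standard C++ gives volatile no inter-thread guarantees, while the atomic flag has defined, portable semantics.

    #include <atomic>
    #include <thread>

    // Pre-C++0x idiom (not portable): volatile bool stop = false;
    // volatile provides neither atomicity nor inter-thread ordering in standard C++.
    std::atomic<bool> stop(false);        // defined, portable replacement

    void worker() {
        while (!stop.load(std::memory_order_acquire)) {
            // ... do a unit of work ...
        }
    }

    int main() {
        std::thread t(worker);
        stop.store(true, std::memory_order_release);   // race-free signal to the worker
        t.join();
        return 0;
    }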

Global and Local Memory

GPU hardware characteristics are discussed in a great deal of online material, and that material differs depending on which hardware vendor, and even which hardware family within a vendor, it focuses on. From the perspective of a C++ AMP developer, a rough, high-level, cross-hardware picture (each vendor has its own specific terms and details) is as follows: array and array_view data reside in the GPU's global memory, and each thread also has registers, which is where local variables usually go.
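A minimal C++ AMP sketch of that picture: the array_view's data lives in global memory while the kernel runs, and the lambda's local variable typically lives in a per-thread register. The sizes and names here are illustrative only.

    #include <amp.h>
    #include <vector>
    using namespace concurrency;

    int main() {
        std::vector<int> host(1024, 1);
        array_view<int, 1> av(static_cast<int>(host.size()), host);  // backed by GPU global memory during the kernel

        parallel_for_each(av.extent, [=](index<1> idx) restrict(amp) {
            int local = av[idx];     // local variable: typically a per-thread register
            av[idx] = local * 2;     // write back to global memory
        });

        av.synchronize();            // copy results back to the host vector
        return 0;
    }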

Moreover, GPUs have a small scratchpad memory next to each processing element. Compared with global memory it can be accessed much more quickly, on the order of ten cycles per access. This memory area is a programmable cache, also termed a local data store. CPU caches are managed transparently and automatically, delivering their performance benefits without any effort from the user; this GPU cache, by contrast, must be managed explicitly by the programmer.
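In C++ AMP this scratchpad is exposed as tile_static storage inside a tiled parallel_for_each. The sketch below (assuming a data size that is a multiple of the tile size, and using illustrative names) stages data in the scratchpad, synchronizes the tile, and then reads it back from the fast memory:

    #include <amp.h>
    #include <vector>
    using namespace concurrency;

    void scale_with_scratchpad(std::vector<float>& data) {   // data.size() assumed to be a multiple of 256
        array_view<float, 1> av(static_cast<int>(data.size()), data);

        parallel_for_each(av.extent.tile<256>(),
            [=](tiled_index<256> tidx) restrict(amp) {
                tile_static float staged[256];               // scratchpad / local data store
                staged[tidx.local[0]] = av[tidx.global];     // one global-memory read per thread
                tidx.barrier.wait();                         // make the whole tile's staged data visible
                av[tidx.global] = staged[tidx.local[0]] * 2.0f;  // reuse from the fast memory
            });

        av.synchronize();
    }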