Pages: 4 pages/≈1100 words
Sources: 1 Source
Level: APA
Subject: IT & Computer Science
Type: Essay
Language: English (U.S.)
Document: MS Word
Total cost: $21.60
Topic: Parallel Computing (Essay Sample)

Instructions:
I wrote this essay for a computer science client and the results were great. The sample explains parallel computing and the various technologies involved in it. The essay is divided into several questions. Some of the topics covered include distributed memory and shared memory. Distributed memory is an architecture in which the computing system accesses memory at various locations, which allows task execution to continue whenever any one memory location breaks down.
Content:
Parallel Computing
(Your Name)
University
Subject code and name
Lecturer
Date

Parallel Computing

Question 1
* Cache is a computing component that stores frequently accessed data so that processors or other applications can retrieve it quickly when required. It reduces the time needed to search local memory for the required data.
* Cache hit is the event that occurs when the data needed is found in the cache memory.
* SIMD stands for Single Instruction, Multiple Data. It is a form of parallel processing in which processors work simultaneously on the same instruction but on different data.
* SISD stands for Single Instruction, Single Data. In this form of processing, one processor executes a single instruction on one piece of data at a time.
* MIMD stands for Multiple Instruction, Multiple Data. It is a parallel computing architecture in which several processors execute different instructions on different pieces of data.
* Race condition is a condition that occurs in parallel computing when several threads access shared resources without proper coordination, leading to inconsistent or unpredictable results.
* Pipelining is a technique used in computer architecture to reduce execution time by overlapping the various stages of instruction processing.

Question 2
Distributed Memory
Parallel computing systems frequently employ distributed memory as a memory architecture. Every processing node has its own memory unit, and the total storage space is distributed across multiple nodes. Nodes can directly access only their local memories, and communication between nodes takes place by message passing to transfer data.

Diagram of Distributed Memory: CPU 0 with Memory 0, CPU 1 with Memory 1, and CPU 2 with Memory 2, all joined by a communication network.

Distributed memory is a NUMA architecture since access times vary between the various memories. This is because of the nature of the data accessed, the size of the data, and the distance of the memory from the requesting host.

Question 3
Shared Memory
Shared memory is also used in parallel computing architectures. In this arrangement, several processors access the same memory for data and other resources during execution.

Diagram of Shared Memory: CPU 0, CPU 1, and CPU 2 all connected to a single shared memory.

Shared memory is UMA because all processors have uniform access times to all memory locations, regardless of their physical location in the system.

Question 4
* In a shared-memory system, "critical" memory protection (critical sections) is commonly used because multiple processors or processing units concurrently access the same memory locations, so access must be carefully managed to avoid corruption or data inconsistencies; a short code sketch illustrating this follows below.
* Distributed-memory systems often have to send a lot of inter-process messages because each processing node can access only its own local memory, so any data another node needs must be communicated explicitly, and these messages also coordinate and synchronize the updating of data among the different processes.
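For illustration only (this sketch is not part of the original essay), the race condition defined in Question 1 and the critical-section protection discussed in Question 4 can be made concrete with POSIX threads in C: two threads increment a shared counter, and a mutex acts as the critical section that keeps the shared-memory update consistent. The file name and the constant INCREMENTS are arbitrary choices for the example.

/* Illustrative sketch: two threads updating one shared counter.
   Build with: cc -pthread race_demo.c -o race_demo              */
#include <pthread.h>
#include <stdio.h>

#define INCREMENTS 1000000L

static long counter = 0;                       /* shared memory visible to both threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (long i = 0; i < INCREMENTS; i++) {
        pthread_mutex_lock(&lock);             /* enter the critical section */
        counter++;                             /* without the lock, this update races */
        pthread_mutex_unlock(&lock);           /* leave the critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("Final counter: %ld (expected %ld)\n", counter, 2 * INCREMENTS);
    return 0;
}

If the lock/unlock pair around counter++ is removed, the final value typically falls short of the expected total and varies from run to run, which is exactly the race condition described in Question 1.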
Question 5
Process
A process is the execution of instructions by a computer system, using the program code, data, and resources required. Processes run independently of one another, and the operating system allocates each of them a separate memory space.

Thread
A thread is a lightweight unit of execution that lives inside a process and shares that process's memory space and resources. However, each thread has its own execution stack and program counter, which allows concurrent execution within the same process.

Differences
* Processes are independent entities with separate memory spaces, whereas threads share memory within one process; processes consume more resources and are more isolated, while threads share resources fully and communicate with one another more easily.
* Threads are preferred for shared-memory architectures because they exchange information efficiently through the common address space, whereas processes, being isolated with separate address spaces, are better suited to distributed-memory architectures (see the sketch at the end of this sample).

Question 6
1 Snooping-Based Protocol
Caches monitor the system bus for memory transactions initiated by other processors. If a cache detects a write operation to an address it holds, it invalidates or updates its copy. While simple to implement, snooping protocols may cause bus contention and scalability issues as the processor count grows.

2 Directory-Based Prot...
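As a closing illustration (again, not part of the original essay), the message passing described in Question 2 and the suitability of processes for distributed-memory systems noted in Question 5 can be sketched with the standard MPI library in C: every rank owns its own local copy of the data, and rank 0 must explicitly send its value over the communication network before rank 1 can see it. The value 42 and the program name are arbitrary.

/* Illustrative sketch: point-to-point message passing between two ranks.
   Build with an MPI wrapper, e.g.: mpicc send_recv.c -o send_recv
   Run with at least two processes, e.g.: mpirun -np 2 ./send_recv       */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {                            /* the example needs two ranks */
        if (rank == 0)
            fprintf(stderr, "Run with at least two processes.\n");
        MPI_Finalize();
        return 1;
    }

    if (rank == 0) {
        value = 42;                            /* exists only in rank 0's local memory */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        printf("Rank 0 sent %d\n", value);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received %d\n", value); /* a copy now exists in rank 1's memory */
    }

    MPI_Finalize();
    return 0;
}

Because the ranks share no memory, the receive is the only way the data reaches rank 1; this explicit inter-process messaging is what Question 4 attributes to distributed-memory systems.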