Pages: 4 pages/≈1100 words
Sources: 8
Level: Harvard
Subject: Technology
Type: Essay
Language: English (U.S.)
Document: MS Word
MPI Shared Communication on Infiniband and Gigabit Ethernet Clusters (Essay Sample)

Instructions:

I need an essay in the area of message-passing systems, and I need the title to be narrow enough to analyze. Here is the reply from my lecturer: "MPI as a topic is fine, but you need to bound the question a bit more -- what's the title? Otherwise, it will be an overview of MPI, which will not enable you to display the right level of critical analysis, etc." You should identify the trend or topic yourself, but it should be inspired by lecture materials as well as your wider research. You should structure the essay to appropriately introduce the topic or theme, giving some historical context, as well as a critical analysis of the current trend, with thoughts for future developments. As always, your work should be appropriately referenced and evidence-based. Marks will be allocated according to the following scheme:
Identification of an appropriate topic, contextualised to module themes: 10%
Explanation of historical context and development: 20%
Clear understanding and critical analysis of current trend: 40%
Identification of future development: 20%
Referencing, structure, clarity of language: 10%
Total: 100%

Content:

MPI SHARED COMMUNICATION ON INFINIBAND AND GIGABIT ETHERNET CLUSTERS
By Student’s Name
Code + Name of Course
Professor/Tutor
Institution
City/State
Date
MPI SHARED COMMUNICATION ON INFINIBAND AND GIGABIT ETHERNET CLUSTERS
Introduction
MPI (the Message Passing Interface) is a standardized, portable message-passing system. Message passing is used on distributed-memory machines to implement parallel programs (Gropp, Ewing & Anthony 1999, p. 19). In this model, every executing process communicates and shares its data with the others by sending and receiving messages. The MPI specification is the product of the MPI Forum and defines a standard for portable message passing; the standard deliberately omits explicit shared-memory operations, explicit thread support, and debugging facilities. The argument of this essay is that techniques proposed and developed to improve MPI implementations on high-performance clusters such as Gigabit Ethernet and InfiniBand are valuable for parallel programming, because the performance of most cluster applications depends critically on the communication performance of the routines provided by the MPI library.
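As a minimal sketch of this model (the payload value and ranks are illustrative, not drawn from the essay's sources), the following C program sends one integer from process 0 to process 1:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value = 0;

        MPI_Init(&argc, &argv);               /* start the MPI runtime        */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank identifies this process */

        if (rank == 0) {
            value = 42;                       /* illustrative payload */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and launched with, for example, mpirun -np 2, the same source runs unchanged over InfiniBand or Gigabit Ethernet, which is the portability the standard aims for.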
Historical Context and Development
Work on the Message Passing Interface started in 1991, when a group of researchers began deliberations at a retreat in Austria. Discussions continued at a workshop on Standards for Message Passing in a Distributed Memory Environment, held in Virginia in 1992. That workshop debated the features essential to a standard message-passing interface, and a working group was created to begin the standardization process (Gropp, Ewing & Anthony 1999, p. 27). Later the same year, three researchers put forward a preliminary draft proposal, known as MPI1, and a meeting of the MPI team was held in Minneapolis. The MPI working group met frequently throughout 1993 and comprised members mainly from the United States and Europe. The MPI standard defines the syntax and semantics of a core set of library routines useful to a wide range of users writing portable message-passing programs in C and Fortran (Gropp, Ewing & Anthony 2007, p. 790). In an effort to establish a genuine platform for message passing, the researchers incorporated the most useful features of a number of existing systems into MPI rather than adopting any single system as the standard. Features were drawn from p4, Express, the IBM and Intel libraries, and PVM, among other systems.
Today, the message-passing standard is attractive because of its wide portability. It can be used for communication on distributed-memory and shared-memory multiprocessors, networks of workstations, and combinations of these elements (Foster & Nicholas 1988, p. 4). The paradigm applies in many settings, regardless of memory architecture or network speed.
Critical Analysis of Current Trend
The design of an MPI implementation raises crucial issues for high-performance computing systems, especially as processor technology advances. One way to benchmark an MPI implementation on multi-core architectures is to measure Open MPI collective communication performance on Gigabit Ethernet and InfiniBand clusters using SKaMPI (Ismail et al. 2013, p. 455).
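As a rough illustration of what such a benchmark measures (a hand-written sketch, not SKaMPI itself; the message size and repetition count are arbitrary), the following C program times repeated broadcasts with MPI_Wtime:

    #include <mpi.h>
    #include <stdio.h>

    #define REPS  1000   /* arbitrary repetition count        */
    #define COUNT 1024   /* arbitrary message size: 1024 ints */

    int main(int argc, char **argv)
    {
        int rank;
        int buf[COUNT] = {0};
        double t0, t1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);   /* synchronize all ranks before timing */
        t0 = MPI_Wtime();
        for (int i = 0; i < REPS; i++)
            MPI_Bcast(buf, COUNT, MPI_INT, 0, MPI_COMM_WORLD);
        t1 = MPI_Wtime();

        if (rank == 0)
            printf("mean broadcast time: %g seconds\n", (t1 - t0) / REPS);

        MPI_Finalize();
        return 0;
    }

Running the same loop on an InfiniBand cluster and on a Gigabit Ethernet cluster exposes the latency gap between the two interconnects discussed below; a dedicated tool such as SKaMPI additionally varies message sizes and process counts to build a fuller performance profile.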
In recent years, clusters have developed into the key architecture for high-performance computing. The growing use of clusters for high-performance computing has prompted extensive research in the field, especially into the standard methods used for communication between nodes (Gropp, Ewing & Anthony 1999, p. 29). Another significant factor affecting the communication performance of clusters is the cluster interconnect: a slow interconnect can hold back the whole computation. A good cluster interconnect should offer a non-blocking architecture, low latency, and high bandwidth. Consequently, several proposals have been established to improve MPI implementations on high-performance clusters such as Gigabit Ethernet and InfiniBand (Ismail et al. 2013, p. 455).
At present, Gigabit Ethernet and InfiniBand are the most widely used interconnects in high-performance computing. According to 2012 data from the TOP500 list of supercomputers, InfiniBand topped the list at 44.8 per cent of systems, with Gigabit Ethernet next at 37.8 per cent. Gigabit Ethernet is a local area network technology with typical latencies in the range of 50-300 μs (Foster & Nicholas 1988, p. 7); it currently delivers nearly 1 Gbit per second of usable bandwidth over TCP/IP. InfiniBand, by contrast, supplies higher bandwidth and lower latency than Gigabit Ethernet: its latency ranges between 2 and 10 μs (Buyya 1999, p. 10), and it can sustain a network bandwidth of up to 11,000 Mbytes per second. With multi-path support, InfiniBand gives better throughput than Gigabit Ethernet, because latency limits throughput in a high-performance computing network. Nevertheless, a high-speed InfiniBand network is considerably more costly than Gigabit Ethernet (Gropp, Ewing & Anthony 2007, p. 790).
Given that most clusters employ these two kinds of interconnect for transferring data between nodes, it is important that MPI is implemented efficiently on top of the cluster interconnect to attain optimum performance, so the assessment and examination of MPI routines on clusters is crucial. Benchmarks of MPI collective communication on the InfiniBand and Gigabit Ethernet clusters of UPM are usually carried out using SKaMPI, a commonly used benchmark tool for MPI (Friedley, Bronevetsky & Lumsdaine 2013, p. 16). The results are significant for further research on Open MPI implementations on multi-core clusters.
Groups of processes can transmit data through collective communication, and MPI_Bcast is one of the most commonly used collective routines. It lets the root process broadcast a message to all the processes in the communicator: a broadcast has a single root process, and every process receives a copy of the root's message (Buyya 1999, p. 16). All processes must specify the same root; the root argument is the rank of the root process.
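A brief C sketch of these semantics (the payload value is illustrative) shows that every rank makes the same MPI_Bcast call, naming rank 0 as the root:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, data = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0)
            data = 99;   /* illustrative message held only by the root */

        /* Every process calls MPI_Bcast with the same root (0 here);
           on return, each process holds its own copy of the root's data. */
        MPI_Bcast(&data, 1, MPI_INT, 0, MPI_COMM_WORLD);

        printf("rank %d now has data = %d\n", rank, data);

        MPI_Finalize();
        return 0;
    }

Note that there is no separate receive call: the broadcast is a single collective operation, and its cost, as the benchmarks above indicate, depends heavily on the underlying interconnect.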
Future Development of MPI
In the near future, most MPI systems in high-performance computing will have a hierarchical hardware design, such as ccNUMA clusters or clusters of shared-memory nodes (Keller, Edgar, Michael & Jack 2011, p. 12). In this design, every node contains a number of multi-core central processing units. Parallel programming will have to combine distributed-memory parallelization across the node interconnect with shared-memory parallelization within each node (Friedley, Bronevetsky & Lumsdaine 2013, p. 6). There will be minimal mismatch between the hybrid hardware topology and the homogeneous or hybrid parallel programming models running on such hardware (Keller et al. 2011, p. 14). A combination of hybrid OpenMP and MPI programming will be faster than ...
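A minimal sketch of this hybrid model (the FUNNELED threading level is an illustrative choice, not prescribed by the sources) uses MPI between nodes and OpenMP within a node:

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, provided;

        /* Request FUNNELED support: only the main thread makes MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Shared-memory parallelism inside each node via OpenMP. */
        #pragma omp parallel
        {
            printf("MPI rank %d, OpenMP thread %d of %d\n",
                   rank, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }

Built with, for example, mpicc -fopenmp and launched with one MPI process per node, this arrangement maps the distributed-memory and shared-memory layers of the program onto the corresponding layers of the hardware.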