Over the last twenty-five years, computer architecture has undergone an enormous wave of evolution and innovative engineering. These advances underpin the efficiency and performance of virtually every modern technological product. CPU design, and its integration into computers of different sizes and functions, remains a central focus of the electronics industry: the more computation a processor can perform, the larger the infrastructure systems that can be built around it. Among the advances that have most shaped the industry are RISC, pipelining, cache memory, and virtual memory. This paper traces the development and current state of each of these four technologies.
Reduced Instruction Set Computing (RISC)
RISC has played a vital role in the development of computer technology over the last twenty-five years. RISC was developed at the IBM Thomas J. Watson Research Center in the 1980s. The concept originated in the IBM 801 minicomputer project of the mid-1970s, which was intended as a fast controller for a large telephone switching system. According to Lee (2016), RISC is a microprocessor design that executes a smaller set of computer instructions so that the processor can operate at higher speed. As the technology progressed, certain design attributes became distinctive of RISC processors: single-cycle instruction execution, pipelining, and a large number of registers.
Before the development of RISC, CISC was widely used. CISC aimed to accomplish tasks in as few lines of assembly code as possible, building intricate multi-step commands directly into the hardware. As a result, individual CISC instructions often required many machine cycles and took longer to complete. With the new RISC design, the microprocessor executed a restricted set of instructions, but it could run them much faster because each instruction was simple. In 1987, Sun Microsystems began shipping machines based on the SPARC architecture, a derivative of the Berkeley RISC-II design. The success of SPARC helped persuade other corporations to adopt RISC, and continued refinement of the design has dramatically improved its capability over the years. IBM followed with new RISC designs of its own, announcing the POWER architecture in 1990 and PowerPC in 1993.
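The contrast between the two philosophies can be sketched with a toy model (the mnemonics and machine state here are invented for illustration, not taken from any real instruction set): one complex memory-to-memory CISC instruction does the same work as a sequence of four simple RISC instructions.

```python
# Hypothetical machine state: a flat memory and a small register file.
memory = {0x10: 7, 0x14: 5, 0x18: 0}
regs = {}

def cisc_add_mem(dst, src1, src2):
    # One complex instruction: read both operands from memory, add them,
    # and write the result back (many machine cycles in hardware).
    memory[dst] = memory[src1] + memory[src2]

def risc_add(dst, src1, src2):
    # The same work as four simple single-cycle instructions:
    regs["r1"] = memory[src1]             # LOAD  r1, src1
    regs["r2"] = memory[src2]             # LOAD  r2, src2
    regs["r3"] = regs["r1"] + regs["r2"]  # ADD   r3, r1, r2
    memory[dst] = regs["r3"]              # STORE r3, dst

risc_add(0x18, 0x10, 0x14)
print(memory[0x18])  # 12
```

The RISC sequence is longer in instruction count, but because each step is simple and uniform it can be executed quickly and overlapped in a pipeline, which is where the speed advantage described above comes from.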
Pipelining has also played a vital role in the progression of computer design. According to Pantazi-Mytarelli (2013), pipelining is the continuous, partially overlapped movement of instructions into a CPU, or of the arithmetic steps the processor takes to carry out an instruction. Without pipelining, each instruction is fetched from memory and processed one at a time; while waiting for its next instruction, the arithmetic section of the processor sits idle. With pipelining, the next set of instructions can be fetched while the processor is still executing the current arithmetic commands, increasing the number of instructions that can be completed in a given period.
According to Pantazi-Mytarelli (2013), processor pipelining takes two distinct forms: the instruction pipeline and the arithmetic pipeline. The instruction pipeline represents the stages through which instructions move inside the CPU, while the arithmetic pipeline represents the parts of an arithmetic operation that can be split up and overlapped as they are executed. Pipelining also applies to memory controllers and to moving data through the various memory staging areas. Supercomputers began incorporating pipelining in the 1970s, and it became one of the principal techniques of large-scale integration (LSI) circuit and chip design. By making additional bandwidth available from the cache, pipelining enables an order-of-magnitude improvement in execution. These continuing enhancements have brought tangible benefits, including a drop in computer prices that contributed to wider ownership of personal computers.
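The throughput benefit of overlapping stages can be sketched with a toy timing model (the cycle counts are idealized and do not describe any particular processor): a k-stage pipeline finishes n instructions in roughly k + (n - 1) cycles instead of n * k.

```python
def sequential_cycles(n_instructions, n_stages):
    # Without pipelining: each instruction passes through every stage
    # (fetch, decode, execute, ...) before the next one even starts.
    return n_instructions * n_stages

def pipelined_cycles(n_instructions, n_stages):
    # With pipelining: stages overlap, so once the first instruction has
    # filled the pipeline, one instruction completes per cycle.
    return n_stages + (n_instructions - 1)

# Toy example: 1000 instructions through an idealized 5-stage pipeline.
print(sequential_cycles(1000, 5))  # 5000
print(pipelined_cycles(1000, 5))   # 1004
```

Real pipelines fall short of this ideal because of hazards and stalls, but the model shows why overlapping instruction fetch with execution multiplies throughput.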
Cache memory, also known as processor memory, is a high-speed static random access memory (SRAM) that a microprocessor can access far more quickly than standard RAM. According to Rouse (2018), cache is typically integrated directly into the microprocessor chip or placed on a separate chip with its own bus interconnect to the central processing unit. Its fundamental purpose is to store the instructions and data used repeatedly during program execution, or data that is likely to be needed next. Cache memory has evolved continuously over the years, driven in large part by cost: early designs concentrated solely on the direct price of cache and RAM and on average execution speed, whereas recent designs also take energy efficiency and fault tolerance into account, among other objectives. Like pipelining, caching reduces the average time needed to access memory and speeds up information retrieval, yielding greater effective processing power and faster response times from software applications.
Virtual memory is a memory management capability of an operating system that combines hardware and software, allowing a computer to compensate for physical memory shortages by temporarily moving data from random access memory to disk storage (What is virtual memory, 2018). In the 1940s and 1950s, computer storage was in short supply because of its high cost; early computers had on the order of 128 kilobytes of RAM, and memory was one of the biggest constraints on system design. As the technology evolved, virtual memory became an integral part of computing, with a trend toward ever larger virtual address spaces, particularly with the move into the 64-bit range (What is virtual memory, 2018). In 1985, Intel added virtual memory support and caching to the 386 microprocessor, and Microsoft later provided multitasking in Windows 3.1. While pipelining and cache memory made the processor more effective, virtual memory offered a mechanism to use memory itself more efficiently by creating additional address space. Today, virtual memory is standard on all CPUs and maximizes both productivity and performance.
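The core mechanism behind virtual memory can be sketched as address translation through a page table (the page table contents and page size here are hypothetical; real systems use multi-level tables and hardware TLBs):

```python
PAGE_SIZE = 4096  # 4 KiB pages, a common choice

# Hypothetical page table: virtual page number -> physical frame number.
# Pages missing from the table are on disk, not in RAM.
page_table = {0: 5, 1: 2, 2: 9}

def translate(virtual_address):
    # Split the virtual address into a page number and an in-page offset.
    page = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    if page not in page_table:
        # Page fault: the OS would now bring the page in from disk.
        raise MemoryError("page fault: page %d not resident" % page)
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

print(translate(4100))  # virtual page 1, offset 4 -> frame 2 -> 8196
```

Because the mapping is per-page, the operating system can keep only the actively used pages in RAM and swap the rest to disk, which is exactly how virtual memory lets a program exceed the machine's physical memory.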
Technological advances have thus been widely felt over the last twenty-five years. RISC simplified the instruction set so that pipelining could process every task in an orderly, overlapped fashion. Cache memory compensated for some of the shortcomings of pipelining by allowing quick data retrieval, and virtual memory has let users exceed the physical limits of computer RAM by creating virtual space for additional information. Together, these evolutions have lowered cost, increased capacity and efficiency, and greatly enhanced overall system performance.
Nair, R. (2015). Evolution of Memory Architecture. Proceedings of the IEEE, 103(8), 1331–1345. doi: 10.1109/jproc.2015.2435018
Pantazi-Mytarelli, I. (2013). The history and use of pipelining computer architecture: MIPS pipelining implementation. 2013 IEEE Long Island Systems, Applications, and Technology Conference (LISAT). doi: 10.1109/lisat.2013.6578243
Rouse, M. (2018, May). Cache Memory. Retrieved from searchstorage.techtarget.com: https://searchstorage.techtarget.com/definition/cache-memory
What is virtual memory? (2018, October 11). Retrieved from www.cbronline.com: https://www.cbronline.com/what-is/what-is-virtual-memory-4929986