
Wednesday, March 5, 2008


RAID (Redundant Array of Inexpensive Disks)

RAID, short for Redundant Array of Inexpensive Disks (or Redundant Array of Independent Disks), is a data storage methodology that uses multiple hard drives to share or replicate data among the disks. RAID can increase a system's throughput, capacity and data integrity, and it makes it possible to combine multiple low-cost devices to achieve greater reliability, speed and capacity. RAID presents many hard drives to the system as a single unit and is widely used in servers.

The original RAID specification suggested a number of prototype "RAID levels", or combinations of disks, each with theoretical advantages and disadvantages. Over the years, different implementations of the RAID concept have appeared. Most differ substantially from the original idealized RAID levels, but the numbered names have remained. This can be confusing, since one implementation of RAID 5, for example, can differ substantially from another; RAID 3 and RAID 4 are often confused and even used interchangeably.
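The redundancy idea behind the parity-based RAID levels (3, 4 and 5) can be sketched in a few lines of Python: a parity block is the bitwise XOR of the data blocks, and any single lost block can be rebuilt by XOR-ing the survivors with the parity. This is a minimal illustration of the principle, not how a real controller is implemented.

```python
from functools import reduce

def parity(blocks):
    """XOR equal-length data blocks together to form the parity block."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

def reconstruct(surviving_blocks, parity_block):
    """Rebuild a single lost data block from the survivors and the parity."""
    return parity(surviving_blocks + [parity_block])

# Three "disks" worth of data plus one parity disk
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
p = parity([d0, d1, d2])

# Simulate losing d1 and recovering it from d0, d2 and the parity
recovered = reconstruct([d0, d2], p)
assert recovered == d1
```

Because XOR is its own inverse, the same `parity` routine serves both to generate the parity and to reconstruct a missing block.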

History of RAID

What are now parts of various RAID “levels” appeared in a 1978 US patent granted to Norman Ken Ouchi of IBM, titled “System for recovering data stored in failed memory unit”. The patent described techniques such as duplexing and dedicated parity protection that are part of various RAID implementations today.

However, RAID as a technology was first defined by computer scientists at the University of California, Berkeley in 1987, while they were analyzing the possibility of making multiple drives on a computer appear as a single drive. Then, in 1988, David A. Patterson, Garth A. Gibson and Randy H. Katz published a paper called “A Case for Redundant Arrays of Inexpensive Disks (RAID)”, which formally defined RAID levels 1 to 5.

Hardware and Software Implementations of RAID

RAID technology can be implemented in two ways, namely, hardware and software. Also, both implementations may be used together to form a hybrid variety of RAID implementation.

In a software implementation, the disks connected to the system are managed through the usual drive controllers (SATA, SCSI, etc.). Since modern CPUs are very fast, a software RAID implementation can be as fast as, or faster than, a hardware one; the major disadvantage is that CPU power is diverted to running the RAID logic. A hardware implementation, on the other hand, may use a battery-backed write-back cache that speeds up many applications, in which case the hardware implementation will be faster. It is for this reason that hardware RAID is seen as apt for database servers. A software implementation may also refuse to boot until the array is restored after a disk in the array fails completely.

A solution to the above problem is to have a preinstalled spare hard drive ready for use after a disk failure. The implementation, be it hardware or software, immediately switches to this drive as soon as a disk in the array fails; this technique is called a ‘hot spare’. When implementing RAID in hardware, it is necessary to use a RAID controller, which can be a PCI card or part of the system's motherboard. The controller manages the disks in the array and performs any calculations that may be required. Hardware implementations, thanks to their dedicated controllers, allow what is called ‘hot swapping’, where failed disks can be replaced while the system is running. Modern hybrid RAID implementations do not have a special-purpose controller and instead use the normal hard drive controller. The software part of such an implementation is activated by the user from within the BIOS and is operated by the BIOS from then onwards.

Hard Disk and Types of Hard Disk

Hard Disk

A hard disk (also called a hard disk drive, HDD or hard drive) is a storage device, typically used with a computer system, that stores data on magnetic surfaces layered onto hard disk platters. Hard disks are non-volatile in nature, which means that they do not lose the information they contain when the electrical supply (or power) is withdrawn or turned off.

History of Hard Disks

The IBM 350 Disk File, made by Reynold Johnson in 1955, was the first hard disk. A slow drive, it had a capacity of 5 million characters, stored on a stack of 24-inch platters and accessed by a single head assembly that moved from platter to platter.

It was not until 1961 that separate heads for each platter were used. The Winchester disk system, introduced by IBM in 1973 and now an industry standard, was the first to use a sealed head/disk assembly.

Early hard disks were very delicate, which prevented them from being used in industrial environments; they were also large, expensive and power-hungry. It was not until after 1980 that they were used with microcomputers, when Seagate made the first 5.25-inch drive, the ST-506, which could hold 5 MB of data.

Hard disks were usually sold by OEMs as part of a larger product rather than being sold separately by the manufacturer. This changed in the 1990s: by 1995, hard disks were available for separate retail sale. In spite of their popularity, hard disks were still not part of the original configuration of some computers, including some Apple Macs, as late as 1998, making it necessary to attach external storage.

In terms of capacity, there has been a huge increase since hard disks were first introduced. Starting off with as little as 5 MB of storage, with 20 MB once considered large (large enough, sometimes), today's hard disks are available in capacities ranging from 40 GB to 750 GB.

Basic working of a Hard Disk

A hard disk is made of round magnetic platters placed one over the other with some space between them. Each platter has a corresponding read/write head that accesses it. Each platter is divided into concentric circular paths called tracks, and each track is further divided into arcs called sectors. The actual data is stored in these sectors.

Each platter rotates at a certain speed with the help of a motor at the centre. The speed at which the platters rotate is measured in ‘rotations per minute’ (rpm) and ranges from 3,600 to 10,000. Each head is placed at the end of an ‘arm’ that brings it right above the respective platter; the arm is moved in a fast and precise manner by an actuator at its other end. The head typically ‘flies’ very close above the platter without actually touching it. When data is to be written (or read), the platters revolve and the head creates a magnetic field in the space between the platter and itself; it is through this field that data is read from or written to the hard disk. Since the platters are rotating, it is important to determine where the requested data is located in order to retrieve it as fast as possible.
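The cylinder/head/sector geometry described above maps to a linear block number by the classic CHS-to-LBA formula. A small sketch, with an illustrative geometry of 16 heads and 63 sectors per track (real drives vary):

```python
def chs_to_lba(cylinder, head, sector, heads_per_cyl, sectors_per_track):
    """Convert a cylinder/head/sector address to a linear block address.
    Sectors are conventionally numbered from 1, hence the (sector - 1)."""
    return (cylinder * heads_per_cyl + head) * sectors_per_track + (sector - 1)

# Illustrative geometry: 16 heads, 63 sectors per track
assert chs_to_lba(0, 0, 1, 16, 63) == 0      # the very first sector
assert chs_to_lba(0, 1, 1, 16, 63) == 63     # first sector under the next head
assert chs_to_lba(1, 0, 1, 16, 63) == 1008   # next cylinder: 16 heads * 63 sectors
```

The formula walks through all sectors of a track, then all tracks of a cylinder, before moving the arm to the next cylinder, which matches how the drive can read the most data without repositioning the heads.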

Hard Disk types

The type of hard disk refers to the interface or connection used by it to communicate with a computer. Some hard disk types in use today are as follows:

* IDE or ATA

IDE stands for Integrated Drive Electronics; the interface is also called Advanced Technology Attachment (ATA) or Parallel ATA. It is capable of accessing only one drive on the same channel at a time. If two drives are installed on a channel, one acts as the ‘master’ and the other as the ‘slave’.


* SCSI

Standing for Small Computer System Interface, SCSI has been around since the mid-1980s. Allowing cable lengths of up to 12 meters, it is typically used for hard disks, though it can also be used for other types of drives, such as optical drives.


* SATA

SATA, introduced in 2003, stands for Serial ATA. It features high data transfer rates and eliminates the master/slave architecture for installing more than one drive on the same system. It is very fast compared to its predecessor, Parallel ATA (PATA or simply ATA): first-generation SATA signals at 1.5 Gbit/s (about 150 MB/s of usable bandwidth), while the fastest parallel ATA modes reach 133 MB/s.
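The gap between the quoted line rate and the usable bandwidth comes from SATA's 8b/10b encoding, in which every 8 data bits are carried as 10 bits on the wire. The arithmetic as a quick sketch:

```python
line_rate_bits = 1.5e9          # first-generation SATA line rate: 1.5 Gbit/s
encoding_efficiency = 8 / 10    # 8b/10b encoding: 10 line bits carry 8 data bits

# Usable payload bandwidth in bytes per second
usable_bytes_per_sec = line_rate_bits * encoding_efficiency / 8

print(usable_bytes_per_sec / 1e6)  # → 150.0 (MB/s)
```

The same encoding overhead applies to the later 3 Gbit/s and 6 Gbit/s SATA generations, which work out to roughly 300 MB/s and 600 MB/s respectively.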

Hyper-Threading Technology

Hyper-Threading Technology allows a processor to run two threads in parallel, letting you and your software multitask more effectively than before. It provides a way of harnessing wasted computing power in the CPU to increase performance without the need for additional physical processors.

In a CPU, every clock cycle offers the ability to do one or more operations, but a single processor can only handle so much during an individual clock cycle. Hyper-Threading permits a single physical CPU to appear, to an operating system capable of SMT operation, as two processors. These logical processors can handle multiple threads in the same time slice in which a single physical processor would normally handle only one. There are some prerequisites for taking advantage of Hyper-Threading: you must have a Hyper-Threading-enabled processor, an HT-enabled chipset, BIOS and operating system, and the operating system must support multiple threads. Finally, the number and type of applications being used also affects the performance gain.
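From software's point of view, the logical processors simply look like extra CPUs, and taking advantage of them just means running more than one thread. A minimal Python sketch (whether the two threads actually land on two logical processors is entirely up to the OS scheduler):

```python
import os
import threading

# The OS reports logical processors; on an HT-enabled CPU this is
# typically twice the number of physical cores.
print(os.cpu_count())

results = {}

def worker(name, n):
    # Each thread does independent work; on an SMT-capable system the
    # scheduler may run both threads in the same time slice.
    results[name] = sum(range(n))

t1 = threading.Thread(target=worker, args=("a", 1000))
t2 = threading.Thread(target=worker, args=("b", 2000))
t1.start(); t2.start()
t1.join(); t2.join()

assert results["a"] == 499500
assert results["b"] == 1999000
```

The code is identical whether Hyper-Threading is present or not, which is exactly the point: the feature is transparent to applications that are already multi-threaded.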

Hyper-Threading is a hardware feature that helps use some of the wasted power of a CPU, but it also helps the operating system and applications run more efficiently, so they can do more at once. There are millions of transistors inside a CPU that turn on and off to process commands. By adding more transistors, chipmakers typically add more brute-force computing power, but more transistors mean a larger CPU that produces more heat. Hyper-Threading seeks to increase performance without significantly increasing the number of transistors on the chip, keeping the CPU footprint smaller and producing less heat.

Hyper-Threading offers two logical processors in one physical package. Each logical processor must share external resources such as memory and hard disk, and must use the same physical processor for computations. Because of this shared nature, the performance boost does not scale as it would on a true multiprocessor architecture: system performance falls somewhere between that of a single CPU without Hyper-Threading and a multi-processor system with two comparable CPUs.

Some applications are already multi-threaded and will automatically benefit from this technology. Multi-threaded applications take full advantage of the increased performance that Hyper-Threading Technology offers, so users see immediate gains when multitasking; it also improves responsiveness and increases the number of users a server can support. Today's multi-processing software is also compatible with Hyper-Threading-enabled platforms, but further performance gains can be realized by specifically tuning software for Hyper-Threading Technology. For future software optimizations and business growth, this technology complements traditional multi-processing by providing additional headroom.



Reduced Instruction Set Computer (RISC)

The Reduced Instruction Set Computer (RISC) is a microprocessor CPU design philosophy that favours a simpler set of instructions, all of which take roughly the same amount of time to execute. Some RISC microprocessors are the AVR, PIC, DEC Alpha, SPARC, MIPS and IBM's PowerPC.

Why was RISC created?

Traditional CPUs had many features that facilitated easier coding but were frequently left unused. The main problem was that these features took a long time to execute.

Earlier, when compiler technology was primitive, programming was done at the hardware level using machine code or assembly. This led to the creation of complex instructions that directly represented high-level functions of the high-level languages. It was considered easier to program at the hardware level than to write a compiler, and this led to complexity in the CPUs.

Memory was limited and slow at that time; a whole system did not have more than a few kilobytes. This raised the need to keep information in computer programs at a high density, which also reduced access time.

CPUs had only a few registers, because internal CPU register bits were expensive, and working with memory instead required more instructions, reducing speed. This made it necessary for CPU makers and designers to build instructions that could each do the maximum possible work.

The above led to what is now called Complex Instruction Set Computer (CISC) philosophy.

RISC came on the horizon in the 1970s, when certain things were discovered. One of these was the paradox that a particular complex operation could run slower than a sequence of smaller operations achieving the same functionality, because CPU designers, limited by delivery schedules, optimized only the most-used operations. Also, newer CPUs were far faster than the memory they worked with, which meant that as CPUs became faster, more registers would be needed to keep them busy at such higher operating frequencies.

Under RISC, instead of using a single overly complex instruction, a series of simple instructions is used to achieve the same thing. Such simple instructions left more space on the chip and reduced the need to store data in registers, as it could now be carried along with the instructions; this also made the memory interface simpler.
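The decomposition of one complex instruction into simple ones can be sketched with a toy machine in Python. The instruction names (`addm`, `load`, `add`, `store`) and the dict-based "memory" are illustrative, not any real instruction set:

```python
memory = {"x": 5, "y": 7}
registers = {}

# "CISC-style": one complex memory-to-memory instruction
def addm(dst, src):
    memory[dst] = memory[dst] + memory[src]

# "RISC-style": only simple load/store and register-to-register steps
def load(reg, addr):   registers[reg] = memory[addr]
def add(rd, ra, rb):   registers[rd] = registers[ra] + registers[rb]
def store(addr, reg):  memory[addr] = registers[reg]

addm("x", "y")          # x = 5 + 7 = 12, done in one complex instruction

load("r1", "x")         # the same operation, decomposed into four
load("r2", "y")         # simple instructions that each touch memory
add("r1", "r1", "r2")   # or registers, but never both at once
store("x", "r1")        # x = 12 + 7 = 19

assert memory["x"] == 19
```

Each simple step is trivial to implement in hardware and takes a predictable amount of time, which is what makes pipelining (discussed below) effective on RISC designs.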

RISC allows CPU designers to:

* increase the size of the register set
* increase internal parallelism
* increase the size of caches (introduced later)
* add more functions

Development Cycle of RISC

The first actual RISC-like machine was the CDC 6600 supercomputer, designed in 1964 by Jim Thornton and Seymour Cray. The CDC 6600 sported a load-store architecture and ten parallel functional units for arithmetic and logic.

The Data General Nova minicomputer, designed in 1968, was another machine based on a load-store architecture.

UC Berkeley's RISC project, initiated in 1980 and led by David Patterson, used pipelining to gain higher performance. It also used a technique called register windows, which exposed only a small set of the CPU's many registers at any one time; this translated into faster procedure calls and higher performance.

The RISC-I processor came out of this project in 1982. It had only 44,420 transistors, in contrast to contemporary CISC designs averaging around 100,000, and it had only 32 instructions; surprisingly, it outperformed every other single-chip design of the time. RISC-II, comprising 40,760 transistors and 39 instructions, came out in 1983 and was over three times faster than RISC-I.

IBM started developing a chip-based RISC CPU in 1975; this led in 1981 to ROMP, the Research (Office Products Division) Micro Processor. When it was released in the RT PC in 1986, it proved to be a failure, largely because it was not a good performer.

Sun Microsystems developed SPARC with the help of the RISC-II design, showing that the advantages of RISC were real. IBM, together with Apple and Motorola, developed the PowerPC, which became one of the most widely used RISC chips.

RISC chips are used in almost all kinds of machines; a good example is the car, where ten or more such chips may be used. The desktop PC scenario, however, is dominated by Intel's x86 architecture.


Complex Instruction Set Computer (CISC)

A Complex Instruction Set Computer (CISC) is a microprocessor instruction set architecture in which each instruction can execute several low-level operations, such as an arithmetic operation combined with a memory store or retrieval. The term was coined to differentiate such designs from the Reduced Instruction Set Computer (RISC).

Development of CISC

There was no official start to the development of CISC; it was more of an industry trend that prevailed before RISC processors were designed. Compiler technology was primitive at the time, which pushed developers to design instruction sets that could directly support high-level languages and bridge the semantic gap. They created complex addressing modes and clubbed several operations together to form single instructions. This clubbing resulted in small program sizes and reduced the number of calls to main memory, which produced tremendous savings in the cost of a computer in the 1960s.

Problems in the CISC

There were certain problems that were later discovered that finally led to the development of RISC. These were:

* CISC allowed high-level languages to express their constructs in a few instructions, and this was achieved easily. The problem was that it did not always translate into better performance: on some processors it was possible to increase performance by using a series of simple instructions instead of a single complex one.
* Complex instructions also meant greater execution time. The overhead resulting from the complexity of an instruction took up more and more time and silicon; as a result, even simple instructions were slow to execute due to the presence of the complex ones.
* The CISC architecture used many transistors on the chip, leaving little or no space for other performance-improving techniques.
* Another implication of CISC was that it took chip makers a lot of time and effort to develop.

Examples of CISC processors include the System/360, the VAX, the PDP-11 and the Motorola 68000 range of processors.

Differences between CISC and RISC

The differences between CISC and RISC architectures are as follows:

* While CISC put the emphasis on hardware in order to reduce the complexity visible to high-level languages, the RISC architecture concentrated on software and kept the hardware as simple as possible.
* The CISC architecture uses multi-clock complex instructions to do its processing, while the RISC architecture uses single-clock, reduced instructions.
* The CISC architecture used a memory-to-memory way of working: operations like LOAD from memory and STORE to memory were folded together with other operations to form a single complex instruction. RISC, on the other hand, used register-to-register operations, with LOAD and STORE as independent instructions.
* The CISC architecture achieved small program sizes by clubbing many operations into one instruction, at the cost of a high number of cycles per instruction. RISC resulted in larger programs but fewer cycles per instruction.
* The CISC architecture spent its transistors on implementing complex instructions, while the RISC architecture used its transistors for registers.



Microprograms

A microprogram is a special type of computer program that implements a CPU instruction set. Each high-level language command is compiled to a series of machine language instructions, and each of those instructions is in turn implemented by a set of many microinstructions, which together form a microprogram. Microcode is the common term for a microprogram. Microcode lives in a special high-speed memory and is typically written by the CPU engineers while designing the processor. Microcode allows one computer microarchitecture to emulate another, often more complicated, architecture.

The main aim of a microprogram is the fastest possible execution, since a slow microprogram means slow machine instructions, which translate into slow programs. Writing a microprogram requires in-depth knowledge of low-level hardware and computer circuitry. Microprograms may be stored in ROM or RAM; the memory that holds them is called a control store.
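The relationship between machine instructions, micro-operations and the control store can be sketched with a toy accumulator machine in Python. The register names (`ACC`, `MDR`) and instruction set are illustrative inventions, not any real microarchitecture:

```python
memory = {0: 5, 1: 7}
regs = {"ACC": 0, "MDR": 0}

# Micro-operations: the primitive register-transfer steps the hardware can do
def mem_to_mdr(addr):  regs["MDR"] = memory[addr]
def mdr_to_acc():      regs["ACC"] = regs["MDR"]
def add_mdr_to_acc():  regs["ACC"] += regs["MDR"]
def acc_to_mem(addr):  memory[addr] = regs["ACC"]

# The "control store": each machine instruction maps to its microprogram,
# a fixed sequence of micro-operations
control_store = {
    "LOAD":  lambda addr: (mem_to_mdr(addr), mdr_to_acc()),
    "ADD":   lambda addr: (mem_to_mdr(addr), add_mdr_to_acc()),
    "STORE": lambda addr: (acc_to_mem(addr),),
}

def execute(instr, addr):
    # Decoding an instruction means looking up and running its microprogram
    control_store[instr](addr)

# ACC = mem[0] + mem[1]; result stored back to mem[0]
execute("LOAD", 0)
execute("ADD", 1)
execute("STORE", 0)
assert memory[0] == 12
```

Changing what an instruction does only requires editing its entry in the control store, which mirrors the debugging advantage of microcode over hard-wired logic discussed below.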

History of Microprograms

Control stores were first introduced in 1947 in the design of the Whirlwind computer, to simplify computer design and move beyond ad hoc methods. The control store proposed there was a two-dimensional lattice, rather like a player piano roll: it controlled a sequence of wide words constructed of bits, played one by one.

Maurice Wilkes, in 1951, enhanced the concept of the control store as given in the Whirlwind proposal by adding conditional execution, similar to a conditional in computer software. It was Wilkes who coined the term microprogramming for this feature of a processor.

Why use microprograms

Before the advent of microcode, all instruction sets for a CPU were hard-wired: each instruction was implemented in circuitry rather than in software form. This kind of implementation led to fast performance, but as instruction sets grew more complex, it became impractical to hard-wire all of them, let alone debug them.

Microcode removed this problem. It made it possible for CPU design engineers to write a microprogram to implement a machine instruction instead of doing the cumbersome job of designing circuitry for it. Microcode was easy to change and debug at any stage of the design process without having to change the hardware design itself. This had a direct impact on productivity, as the time taken to design a new processor was reduced significantly.

Microprogramming also helped with the problem of memory bandwidth. During the 1970s, CPU speeds grew more quickly than memory speeds, so much so that CPUs outran their memories. Numerous acceleration techniques, such as memory block transfer and memory pre-fetch, were developed to reduce the gap; but it was the high-level machine instructions made possible by microcode that changed the way things were going.

The IBM System/360 and DEC VAX families used complex microprograms. The IBM System/38 and AS/400 took the concept even further.

The best thing about microprogramming was probably that it reduced the cost of correcting defects: fixing a bug required replacing a part of the microcode rather than rewiring the hardware.
